Search results for: gravitational search algorithm
3664 A Location-Allocation-Routing Model for a Home Health Care Supply Chain Problem
Authors: Amir Mohammad Fathollahi Fard, Mostafa Hajiaghaei-Keshteli, Mohammad Mahdi Paydar
Abstract:
With increasing life expectancy in developed countries, the role of home care services is highlighted by both academic and industrial contributors to Home Health Care Supply Chain (HHCSC) companies. The main decisions in such supply chain systems are the location of pharmacies, the allocation of patients to these pharmacies, and the routing and scheduling of the nurses who visit their patients. In this study, for the first time, an integrated model is proposed that covers all the preliminary and necessary decisions in these companies, namely a location-allocation-routing model. The model is NP-hard; therefore, an Imperialist Competitive Algorithm (ICA) is utilized to solve it, especially for large instances. Results confirm the efficiency of the developed model for HHCSC companies as well as the performance of the employed ICA.
Keywords: home health care supply chain, location-allocation-routing problem, imperialist competitive algorithm, optimization
Procedia PDF Downloads 398
3663 Linear Semi Active Controller of Magneto-Rheological Damper for Seismic Vibration Attenuation
Authors: Zizouni Khaled, Fali Leyla, Sadek Younes, Bousserhane Ismail Khalil
Abstract:
In structural vibration caused principally by earthquake excitation, the attenuation system most widely used in recent years is semi-active control with a magneto-rheological (MR) damper device. This type of control has been the subject of much research in recent years. The main challenge for researchers in this area is to propose an adequate controller with a robust current- or voltage-adjustment algorithm. In this paper, a linear controller is proposed to drive the MR damper in order to reduce the vibrations of a three-story structure exposed to the El Centro 1940 and Boumerdès 2003 earthquakes. In this example, the MR damper is installed in the first floor of the structure. The numerical simulation results of the proposed linear control with a feedback law based on the clipped-optimal algorithm show the feasibility of semi-active control for protecting civil structures. The comparison of the controlled and uncontrolled structure responses clearly illustrates the performance and effectiveness of the simple proposed approach.
Keywords: MR damper, seismic vibration, semi-active control
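To make the control law concrete, the sketch below shows one common form of the clipped-optimal switching rule used with MR dampers: the command voltage is set to its maximum only when the measured damper force must grow toward the desired control force, and to zero otherwise. The force values, the 5 V maximum voltage and the function names are illustrative assumptions, not details taken from the paper.

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max=5.0):
    """One common clipped-optimal command law for an MR damper:
    v = v_max * H[(f_desired - f_measured) * f_measured],
    i.e. energize the damper only when the measured force is smaller than,
    and acts in the same direction as, the desired control force."""
    return v_max if (f_desired - f_measured) * f_measured > 0 else 0.0

# Illustrative values (N): desired force from the feedback law vs. measured damper force
print(clipped_optimal_voltage(f_desired=850.0, f_measured=400.0))   # 5.0 -> energize
print(clipped_optimal_voltage(f_desired=-200.0, f_measured=300.0))  # 0.0 -> passive
```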
Procedia PDF Downloads 285
3662 Systematic and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and Impact of Machine Learning and AI in Management
Authors: Shohreh Ghasemi
Abstract:
Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impact. To meet this challenge, navigation techniques have been introduced to improve surgical precision. A machine learning algorithm was also developed to support clinical decision-making in the treatment of oral and maxillofacial trauma. Given these advances, this systematic review and meta-analysis aims to assess the efficacy of navigational techniques in treating oral and maxillofacial trauma and to explore the impact of machine learning on its management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted through a systematic meta-analysis. This included a thorough search of the Web of Science, Embase, and PubMed databases to identify studies evaluating the efficacy of navigational techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established entry criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk of bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. Analysis of the studies revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk of complications. Conclusion: The introduction of navigational technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, the development of machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes. Still, further studies are necessary to corroborate these results and establish the optimal use of these technologies in managing oral and maxillofacial trauma.
Keywords: trauma, machine learning, navigation, maxillofacial, management
Procedia PDF Downloads 58
3661 A Subband BSS Structure with Reduced Complexity and Fast Convergence
Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin
Abstract:
A blind source separation (BSS) method is proposed that uses a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to address two problems: reducing the computational complexity and increasing the convergence speed of the adaptive algorithm for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the input signal at full bandwidth, which can promote better convergence rates.
Keywords: blind source separation, computational complexity, subband, convergence speed, mixture
Procedia PDF Downloads 580
3660 Bitplanes Image Encryption/Decryption Using Edge Map (SSPCE Method) and Arnold Transform
Authors: Ali A. Ukasha
Abstract:
Data security is needed in data transmission, storage, and communication. The single step parallel contour extraction (SSPCE) method is used to create the edge map, used as a key image, from a grey-level/binary image. An X-OR operation is performed between the key image and each bit plane of the original image in order to change the image pixel values. The Arnold transform is then used to change the locations of the image pixels as an image scrambling step. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D grey-level image, which can then be completely reconstructed without any distortion. It is also shown that the analysed algorithm has very high security against attacks such as salt & pepper noise and JPEG compression, proving that a grey-level image can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression, salt and peppers attacks, bitplanes decomposition, Arnold transform, lossless image encryption
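For readers unfamiliar with the two building blocks, the following minimal sketch combines bit-plane X-OR with a binary key image and Arnold-transform scrambling on a square image. The key here is random; in the paper the key image would come from the SSPCE edge map, and the image size, iteration count and variable names are assumptions for illustration.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold cat map scrambling of a square (N x N) image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def encrypt_bitplanes(image, edge_key, iterations=3):
    """X-OR every bit plane of an 8-bit image with a binary key image,
    then scramble the pixel positions with the Arnold transform."""
    key = (edge_key > 0).astype(np.uint8)
    xored = np.zeros_like(image)
    for b in range(8):                      # the 8 bit planes of a grey-level image
        plane = (image >> b) & 1
        xored |= (plane ^ key) << b
    return arnold_transform(xored, iterations)

# Example usage on random data; an SSPCE edge map would serve as the key image.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
key = np.random.randint(0, 2, (64, 64), dtype=np.uint8)
encrypted = encrypt_bitplanes(img, key)
```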
Procedia PDF Downloads 501
3659 Investigation of Soil Slopes Stability
Authors: Nima Farshidfar, Navid Daryasafar
Abstract:
In this paper, the seismic stability of reinforced soil slopes is studied using pseudo-dynamic analysis. Equilibrium equations applicable to every kind of failure surface are written using the Horizontal Slices Method. In these equations, the balance of vertical and horizontal forces and moment equilibrium are fully satisfied. The failure surface is assumed to be log-spiral, and the non-linear equilibrium equations obtained for the system are solved using the Newton-Raphson method. Earthquake effects are applied to the problem as horizontal and vertical pseudo-static coefficients. To solve this problem, a code was developed in MATLAB, and the critical failure surface is found using a genetic algorithm. Finally, the results obtained in this paper are compared, and the effects of various parameters and of using pseudo-dynamic analysis to model seismic forces are presented.
Keywords: soil slopes, pseudo-dynamic, genetic algorithm, optimization, limit equilibrium method, log-spiral failure surface
Procedia PDF Downloads 339
3658 Multilabel Classification with Neural Network Ensemble Method
Authors: Sezin Ekşioğlu
Abstract:
Multilabel classification is of great importance for several applications and is also a challenging research topic. It is a kind of supervised learning with binary targets; the difference between multilabel and binary classification is that multilabel problems contain more than one class, and an instance can belong to one class or to many classes. There exists a wide range of applications for multilabel prediction, such as image labeling, text categorization, and gene functionality. Even though instances are classified into many classes, they may not always be properly classified. There are many ensemble methods for classification; however, most researchers have been concerned with better multilabel methods in general, and few focus on both the efficiency of the classifiers and the pairwise relationships at the same time in order to implement better multilabel classification. In this paper, we work on modified ensemble methods that benefit from k-Nearest Neighbors and a neural network structure to address these issues in a beneficial way and to obtain better results from multilabel classification. Publicly available datasets (yeast, emotion, scene and birds) are used to demonstrate the efficiency of the developed algorithm, and the technique is evaluated by accuracy, F1 score and Hamming loss metrics. Our algorithm improves on the benchmarks for each dataset on different metrics.
Keywords: multilabel, classification, neural network, KNN
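As an illustration of the general idea (not the authors' exact method), the sketch below averages per-label probabilities from a k-Nearest Neighbors classifier and a small neural network on a synthetic multilabel problem, then scores the combination with the same metrics named above. The dataset, model sizes and the 0.5 decision threshold are assumptions.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score, hamming_loss

# Synthetic stand-in for datasets such as yeast, emotions, scene or birds
X, Y = make_multilabel_classification(n_samples=600, n_features=40,
                                      n_classes=6, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, Y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, Y_tr)

# Positive-class probability per label from each base learner
p_knn = np.column_stack([p[:, -1] for p in knn.predict_proba(X_te)])
p_mlp = mlp.predict_proba(X_te)

Y_hat = ((p_knn + p_mlp) / 2 >= 0.5).astype(int)   # simple probability averaging

print("subset accuracy:", accuracy_score(Y_te, Y_hat))
print("micro-F1:", f1_score(Y_te, Y_hat, average="micro"))
print("hamming loss:", hamming_loss(Y_te, Y_hat))
```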
Procedia PDF Downloads 155
3657 Using Artificial Vision Techniques for Dust Detection on Photovoltaic Panels
Authors: Gustavo Funes, Eduardo Peters, Jose Delpiano
Abstract:
It is widely known that photovoltaic technology has been massively deployed over the last decade despite its low efficiency ratio. Dust deposition reduces this efficiency even further, lowering the energy production and the module lifespan. In this work, we developed an artificial vision algorithm based on the CIELAB color space to identify dust over panels in an autonomous way. We performed several experiments photographing three different types of panels: 30 W, 340 W and 410 W. Those panels were soiled artificially with uniformly and non-uniformly distributed dust. The proposed algorithm uses statistical tools to build a simulation of a 100% soiled panel and then performs a comparison to obtain the percentage of dirt in the experimental data set. The simulation uses a seed obtained by taking a dust sample from the region with the maximum amount of dust in the dataset. The final result is the dirt percentage and the possible distribution of dust over the panel. Dust deposition is a key factor for plant owners when determining cleaning cycles or identifying non-uniform depositions that could lead to module failure and hot spots.
Keywords: dust detection, photovoltaic, artificial vision, soiling
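A minimal sketch of the underlying colour-space idea is given below: the panel photograph is converted to CIELAB and pixels whose colour deviates strongly from a clean-panel reference are counted as dust. The reference colour, the Delta-E threshold, the file name and the use of scikit-image are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np
from skimage import io, color

def dust_percentage(panel_rgb, clean_lab=None, delta_e_thresh=12.0):
    """Estimate soiling as the fraction of pixels whose CIELAB colour is far
    (CIE76 Delta-E) from a reference 'clean panel' colour."""
    lab = color.rgb2lab(panel_rgb)
    if clean_lab is None:
        # assume the median colour of the photograph represents clean glass
        clean_lab = np.median(lab.reshape(-1, 3), axis=0)
    delta_e = np.linalg.norm(lab - clean_lab, axis=-1)
    dust_mask = delta_e > delta_e_thresh
    return 100.0 * dust_mask.mean(), dust_mask

img = io.imread("panel_photo.jpg")            # hypothetical file name
pct, mask = dust_percentage(img[..., :3])     # drop any alpha channel
print(f"estimated soiled area: {pct:.1f}%")
```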
Procedia PDF Downloads 50
3656 Flood-Prone Urban Area Mapping Using Machine Learning, a Case Study of M'sila City (Algeria)
Authors: Medjadj Tarek, Ghribi Hayet
Abstract:
This study aims to develop a flood sensitivity assessment tool using machine learning (ML) techniques and a geographic information system (GIS). The importance of this study lies in integrating geographic information systems and machine learning techniques for mapping flood risks, which helps decision-makers identify the most vulnerable areas and take the necessary precautions to face this type of natural disaster. To reach this goal, we study the case of the city of M'sila, which is among the areas most vulnerable to floods. A map of flood-prone areas was drawn following a methodology in which three machine learning algorithms were compared: the XGBoost model, the Random Forest algorithm and the K-Nearest Neighbour algorithm, which achieved accuracies of 97.92%, 95% and 93.75%, respectively. In the process of mapping flood-prone areas, the first model (XGBoost) was relied upon, as it gave the greatest accuracy.
Keywords: geographic information systems (GIS), machine learning (ML), emergency mapping, flood disaster management
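The comparison step can be reproduced in outline as follows: train the three classifiers on the same conditioning factors and compare their held-out accuracy before choosing the mapping model. The synthetic features, the train/test split and the hyperparameters are placeholders, and the `xgboost` package is assumed to be installed.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier  # requires the xgboost package

# Synthetic stand-in for GIS conditioning factors (slope, elevation,
# distance to river, land use, ...) with a binary flooded / not-flooded label.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

models = {
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.4f}")

# The best-scoring model is then applied cell by cell to the GIS raster
# to produce the flood-susceptibility map.
```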
Procedia PDF Downloads 95
3655 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization
Authors: Ju-Hong Lee, Ding-Chen Chung
Abstract:
The paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic-based optimization technique. Based on the theory of two-channel QMF banks using two recursive digital all-pass filters (DAFs), the design problem is appropriately formulated to result in an objective function which is a weighted sum of the group delay error of the designed QMF bank and the magnitude response error of the designed low-pass analysis filter. Through frequency sampling and a weighted least-squares approach, the optimization problem of the objective function can be solved by utilizing a particle swarm optimization algorithm. The resulting two-channel QMF banks can possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.
Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization
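The optimizer itself is generic; a minimal global-best particle swarm sketch is shown below with a toy objective standing in for the weighted sum of group-delay and magnitude-response errors evaluated on a frequency grid. The swarm size, coefficients and bounds are illustrative assumptions.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Plain global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy stand-in for the weighted group-delay / magnitude-response error
def toy_objective(coeffs):
    return float(np.sum((coeffs - np.linspace(0.1, 0.5, coeffs.size)) ** 2))

best, best_val = pso_minimize(toy_objective, dim=6)
print(best, best_val)
```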
Procedia PDF Downloads 523
3654 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell-membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shapes when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including a clear lining and shape of the membrane showing the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
Procedia PDF Downloads 205
3653 Supervised-Component-Based Generalised Linear Regression with Multiple Explanatory Blocks: THEME-SCGLR
Authors: Bry X., Trottier C., Mortier F., Cornu G., Verron T.
Abstract:
We address component-based regularization of a Multivariate Generalized Linear Model (MGLM). A set of random responses Y is assumed to depend, through a GLM, on a set X of explanatory variables, as well as on a set T of additional covariates. X is partitioned into R conceptually homogeneous blocks X1, ..., XR, viewed as explanatory themes. The variables in each Xr are assumed to be many and redundant; thus, Generalised Linear Regression (GLR) demands regularization with respect to each Xr. By contrast, the variables in T are assumed to be selected so as to demand no regularization. Regularization is performed by searching each Xr for an appropriate number of orthogonal components that both contribute to modelling Y and capture relevant structural information in Xr. We propose a very general criterion to measure the structural relevance (SR) of a component in a block, and show how to take SR into account within a Fisher-scoring-type algorithm in order to estimate the model. We also show how to deal with mixed-type explanatory variables. The method, named THEME-SCGLR, is tested on simulated data.
Keywords: Component-Model, Fisher Scoring Algorithm, GLM, PLS Regression, SCGLR, SEER, THEME
Procedia PDF Downloads 397
3652 Dynamic Background Updating for Lightweight Moving Object Detection
Authors: Kelemewerk Destalem, Joongjae Cho, Jaeseong Lee, Ju H. Park, Joonhyuk Yoo
Abstract:
Background subtraction and temporal differencing are often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly sensitive to dynamic background and illumination changes, the temporal difference approach is poor at extracting the relevant pixels of the moving object and at detecting stopped or slowly moving objects in the scene. In this paper, we propose a moving object detection scheme based on adaptive background subtraction and temporal differencing that exploits dynamic background updates. The proposed technique consists of histogram equalization and a linear combination of background and temporal differences, followed by novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm can overcome the drawbacks of both the background subtraction and temporal difference methods and can provide better performance than either method alone.
Keywords: background subtraction, background updating, real time, lightweight algorithm, temporal difference
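A compact sketch of this kind of pipeline is shown below: histogram equalization, a weighted blend of the background difference and the frame-to-frame difference, thresholding with a morphological opening, and a selective background update. The learning rate, blend weight, threshold and OpenCV usage are illustrative assumptions rather than the paper's exact parameters.

```python
import cv2
import numpy as np

ALPHA = 0.02    # background learning rate (assumed)
LAMBDA = 0.5    # weight of background difference vs. temporal difference (assumed)
THRESH = 30     # binarization threshold (assumed)

def detect_moving_objects(frames):
    """Combine adaptive background subtraction with temporal differencing."""
    background, prev = None, None
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    for frame in frames:
        gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if background is None:
            background, prev = gray.astype(np.float32), gray
            continue
        bg_diff = cv2.absdiff(gray, background.astype(np.uint8))
        tmp_diff = cv2.absdiff(gray, prev)
        combined = cv2.addWeighted(bg_diff, LAMBDA, tmp_diff, 1 - LAMBDA, 0)
        mask = cv2.morphologyEx((combined > THRESH).astype(np.uint8) * 255,
                                cv2.MORPH_OPEN, kernel)
        # pixel-based update: only pixels classified as background are blended in
        background = np.where(mask == 0,
                              (1 - ALPHA) * background + ALPHA * gray,
                              background)
        prev = gray
        yield mask
```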
Procedia PDF Downloads 344
3651 Experimental Analysis of Control in Electric Vehicle Charging Station Based Grid Tied Photovoltaic-Battery System
Authors: A. Hassoune, M. Khafallah, A. Mesbahi, T. Bouragba
Abstract:
This work presents an improved control strategy for charging a lithium-ion battery in an electric vehicle charging station using two charger topologies, i.e. a single-ended primary inductor converter (SEPIC) and a forward converter. In terms of rapidity and accuracy, the power system consists of a topology/control combination designed to overcome performance constraints such as power instability and battery overloading, and to make the energy conversion blocks react efficiently to any kind of perturbation. Simulation results show the effectiveness of the proposed topologies operated with a power management algorithm based on voltage/peak-current mode controls. In order to provide credible findings, a low-power prototype was developed to test the control strategy via experimental evaluation of the converter topology and its controls.
Keywords: battery storage buffer, charging station, electric vehicle, experimental analysis, management algorithm, switches control
Procedia PDF Downloads 166
3650 Modeling Sediment Transports under Extreme Storm Situation along Persian Gulf North Coast
Authors: Majid Samiee Zenoozian
Abstract:
The Persian Gulf is a bordering sea with an average depth of 35 m and a maximum depth of 100 m near its narrow entrance. Its elongated bathymetric axis separates two main geological provinces, the stable Arabian Foreland and the unstable Iranian Fold Belt, which are reflected in the contrasting coastal and bathymetric morphologies of Arabia and Iran. The sediments were sampled from 72 offshore stations during an oceanographic cruise in the winter of 2018. During the observation period, several storm and river discharge events occurred, as well as the largest flood on record since 1982. Suspended-sediment concentration at all three sites varied in response to both wave resuspension and advection of river-derived sediments. We used hydrological models to estimate and compare the wave height and inundation distance required to transport the rocks inland. Our results establish that no known or possible storm occurring on the Makran coast is capable of detaching and transporting the boulders. The fluid mud is consequently transported seaward due to gravitational forcing. The measured sediment concentration and velocity profiles on the shelf provide strong evidence to support this assumption. The sediment model is coupled with a 3D hydrodynamic module in the Environmental Fluid Dynamics Code (EFDC) model that provides data on estuarine circulation and salinity transport under normal temperature conditions. The 3D sediment transport simulations indicate dynamic sediment resuspension and transport near zones of highly productive oyster beds.
Keywords: sediment transport, storm, coast, fluid dynamics
Procedia PDF Downloads 116
3649 Effects of Zinc and Vitamin A Supplementation on Prognostic Markers and Treatment Outcomes of Adults with Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis
Authors: Fasil Wagnew, Kefyalew Addis Alene, Setegn Eshetie, Tom Wingfield, Matthew Kelly, Darren Gray
Abstract:
Introduction: Undernutrition is a major and under-appreciated risk factor for TB, which is estimated to be responsible for 1.9 million TB cases per year globally. The effectiveness of micronutrient supplementation on TB treatment outcomes and its prognostic markers, such as sputum conversion and serum zinc, retinol, and hemoglobin levels, has been poorly understood. This systematic review and meta-analysis aimed to determine the association between zinc and vitamin A supplementation and TB treatment outcomes and its prognostic markers. Methods: A systematic literature search for randomized controlled trials (RCTs) was performed in the PubMed, Embase, and Scopus databases. Meta-analysis with a random effects model was performed to estimate the risk ratio (RR) and mean difference (MD), with a 95% confidence interval (CI), for dichotomous and continuous outcomes, respectively. Results: Our search identified 2,195 records. Of these, nine RCTs consisting of 1,375 participants were included in the final analyses. Among adults with pulmonary TB, zinc (RR: 0.94, 95% CI: 0.86, 1.03), vitamin A (RR: 0.90, 95% CI: 0.80, 1.01), and combined zinc and vitamin A (RR: 0.98, 95% CI: 0.89, 1.08) supplementation were not significantly associated with TB treatment success. Combined zinc and vitamin A supplementation was significantly associated with increased sputum smear conversion at 2 months (RR: 1.16, 95% CI: 1.03, 1.32), serum zinc levels at 2 months (MD: 0.86 umol/l, 95% CI: 0.14, 1.57), serum retinol levels at 2 months (MD: 0.06 umol/l, 95% CI: 0.04, 0.08) and 6 months (MD: 0.12 umol/l, 95% CI: 0.10, 0.14), and serum hemoglobin levels at 6 months (MD: 0.29 ug/dl, 95% CI: 0.08, 0.51), among adults with TB. Conclusions: Providing zinc and vitamin A supplementation to adults with pulmonary TB during treatment may increase early sputum smear conversion and serum zinc, retinol, and hemoglobin levels. However, the use of zinc, vitamin A, or both was not associated with TB treatment success.
Keywords: zinc and vitamin A supplementation, tuberculosis, treatment outcomes, meta-analysis, RCT
Procedia PDF Downloads 174
3648 Design and Development of Fleet Management System for Multi-Agent Autonomous Surface Vessel
Authors: Zulkifli Zainal Abidin, Ahmad Shahril Mohd Ghani
Abstract:
Agent-based systems technology has been addressed as a new paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated systems that act autonomously across open and distributed environments to solve problems. Nevertheless, it is impractical to rely on a single agent to do all the computing in solving complex problems, and an increasing number of applications now require multiple agents to work together. A multi-agent system (MAS) is a loosely coupled network of agents that interact to solve problems that are beyond the individual capacities or knowledge of each problem solver. However, a network of MAS agents still requires a main system to govern or oversee the operation of the agents in order to achieve a unified goal. We have developed a fleet management system (FMS) in order to manage the fleet of agents, plan routes for the agents, perform real-time data processing and analysis, and issue sets of general and specific instructions to the agents. This FMS should be able to perform real-time data processing, communicate with the autonomous surface vehicle (ASV) agents and generate a bathymetric map according to the data received from each ASV unit. The first algorithm developed communicates with the ASVs via radio using standard National Marine Electronics Association (NMEA) protocol sentences. The second algorithm takes care of path planning, formation and pattern generation, and is tested using various sample data. Lastly, the bathymetric map generation algorithm makes use of the data collected by the agents to create a bathymetric map in real time. The outcome of this research is expected to be applicable to various other multi-agent systems.
Keywords: autonomous surface vehicle, fleet management system, multi agent system, bathymetry
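For the radio link, a minimal sketch of NMEA 0183 handling is given below: validate the XOR checksum of an incoming sentence and pull the position fields from a GGA report. The sentence shown is a textbook example and the field names are assumptions; the FMS described above would parse the specific sentences its ASVs emit.

```python
import operator
from functools import reduce

def nmea_checksum_ok(sentence: str) -> bool:
    """Validate the XOR checksum of an NMEA 0183 sentence ($...*hh)."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = reduce(operator.xor, (ord(c) for c in body), 0)
    return given[:2].upper() == f"{calc:02X}"

def parse_gga(sentence: str) -> dict:
    """Extract the position fields from a GGA sentence reported by an ASV."""
    fields = sentence.split(",")
    return {"time_utc": fields[1],
            "lat": fields[2], "lat_hem": fields[3],
            "lon": fields[4], "lon_hem": fields[5],
            "fix_quality": fields[6]}

msg = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(nmea_checksum_ok(msg), parse_gga(msg))
```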
Procedia PDF Downloads 273
3647 Estimation of the Temperatures in an Asynchronous Machine Using Extended Kalman Filter
Authors: Yi Huang, Clemens Guehmann
Abstract:
In order to monitor the thermal behavior of an asynchronous machine with a squirrel-cage rotor, a 9th-order extended Kalman filter (EKF) algorithm is implemented to estimate the temperatures of the stator windings, the rotor cage and the stator core. The state-space equations of the EKF are established based on the electrical, mechanical and simplified thermal models of an asynchronous machine. The asynchronous machine with the simplified thermal model in Dymola is compiled as a DymolaBlock, a physical model in MATLAB/Simulink. The coolant air temperature and the three-phase voltages and currents are exported from the physical model and processed by the EKF estimator as inputs. Compared to the temperatures exported from the physical model of the machine, the three sets of temperatures can be estimated quite accurately by the EKF estimator. The online EKF estimator is independent of the machine control algorithm and can work under any speed and load condition as long as the stator current system is nonzero.
Keywords: asynchronous machine, extended Kalman filter, resistance, simulation, temperature estimation, thermal model
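The estimator structure itself is the standard EKF predict/update recursion; a generic sketch is given below, followed by a one-state toy demo of a first-order cooling model. The 9th-order state vector, Jacobians and noise covariances of the actual machine model are not reproduced here; all numbers in the demo are assumptions.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Generic discrete-time EKF; in the application above the state would hold
    the stator-winding, rotor-cage and stator-core temperatures (plus electrical
    states) and the measurements the phase currents, voltages and coolant data."""
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, F):
        self.x = f(self.x)                    # nonlinear state transition
        self.P = F @ self.P @ F.T + self.Q    # F: Jacobian of f at the current x

    def update(self, z, h, H):
        y = z - h(self.x)                     # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# One-state toy demo: first-order cooling toward a 25 degC coolant temperature.
dt, tau = 1.0, 500.0
f = lambda x: x + dt / tau * (25.0 - x)
F = np.array([[1.0 - dt / tau]])
h = lambda x: x
H = np.array([[1.0]])
ekf = ExtendedKalmanFilter(np.array([80.0]), np.eye(1), 1e-4 * np.eye(1), 0.5 * np.eye(1))
ekf.predict(f, F)
ekf.update(np.array([78.6]), h, H)
print(ekf.x)
```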
Procedia PDF Downloads 285
3646 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of platelet size and is routinely measured by most automated hematology analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done in the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included in the study. Studies were included if: (1) the CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all the identified potential studies for inclusion. Eligible studies were appraised using well-defined criteria, and any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference between the mean MPV values of those with MI and those of the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI: 0.59 – 0.73) and 0.60 (95% CI: 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI: 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI: 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI: 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those without. For a patient with angina presenting with an elevated MPV value, it is 1.65 times more likely that he has MI. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
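The relationship between the pooled estimates reported above can be verified in a few lines: the likelihood ratios and the diagnostic odds ratio follow directly from the summary sensitivity and specificity. The helper function below is purely illustrative.

```python
def diagnostic_summary(sensitivity: float, specificity: float) -> dict:
    """Derive likelihood ratios and the diagnostic odds ratio (DOR)
    from pooled sensitivity and specificity."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return {"LR+": lr_pos, "LR-": lr_neg, "DOR": lr_pos / lr_neg}

# Pooled estimates reported above: Se = 0.66, Sp = 0.60
print(diagnostic_summary(0.66, 0.60))
# LR+ ~ 1.65, LR- ~ 0.57, DOR ~ 2.91, consistent with the reported 1.65, 0.56 and 2.92
```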
Procedia PDF Downloads 87
3645 Trauma in the Unconsoled: A Crisis of the Self
Authors: Assil Ghariri
Abstract:
This article studies the process of rewriting the self through memory in Kazuo Ishiguro's novel The Unconsoled (1995). It deals with the journey that the protagonist, Mr. Ryder, takes through the unconscious in search of his real self, a journey in which trauma stands as an obstacle. The article uses Carl Jung's theory of archetypes. Trauma, in this article, is discussed as one of the true obstacles of the unconscious that prevent people from realizing the truth about themselves.
Keywords: Carl Jung, Kazuo Ishiguro, memory, trauma
Procedia PDF Downloads 404
3644 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes in order to determine an optimum penalty factor, considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 in steps of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from the clinical data. The qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) gave approximately the same noise level as OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both the phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, owing to the greater reduction of noise than of SUV at the highest β-value. When comparing BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM in all qualitative features. Conclusions: The BSREM algorithm using more iterations leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
Procedia PDF Downloads 98
3643 Building Scalable and Accurate Hybrid Kernel Mapping Recommender
Authors: Hina Iqbal, Mustansar Ali Ghazanfar, Sandor Szedmak
Abstract:
Recommender systems use artificial intelligence practices to filter obscure information and can predict whether a user likes a specified item. Kernel Mapping Recommender (KMR) systems have been proposed that are accurate, state-of-the-art algorithms and resolve recommender system design objectives such as the long tail, cold start, and sparsity. The aim of this research is to propose a hybrid framework that can efficiently integrate different versions of the KMR algorithm, namely item-based and user-based KMR. We have proposed various heuristic algorithms that integrate the different versions of KMR into a unified framework, resulting in improved accuracy and elimination of the problems associated with conventional recommender systems. We have tested our system on a publicly available movies dataset and benchmarked it against KMR. The results (in terms of accuracy, precision, recall, F1 measure and ROC metrics) reveal that the proposed algorithm is quite accurate, especially under cold-start and sparse scenarios.
Keywords: Kernel Mapping Recommender Systems, hybrid recommender systems, cold start, sparsity, long tail
Procedia PDF Downloads 341
3642 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), have been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords 'ChatGPT' AND 'medical education' OR 'medical school' OR 'medical licensing exam' were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
Procedia PDF Downloads 34
3641 Sexual Health in the Over Forty-Fives: A Cross-Europe Project
Authors: Tess Hartland, Moitree Banerjee, Sue Churchill, Antonina Pereira, Ian Tyndall, Ruth Lowry
Abstract:
Background: Sexual health services and policies for middle-aged and older adults are underdeveloped, while global sexually transmitted infections in this age group are on the rise. The Interreg cross-Europe Sexual Health In Over 45s (SHIFT) project aims to increase participation in sexual health services and improve sexual health and wellbeing in people aged over 45, with an additional focus on disadvantaged groups. Methods: A two-pronged mixed methodology is being used to develop a model of good service provision in sexual health for over 45s. (1) Following PRISMA-ScR guidelines, a scoping review is being conducted using the databases PsycINFO, Web of Science, ERIC and PubMed, with a key search strategy built on terms around sexual health, good practice, over 45s and disadvantaged groups; the initial search for literature yielded 7,914 results. (2) Surveys (n=1000) based on the Theory of Planned Behaviour are being administered across the UK, Belgium and the Netherlands to explore current sexual health knowledge, awareness and attitudes. Expected results: It is expected that sexual health needs and potential gaps in service provision will be identified in order to inform good practice for sexual health services for the target population. The results of the scoping review are being analysed, while focus group and survey data are being gathered. Preliminary analysis of the survey data highlights barriers to access such as limited risk awareness and stigma. All data analysis will be completed by the time of the conference. Discussion: Findings will inform the development of a model to improve sexual health and wellbeing among over 45s, a population which is often missed in sexual health policy improvement.
Keywords: adult health, disease prevention, health promotion, over 45s, sexual health
Procedia PDF Downloads 131
3640 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. The MITHSDB heart sound dataset obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database was used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heart beat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training from the time series provided. The results prove that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g. in remote monitoring applications of the PCG signal.
Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification
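To illustrate the beat-to-image classification idea described above, the sketch below defines a small Keras CNN that takes 64x64 intensity matrices (one per segmented heart beat) and outputs a normal/abnormal probability. The input size, network depth, placeholder data and hyperparameters are assumptions and do not reproduce the authors' architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data: each segmented heart beat is assumed to have been converted
# into a 64x64 intensity matrix; labels are 0 = normal, 1 = abnormal.
x_train = np.random.rand(256, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, 256)

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=32, verbose=0)
print(model.predict(x_train[:1]))   # probability that the beat is abnormal
```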
Procedia PDF Downloads 349
3639 A Novel Combined Finger Counting and Finite State Machine Technique for ASL Translation Using Kinect
Authors: Rania Ahmed Kadry Abdel Gawad Birry, Mohamed El-Habrouk
Abstract:
This paper presents a brief survey of the techniques used for sign language recognition along with the types of sensors used to perform the task. It then presents a modified method for the identification of an isolated sign language gesture using Microsoft Kinect with the OpenNI framework. The paper describes a way of extracting robust features from the depth image provided by Microsoft Kinect and the OpenNI interface, and of using them to create a robust and accurate gesture recognition system for the purpose of ASL translation. Prime Sense's Natural Interaction Technology for End-user (NITE) was also used in the C++ implementation of the system. The approach combines a simple finger counting algorithm for static signs with a directional Finite State Machine (FSM) description of the hand motion in order to help translate a sign language gesture. This covers both letters and numbers performed by a user, which in turn may be used as input for voice pronunciation systems.
Keywords: American sign language, finger counting, hand tracking, Microsoft Kinect
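The directional-FSM part of the idea can be sketched as follows: hand displacements reported by the Kinect tracker are quantized into coarse directions and fed to a small state machine that accepts one dynamic sign. The "down then left" gesture, the dead zone and the coordinate values are hypothetical and serve only to show the mechanism.

```python
from enum import Enum

class Direction(Enum):
    LEFT, RIGHT, UP, DOWN, STILL = range(5)

def quantize(dx, dy, dead_zone=15):
    """Map a hand displacement (pixels, from Kinect hand tracking) to a coarse direction."""
    if max(abs(dx), abs(dy)) < dead_zone:
        return Direction.STILL
    if abs(dx) >= abs(dy):
        return Direction.RIGHT if dx > 0 else Direction.LEFT
    return Direction.DOWN if dy > 0 else Direction.UP

# Hypothetical FSM for a dynamic sign traced as "move down, then left".
TRANSITIONS = {("start", Direction.DOWN): "went_down",
               ("went_down", Direction.DOWN): "went_down",
               ("went_down", Direction.LEFT): "accepted",
               ("accepted", Direction.LEFT): "accepted"}

def recognize(path):
    """Run the hand-position path (list of (x, y) points) through the FSM."""
    state = "start"
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d = quantize(x1 - x0, y1 - y0)
        if d is Direction.STILL:
            continue
        state = TRANSITIONS.get((state, d), state)
    return state == "accepted"

print(recognize([(200, 100), (200, 160), (200, 220), (150, 220), (100, 220)]))  # True
```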
Procedia PDF Downloads 298
3638 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus), and 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in the process of finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach has sufficient accuracy to be applied to the change detection of facilities.
Keywords: building, image matching, temperature, unmanned aerial vehicle
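A minimal OpenCV sketch of the matching-plus-RANSAC step is shown below. The file names are hypothetical, SURF requires an opencv-contrib build (ORB is used as a drop-in when it is unavailable), and the homography registration here is a simplification of the bundle adjustment described in the paper.

```python
import cv2
import numpy as np

img_ref = cv2.imread("uav_epoch1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img_new = cv2.imread("uav_epoch2.jpg", cv2.IMREAD_GRAYSCALE)

try:
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # opencv-contrib only
    norm = cv2.NORM_L2
except AttributeError:
    detector = cv2.ORB_create(nfeatures=4000)                     # fallback detector
    norm = cv2.NORM_HAMMING

kp1, des1 = detector.detectAndCompute(img_ref, None)
kp2, des2 = detector.detectAndCompute(img_new, None)

matches = cv2.BFMatcher(norm, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:500]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatched points while estimating the mapping between epochs
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
registered = cv2.warpPerspective(img_new, H, img_ref.shape[::-1])
```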
Procedia PDF Downloads 293
3637 Optimal Design of Tuned Inerter Damper-Based System for the Control of Wind-Induced Vibration in Tall Buildings through Cultural Algorithm
Authors: Luis Lara-Valencia, Mateo Ramirez-Acevedo, Daniel Caicedo, Jose Brito, Yosef Farbiarz
Abstract:
Controlling wind-induced vibrations as well as aerodynamic forces is an essential part of the structural design of tall buildings in order to guarantee the serviceability limit state of the structure. This paper presents a numerical investigation of the optimal design parameters of a Tuned Inerter Damper (TID) based system for the control of wind-induced vibration in tall buildings. The control system is based on the conventional TID, with the main difference that its location is changed from the ground level to the last two story levels of the structural system. The TID tuning procedure is based on an evolutionary cultural algorithm in which the optimum design variables, defined as the frequency and damping ratios, are searched according to the optimization criterion of minimizing the root mean square (RMS) response of displacements at the nth story of the structure. A Monte Carlo simulation was used to represent the dynamic action of the wind in the time domain, in which a time series derived from the Davenport spectrum using eleven harmonic functions with randomly chosen phase angles was reproduced. The above-mentioned methodology was applied to a case study derived from a 37-story prestressed concrete building of 144 m height, in which the wind action governs over the seismic action. The results showed that the optimally tuned TID is effective in reducing the RMS response of displacements by up to 25%, which demonstrates the feasibility of the system for the control of wind-induced vibrations in tall buildings.
Keywords: evolutionary cultural algorithm, Monte Carlo simulation, tuned inerter damper, wind-induced vibrations
Procedia PDF Downloads 135
3636 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization problems in water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through conventional optimization methods, or doing so is time- or money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and well-founded utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural and reservoir-related data, and the geometric characteristics of the reservoir. The Dez dam water resources system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the plan. As a metaheuristic method, a genetic algorithm was applied in order to derive utilization rule curves (intersecting the reservoir volume), and MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm; then the significance of using rule curves and the reduction of decision variables in the system was assessed through system simulation and by comparing the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the increasing number of variables: a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern approach in order to reduce the number of variables. The water reservoir programming study was performed based on the basic information, general hypotheses and standards, applying a monthly simulation technique over a statistical period of 30 years. The results indicated that application of the rule curves prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
Procedia PDF Downloads 267
3635 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense and military applications, disaster-hit areas and so on. A Wireless Sensor Network consists of a Base Station (BS) and a number of wireless sensors that monitor temperature, pressure and motion under different environmental conditions. The key parameter in designing a protocol for Wireless Sensor Networks is energy efficiency, because energy is the scarcest resource of the sensor nodes and determines their lifetime. Maximizing the sensor nodes' lifetime is therefore an important issue in the design of applications and protocols for Wireless Sensor Networks, and clustering the sensor nodes is an effective topology-control approach for helping to achieve this goal. This paper presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower the energy consumption and to improve the lifetime of Wireless Sensor Networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the Wireless Sensor Network by choosing the farthest cluster head (CH) instead of the closest CH and forming the clusters by considering parameter metrics such as node density, residual energy and the distance between clusters (inter-cluster distance). Comparisons between the proposed protocol and reference protocols in different scenarios have been carried out, and the simulation results showed that the proposed protocol performs well compared to the other protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
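A toy sketch of the cluster-head selection idea is given below: each node is scored on residual energy, local density and distance from the base station (favouring the farther nodes, as proposed above), and the top-scoring nodes become cluster heads for the round. The weights, radii and node data are invented for illustration and are not taken from the paper.

```python
import math
import random

def choose_cluster_heads(nodes, base_station, n_clusters=5,
                         w_energy=0.5, w_density=0.3, w_distance=0.2,
                         neighbor_radius=30.0):
    """Score every node on residual energy, local node density and distance
    from the base station, then pick the top-scoring nodes as cluster heads."""
    def dist(a, b):
        return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

    scored = []
    for n in nodes:
        density = sum(1 for m in nodes if m is not n
                      and dist(n, m) < neighbor_radius)
        score = (w_energy * n["energy"]
                 + w_density * density / len(nodes)
                 + w_distance * dist(n, base_station) / 100.0)  # favour far nodes
        scored.append((score, n))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [n for _, n in scored[:n_clusters]]

random.seed(1)
nodes = [{"id": i, "x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0)} for i in range(50)]
heads = choose_cluster_heads(nodes, base_station={"x": 50.0, "y": 150.0})
print([h["id"] for h in heads])
```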
Procedia PDF Downloads 148