Search results for: binary classifier
216 Microstructure, Mechanical, Electrical and Thermal Properties of the Al-Si-Ni Ternary Alloy
Authors: Aynur Aker, Hasan Kaya
Abstract:
In recent years, the use of aluminum-based alloys in industry and technology has been increasing. Alloying elements further improve the strength and stiffness of aluminum, giving it properties superior to other metals. In this study, the physical properties (microstructure, microhardness, tensile strength, electrical conductivity and thermal properties) of the Al-12.6wt.%Si-2wt.%Ni ternary alloy were investigated. The Al-Si-Ni alloy was prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upwards at different growth rates (V) under a constant temperature gradient G (7.73 K/mm). The microstructures (flake spacings, λ), microhardness (HV), ultimate tensile strength, electrical resistivity and thermal properties (enthalpy of fusion, specific heat and melting temperature) of the samples were measured. The influence of the growth rate and flake spacings on microhardness, ultimate tensile strength and electrical resistivity was investigated, and the relationships between them were obtained experimentally by regression analysis. According to the results, λ values decrease with increasing V, whereas microhardness, ultimate tensile strength and electrical resistivity values increase with increasing V. Variations of the electrical resistivity of cast samples with temperature in the range of 300-1200 K were also measured using a standard dc four-point probe technique. The enthalpy of fusion and specific heat of the same alloy were also determined by differential scanning calorimetry (DSC) from the heating trace during the transformation from liquid to solid. The results obtained in this work were compared with previous experimental results for similar binary and ternary alloys.
Keywords: electrical resistivity, enthalpy, microhardness, solidification, tensile stress
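As a rough illustration of the regression step mentioned above, flake spacing in directional solidification is commonly related to growth rate by a power law of the form λ = k·V⁻ᵃ, which becomes linear on a log-log scale. The sketch below fits such a relationship with NumPy; the assumed power-law form and all numeric values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Sketch of the regression step, assuming the usual power-law form
# lambda = k * V**(-a) for flake spacing vs. growth rate. The numbers
# below are illustrative placeholders, not the study's measurements.
V = np.array([8.3, 16.5, 41.6, 83.0, 166.0])      # growth rate (um/s)
lam = np.array([9.1, 7.2, 5.0, 3.9, 3.1])         # flake spacing (um)

# Power laws are linear on a log-log scale, so fit with a degree-1 polyfit.
slope, intercept = np.polyfit(np.log(V), np.log(lam), 1)
print(f"lambda ~= {np.exp(intercept):.2f} * V^{slope:.2f}")
```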
Procedia PDF Downloads 376
215 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates the prediction of the remaining life of cutting tools used in industrial production with deep learning methods. As the life of a cutting tool decreases, it damages the raw material it is processing. The study therefore aims to predict the remaining life of the cutting tool from the damage the tool causes to the raw material. For this, hole photos were collected from a hole-drilling machine for 8 months. The photos were labeled in 5 classes according to hole quality, transforming the problem into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the model using convolutional neural networks gave the most successful results, with a 74% accuracy rate. In preliminary studies, the data set was reduced to only the best and worst classes, and the study gave ~93% accuracy when a binary classification model was applied. The results showed that the remaining life of cutting tools can be predicted by deep learning methods from the damage to the raw material. The experiments prove that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
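For readers unfamiliar with the model family used here, the sketch below shows a minimal convolutional network for the five hole-quality classes, written in Keras. The architecture and the 128x128 input size are assumptions for illustration, not the authors' exact network.

```python
import tensorflow as tf

# Minimal 5-class CNN for hole-quality photos; layer sizes and the
# 128x128 input are assumptions, not the paper's network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 hole-quality classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For the preliminary binary (best vs. worst class) experiment, the final layer would have 2 units instead of 5.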
Procedia PDF Downloads 76
214 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are currently employed in prospective epidemiologic research on risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model time-to-event data in which censored cases exist, whereas the logistic regression model applies where the independent variables may be numerical or nominal and the outcome variable is binary (dichotomous). Many researchers have given overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is performed on secondary data, the SPSS exercise data on breast cancer, with a sample size of 1121 women; the main objective is to show the difference in application between the Cox regression model and the logistic regression model using factors that cause women to die of breast cancer. Some analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyze the rest of the data. This study found an application difference between the Cox and logistic regression models: the Cox regression model is used when one wishes to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
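The application difference described above can be made concrete in code: the Cox model consumes a follow-up time plus an event indicator and reports hazard ratios, while the logistic model consumes only the binary outcome and reports odds ratios. A minimal sketch with the lifelines and statsmodels packages follows; the data frame is hypothetical and merely stands in for the SPSS breast cancer data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

# Hypothetical stand-in for the breast cancer data: follow-up time in
# months, a death indicator, and lymph node status.
df = pd.DataFrame({
    "time":  [12, 34, 7, 45, 23, 19, 30, 15],
    "event": [1, 0, 1, 0, 1, 0, 1, 0],
    "nodes": [3, 2, 8, 4, 5, 1, 9, 6],
})

# Cox model: uses the follow-up time and reports hazard ratios.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)

# Logistic model: ignores follow-up time and reports odds ratios.
logit = sm.Logit(df["event"], sm.add_constant(df[["nodes"]])).fit(disp=0)
print(np.exp(logit.params))
```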
Procedia PDF Downloads 454
213 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
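When full verification is available, the unbiased nonparametric VUS estimator mentioned above is simply the proportion of correctly ordered triples taken across the three classes. A minimal sketch (with simulated, fully verified test scores, not the paper's data) follows; the bias-corrected estimators in the paper extend this idea to partially verified data.

```python
import itertools
import numpy as np

def vus(x1, x2, x3):
    """Nonparametric VUS estimate: the proportion of ordered triples with
    x1 < x2 < x3, i.e., an estimate of P(X1 < X2 < X3) for three classes."""
    hits = sum(a < b < c for a, b, c in itertools.product(x1, x2, x3))
    return hits / (len(x1) * len(x2) * len(x3))

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)        # simulated test scores
intermediate = rng.normal(1.0, 1.0, 50)
diseased = rng.normal(2.0, 1.0, 50)
print(vus(healthy, intermediate, diseased))   # chance level is 1/6 ~= 0.167
```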
Procedia PDF Downloads 416
212 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The frequency ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the analytic hierarchy process (AHP) and the weighted linear combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Using the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the area under the curve (AUC) and sinkhole density index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy, and the SDI result further supports the success of the susceptibility model. The model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
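The weighted linear combination step reduces to a weighted sum of co-registered factor rasters. A minimal NumPy sketch follows; the factor names and AHP weights are placeholders, not the study's derived values.

```python
import numpy as np

# WLC step on a common grid, assuming each factor raster has already been
# reclassified to its frequency-ratio class weights. Factor names and AHP
# weights are placeholders, not the study's derived values.
ahp_weights = {"slope": 0.22, "depth_to_bedrock": 0.18,
               "distance_to_faults": 0.15, "drainage_density": 0.45}  # sums to 1

rng = np.random.default_rng(0)
factor_rasters = {name: rng.random((100, 100)) for name in ahp_weights}

# Sinkhole susceptibility index: weighted sum of the factor rasters.
ssi = sum(w * factor_rasters[name] for name, w in ahp_weights.items())

# Jenks natural breaks would then split `ssi` into the five zones
# (very low, low, moderate, high, very high).
print(ssi.min(), ssi.max())
```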
Procedia PDF Downloads 74
211 Pattern Recognition Approach Based on Metabolite Profiling Using In vitro Cancer Cell Line
Authors: Amanina Iymia Jeffree, Reena Thriumani, Mohammad Iqbal Omar, Ammar Zakaria, Yumi Zuhanis Has-Yun Hashim, Ali Yeon Md Shakaff
Abstract:
Metabolite profiling is a strategy approached within pattern recognition, here focused on the three types of cancer that cause the most deaths: lung, breast, and colon cancer. The purpose of this study was to discriminate the VOC patterns of cancerous and control groups based on metabolite profiling. Sampling was performed using the cell culture technique: all culture flasks were incubated for up to 72 hours, and data collection started after 24 hours. Each sample run took 24 minutes to complete. Comparative metabolite patterns were identified by headspace solid-phase micro-extraction (HS-SPME) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The main experimental variables, such as oven temperature and time, were optimized by response surface methodology (RSM) to obtain the optimal conditions. Volatiles were identified through the National Institute of Standards and Technology (NIST) mass spectral database and retention time libraries. To improve reliability and eliminate background noise, data from the 3rd to the 17th minute were selected for statistical analysis. Targeted metabolites, annotated as known compounds with a peak area greater than 0.5 percent, were highlighted and subsequently treated statistically. Because the volatiles produced comprise hundreds to thousands of compounds, the data were reduced by chemometric analysis, namely principal component analysis (PCA), as a preliminary step before being subjected to a pattern classifier for identification of the VOC samples. The volatile organic compound profiles significantly distinguished the cancerous and control groups based on metabolite profiling.
Keywords: in vitro cancer cell line, metabolite profiling, pattern recognition, volatile organic compounds
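As a sketch of the preliminary chemometric step, the code below autoscales a (samples × compounds) peak-area matrix and projects it onto its first two principal components with scikit-learn; the matrix is randomly generated here and merely stands in for the GC-MS data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in peak-area matrix: rows = headspace samples (cancerous and
# control cultures), columns = annotated volatile compounds.
rng = np.random.default_rng(1)
X = rng.lognormal(size=(24, 40))

# Autoscale each compound, then project onto the first two principal
# components as a preliminary check of group separation.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)   # (24, 2): one point per sample for a scores plot
```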
Procedia PDF Downloads 365
210 Assessment of the Work-Related Stress and Associated Factors among Sanitation Workers in Public Hospitals during COVID-19, Addis Ababa, Ethiopia
Authors: Zerubabel Mihret
Abstract:
Background: Work-related stress is a pattern of reactions to work demands unmatched by the worker’s knowledge, skills, or abilities. Healthcare institutions are considered high-risk, intensive work areas for work-related stress. However, there are no clear and strong data on the magnitude of work-related stress among sanitation workers in hospitals in Ethiopia. The aim of this study was to determine the magnitude of work-related stress among sanitation workers in public hospitals during COVID-19 in Addis Ababa, Ethiopia. Methods: An institution-based cross-sectional study was conducted from October 2021 to February 2022 among 494 sanitation workers selected from 4 hospitals. The HSE (Health and Safety Executive of the UK) standard tool was administered as an interviewer-led questionnaire, and the data were collected using the KOBO Collect application. The collected data were cleaned and analyzed using SPSS version 20.0. Both binary and multivariable logistic regression analyses were done to identify factors associated with work-related stress. Variables with a p-value ≤ 0.25 in the bivariate analysis were entered into the multivariable logistic regression model. Statistical significance was declared at a p-value ≤ 0.05. Results: This study revealed that the magnitude of work-related stress among sanitation workers was 49.2% (95% CI 45-54). A significant proportion (72.7%) of sanitation workers were dissatisfied with their current job. Sex, age, experience, and chewing khat were significantly associated with work-related stress. Conclusion: Work-related stress is significantly high among sanitation workers. Sex, age, experience, and chewing khat were identified as associated factors. Intervention programs focusing on the prevention and control of stress are needed in hospitals.
Keywords: work-related stress, sanitation workers, Likert scale, public hospitals, Ethiopia
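The two-stage modelling strategy (bivariate screening at p ≤ 0.25, then a multivariable model judged at p ≤ 0.05) can be sketched with statsmodels as below. The variables and data are simulated placeholders, not the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated placeholder data: a binary stress outcome and three candidate
# factors (the real survey has many more).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.integers(20, 60, 200),
    "sex": rng.integers(0, 2, 200),
    "chews_khat": rng.integers(0, 2, 200),
})
df["stress"] = (rng.random(200) < 0.3 + 0.3 * df["chews_khat"]).astype(int)

# Stage 1: bivariate screen, keeping candidates with p <= 0.25.
candidates = [v for v in ["age", "sex", "chews_khat"]
              if sm.Logit(df["stress"], sm.add_constant(df[[v]]))
                   .fit(disp=0).pvalues[v] <= 0.25]

# Stage 2: multivariable model; significance declared at p <= 0.05.
if candidates:
    final = sm.Logit(df["stress"], sm.add_constant(df[candidates])).fit(disp=0)
    print(final.summary())
```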
Procedia PDF Downloads 83
209 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
A brain-computer interface (BCI) system converts a person's movement intentions into commands for action using brain signals such as the electroencephalogram. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI control signals. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on electroencephalography (EEG) are greatly needed. Subject dependency and non-stationarity are two characteristics of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then a method of analysis of variance is used to select the more appropriate and informative channels from the large number available. After ordering channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification precision of 97% compared to the other approaches. The overall findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses the existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine
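A minimal sketch of the channel selection pipeline, assuming one feature per channel, is shown below with scikit-learn: channels are ranked by their ANOVA F-score, and a sequential forward selector wrapped around an SVM picks a small reliable subset. The array shapes and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector, f_classif
from sklearn.svm import SVC

# Stand-in trial matrix: rows = motor imagery trials, one feature per
# channel (e.g., band power); y = left vs. right hand. Shapes are assumed.
rng = np.random.default_rng(3)
X, y = rng.normal(size=(120, 60)), rng.integers(0, 2, 120)

# ANOVA step: rank channels by their F-score across the two classes.
F, _ = f_classif(X, y)
order = np.argsort(F)[::-1]

# Sequential forward selection over the top-ranked channels, wrapped
# around an SVM, with ten-fold cross-validation as in the paper.
sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=8,
                                direction="forward", cv=10)
sfs.fit(X[:, order[:20]], y)
print(np.flatnonzero(sfs.get_support()))   # retained channel indices
```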
Procedia PDF Downloads 50
208 A Mega-Analysis of the Predictive Power of Initial Contact within Minimal Social Network
Authors: Cathal Ffrench, Ryan Barrett, Mike Quayle
Abstract:
It is accepted in social psychology that categorization leads to ingroup favoritism, often without further thought given to the processes that may co-occur with or even precede categorization. These categorizations move away from the conceptualization of the self as a unique social being toward an increasingly collective identity; subsequently, many individuals derive much of their self-evaluation from these collective identities. The seminal literature on this topic argues that it is primarily categorization that evokes instances of ingroup favoritism. In contrast to these theories, we argue that categorization acts to enhance and further intergroup processes rather than defining them. More precisely, we propose that categorization aids initial ingroup contact, and this first contact is predictive of subsequent favoritism at individual and collective levels. This analysis focuses on studies based on the Virtual Interaction APPLication (VIAPPL), a software interface that addresses the flaws of the original minimal group studies. The VIAPPL allows the exchange of tokens within and between groups, and this token exchange is how we classified first contact. The study uses binary longitudinal analysis to better understand individuals' subsequent exchanges based on whom they first interacted with. Studies were selected on the criteria of explicit first interactions and two-group designs. Our findings paint a compelling picture in support of a motivated-contact hypothesis, which suggests that an individual’s first motivated contact toward another has strong predictive power for future behavior. This contact can lead to habit formation and specific favoritism toward individuals with whom contact has been established. This has important implications for understanding how group conflict occurs and how individual intra-group bias can develop.
Keywords: categorization, group dynamics, initial contact, minimal social networks, momentary contact
Procedia PDF Downloads 148
207 Determinants of Rural Household Effective Demand for Biogas Technology in Southern Ethiopia
Authors: Mesfin Nigussie
Abstract:
The objectives of the study were to identify factors affecting rural households’ willingness to install a biogas plant and the amount they are willing to pay, in order to examine the determinants of effective demand for biogas technology. A multistage sampling technique was employed to select 120 respondents. A binary probit regression model was employed to identify factors affecting rural households’ decision to install biogas technology. The probit model results revealed that household size, total household income, access to biogas-related extension services, access to credit services, proximity to water sources, households' perception of biogas quality, a perception index of biogas attributes, perception of the installation cost of biogas, and availability of other energy sources were statistically significant in determining the household's decision to install biogas. A tobit model was employed to examine the determinants of the amount rural households are willing to pay. Based on the model results, the age of the household head, total annual household income, access to extension services, and availability of other energy sources significantly influence willingness to pay. Giving due consideration to extension services, making credit or subsidies available, improving the quality of the biogas technology design, and minimizing installation cost by using locally available materials are the main suggestions of this research to help create effective demand for biogas technology.
Keywords: biogas technology, effective demand, probit model, tobit model, willingness to pay
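The installation-decision stage can be sketched with a binary probit in statsmodels, as below; statsmodels has no built-in tobit, so the willingness-to-pay stage is only noted in a comment. All variables and values here are hypothetical stand-ins for the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical household records standing in for the 120 survey responses.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "installs_biogas": rng.integers(0, 2, 120),   # 1 = willing to install
    "household_size": rng.integers(2, 10, 120),
    "income": rng.normal(20.0, 5.0, 120),         # annual income, thousands
    "credit_access": rng.integers(0, 2, 120),
})

# Binary probit for the installation decision. (The willingness-to-pay
# amount would then be modeled with a tobit, which statsmodels does not
# provide out of the box.)
X = sm.add_constant(df[["household_size", "income", "credit_access"]])
probit = sm.Probit(df["installs_biogas"], X).fit(disp=0)
print(probit.summary())
```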
Procedia PDF Downloads 140
206 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the direction of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, and a traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
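A PoCE-style KPI can be computed by checking, instance by instance, whether an explainer reproduces a reference ordering of feature importance. The sketch below does this for SHAP on a plain logistic model whose coefficient ordering serves as the reference; this stand-in model, the synthetic data, and the choice of reference are illustrative assumptions, not the paper's multilevel procedure.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in model with a known intrinsic feature ordering (its coefficients);
# PoCE here = share of instances whose SHAP ranking matches that reference.
X, y = make_classification(n_samples=500, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

reference_order = np.argsort(-np.abs(model.coef_[0]))
shap_values = shap.LinearExplainer(model, X).shap_values(X)
shap_order = np.argsort(-np.abs(shap_values), axis=1)

poce = np.mean([(row == reference_order).all() for row in shap_order])
print(f"PoCE: {poce:.2f}")
```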
Procedia PDF Downloads 93
205 The Determination of the Phosphorous Solubility in the Iron by the Function of the Other Components
Authors: Andras Dezső, Peter Baumli, George Kaptay
Abstract:
Phosphorus is an important component in steels because it changes the mechanical properties and can modify the structure. Phosphorus can form the Fe3P compound, which segregates at the ferrite grain boundaries on the nano- or microscale. This intermetallic compound decreases the mechanical properties; for example, it causes blue brittleness, the embrittlement produced by the segregated particles at 200-300°C. This work describes phosphide solubility as a function of the other components. We performed calculations for the effects of Ni, Mo, Cu, S, V, C, Si, Mn, and Cr with the Thermo-Calc software and described the effects by approximate functions. The binary Fe-P system has a solubility line with the determining equation

ln w0 = -3.439 - 1.903/T

where w0 is the maximum dissolved phosphorus concentration in weight percent and T is the temperature in Kelvin. The equation shows that P becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium affect the maximum dissolved concentration; these functions depend more strongly on the concentrations of those elements, which are lower in the steels considered. Copper, sulphur, and carbon have no effect on phosphorus solubility. We predict that in all cases the maximum solubility increases as the temperature rises. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals; in the eutectoid regions, the ferrite, the iron phosphide, and the metal(III) phosphide are in equilibrium. This modelling predicts which elements help avoid phosphide segregation and which do not. These data are important when making or choosing steels in which phosphide segregation must be prevented.
Keywords: phosphorous, steel, segregation, Thermo-Calc software
Procedia PDF Downloads 625
204 Effect of Dimensional Reinforcement Probability on Discrimination of Visual Compound Stimuli by Pigeons
Authors: O. V. Vyazovska
Abstract:
Behavioral efficiency is one of the main principles of success in nature. Accuracy of visual discrimination is determined by attention, learning experience, and memory. Under experimental conditions, pigeons' responses to visual stimuli presented on a monitor screen are behaviorally manifested by pecking or not pecking the stimulus, by the number of pecks, by reaction time, etc. The higher the probability of reward, the more likely pigeons are to respond to the stimulus. We trained 8 pigeons (Columba livia) on a stagewise go/no-go visual discrimination task. Sixteen visual stimuli were created from all possible combinations of four binary dimensions: brightness (dark/bright), size (large/small), line orientation (vertical/horizontal), and shape (circle/square). In the first stage, we presented S+ and 4 S- stimuli: the first differing from S+ in all 4 dimensional values, the second sharing brightness with S+, the third sharing brightness and orientation with S+, and the fourth sharing brightness, orientation, and size. Then all 16 stimuli were added. Pigeons correctly rejected 6-8 of the 11 newly added S- stimuli at the beginning of the second stage. The results revealed that pigeons' behavior at the beginning of the second stage was controlled by the probabilities of reward for the 4 dimensions learned in the first stage. The number of dimension-discrimination mistakes at the beginning of the second stage depended on the number of S- stimuli sharing the dimension with S+ in the first stage: a significant inverse correlation was found between the number of S- stimuli sharing dimension values with S+ in the first stage and the dimensional learning rate at the beginning of the second stage. Pigeons were more confident in discriminating the shape and size dimensions; the mistakes they made at the beginning of the second stage were not associated with these dimensions. These results help elucidate the principles of dimensional stimulus control during the learning of compound multidimensional visual stimuli.
Keywords: visual go/no-go discrimination, selective attention, dimensional stimulus control, pigeon
Procedia PDF Downloads 141
203 Global Positioning System Match Characteristics as a Predictor of Badminton Players’ Group Classification
Authors: Yahaya Abdullahi, Ben Coetzee, Linda Van Den Berg
Abstract:
The study aimed to establish the global positioning system (GPS)-determined singles match characteristics that predict successful and less-successful male singles badminton players' group classification. Twenty-two (22) male singles players (age: 23.39 ± 3.92 years; body stature: 177.11 ± 3.06 cm; body mass: 83.46 ± 14.59 kg) representing 10 African countries participated in the study. Players were categorised as successful or less successful according to the results of five championships of the 2014/2015 season. GPS units (MinimaxX V4.0), Polar heart rate transmitter belts, and digital video cameras were used to collect match data. GPS-related variables were corrected for match duration, and independent t-tests, a cluster analysis, and a binary forward stepwise logistic regression were calculated. A receiver operating characteristic (ROC) curve was used to determine the validity of the group classification model. High-intensity accelerations per second were identified as the only GPS-determined variable that differed significantly between groups. Furthermore, only high-intensity accelerations per second (p=0.03) and low-intensity efforts per second (p=0.04) were identified as significant predictors of group classification, with 76.88% of players classifiable back into their original groups using the GPS-based logistic regression formula. The ROC showed a value of 0.87. The identification of these GPS-related variables for the attainment of badminton performance emphasizes the importance of using badminton drills and conditioning techniques to improve not only players' physical fitness levels but also their ability to accelerate at high intensities.
Keywords: badminton, global positioning system, match analysis, inertial movement analysis, intensity, effort
Procedia PDF Downloads 191
202 An Exploratory Study on 'Sub-Region Life Circle' in Chinese Big Cities Based on Human High-Probability Daily Activity: Characteristic and Formation Mechanism as a Case of Wuhan
Authors: Zhuoran Shan, Li Wan, Xianchun Zhang
Abstract:
With the increasing trend toward regionalization and polycentricity in contemporary Chinese big cities, the 'sub-region life circle' is becoming an effective device for the rational organization of urban functions and spatial structure. Using questionnaires, network big data, route inversion on internet maps, GIS spatial analysis, and logistic regression, this article investigates the characteristics and formation mechanism of the 'sub-region life circle' based on human high-probability daily activity in Chinese big cities. First, it shows that the 'sub-region life circle' has become a new general spatial sphere of residents' high-probability daily activity and mobility in China. Unlike earlier analyses of the whole metropolitan area or the micro community, the 'sub-region life circle' has its own characteristics in geographical sphere, functional elements, spatial morphology, and land distribution. Second, the binary logistic regression analysis shows that seven factors, including land-use mix and bus station density, most strongly affect the formation of the 'sub-region life circle', and the critical value of each factor's index is then analyzed. Finally, to establish a smarter 'sub-region life circle', this paper indicates that several strategies, including jobs-housing fit, service cohesion, and space reconstruction, are key to optimizing its spatial organization. This study expands the understanding of cities' inner sub-region spatial structure based on human daily activity and contributes to the theory of the 'life circle' at the urban meso-scale.
Keywords: sub-region life circle, characteristic, formation mechanism, human activity, spatial structure
Procedia PDF Downloads 300
201 Duplex Real-Time Loop-Mediated Isothermal Amplification Assay for Simultaneous Detection of Beef and Pork
Authors: Mi-Ju Kim, Hae-Yeong Kim
Abstract:
Product mislabeling and adulteration have raised increasing concern about processed meat products. Relatively inexpensive pork is adulterated into meats such as beef for economic benefit, and such pork-related food fraud incidents raise economic, religious, and health concerns. In this study, a rapid on-site detection method using loop-mediated isothermal amplification (LAMP) was developed for the simultaneous identification of beef and pork. Species-specific LAMP primers for beef and pork were designed targeting the mitochondrial D-loop region. The LAMP reaction was performed at 65 ℃ for 40 min. The specificity of each primer set was evaluated using DNA extracted from 13 animal species, including cattle and pig. The sensitivity of the duplex LAMP assay was examined by serial dilution of beef and pork DNA and with reference binary mixtures. The assay was applied to processed meat products containing beef and pork for monitoring. Each set of primers amplified only the targeted species, with no cross-reactivity with the other animal species. The limit of detection of the duplex real-time LAMP was 1 pg of DNA for each of beef and pork, and 1% pork in a beef-meat mixture. Commercial meat products that declared the presence of beef and/or pork on the label showed positive results for those species. The method was successfully applied to the simultaneous detection of beef and pork in processed meat products. The optimized duplex LAMP assay identifies beef and pork simultaneously in less than 40 min, and the portable real-time fluorescence device used in this study is suitable for on-site detection of beef and pork in processed meat products. The developed assay is therefore considered an efficient tool for monitoring meat products.
Keywords: beef, duplex real-time LAMP, meat identification, pork
Procedia PDF Downloads 224
200 Blood Pressure Level, Targeted Blood Pressure Control Rate, and Factors Related to Blood Pressure Control in Post-Acute Ischemic Stroke Patients
Authors: Nannapus Saramad, Rewwadee Petsirasan, Jom Suwanno
Abstract:
Background: This retrospective study aimed to describe average blood pressure, blood pressure level, and target blood pressure control rate in the year following discharge among post-acute ischemic stroke patients from Sichon Hospital, Sichon District, Nakhon Si Thammarat Province. Secondary data analysis was employed using patients' health records together with patient or caregiver interviews. A total of 232 eligible post-acute ischemic stroke patients discharged in the year under study (2017-2018) were recruited. Methods: Univariate analyses (the chi-square test and Fisher's exact test) were applied to identify relationships for single variables; variables with a p-value < 0.2 in the univariate analyses were entered into a binary logistic regression. Results: Most of the patients in this study were men (61.6%), with an average age of 65.4 ± 14.8 years. Systolic blood pressure levels were in the grade 1-2 hypertension range and diastolic pressure was at optimal or normal levels at all times from initial treatment to the present. The results revealed that 25% of the group under age 60 achieved BP control, 36.3% of the group older than 60 years, and 27.9% of the diabetic group. The multivariate analysis revealed four significant variables in the final model: 1) receiving a calcium-channel blocker (p=.027); 2) adherence to antihypertensive medication (p=.024); 3) adherence to antiplatelet medication (p=.020); and 4) medication behavior (p=.010). Conclusion: Nurses and health care providers should promote patients' adherence behavior to improve blood pressure control.
Keywords: acute ischemic stroke, target blood pressure control, medication adherence, recurrence stroke
Procedia PDF Downloads 122
199 Reconstruction of the 'Bakla' as an Identity
Authors: Oscar H. Malaco Jr.
Abstract:
Homosexuality has been adopted as the universal concept that defines deviations from the heteronormative parameters of society. Sexual orientation and gender identities have been treated as concretely separate, in the same way that the dynamics between man and woman, male and female, gender and sex operate. These terms are all products of human beings' use of language. Language has proven its power to define and determine the status and the categories of subjects in society; this tool developed by human beings provides a definition of a specific cultural community and of individual selves that either claim or contest their space in the social hierarchy. The label 'bakla' is argued to be an identity that reacts to the spectral disposition of gender and sexuality in Philippine society. Exposing the Filipino constructs of the bakla is the major attempt of this study. Through the methods of Sikolohiyang Pilipino (Filipino Psychology), namely Pagtatanung-tanong (asking questions) and Pakikipagkuwentuhan (story-telling), the utterances of the bakla were gathered and analyzed rhetorically and ideologically. Furthermore, the Dramatistic Pentad of Kenneth Burke was adopted as a methodology and also utilized as a perspective of analysis. The results suggest that the bakla as an identity carries the hurdles of class. The performativity of the bakla is shown to be a cycle propelled by their guilt at being identified and recognized as subjects in a society where heteronormative power contests their gender and sexual expressions as relatively aberrational to binary gender and sexual roles. Labels, hence, are potent structures that control the disposition of the bakla in society, reflecting an aspect of the disposition of Filipino identities. After all, performing kabaklaan in Philippine society is an interplay between resistance and conformity to hegemonic dominions, a result of imperial attempts to universalize the concept of homosexuality across distant cultural communities.
Keywords: gender identity, sexual orientation, rhetoric, performativity
Procedia PDF Downloads 444
198 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms, yet early recognition is key to improving patients' living conditions. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, and BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer's disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a random forest with 20 acoustic features extracted using the librosa library implemented in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is 'hubert-large-ls960-ft'. Empirically, 5 epochs and a batch size of 1 are selected. Experiments show that the proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to capture acoustic cues of AD and MCI.
Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
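The classification head described above (pre-trained HuBERT, average pooling over the time axis to collapse the 3D representation to 2D, then a linear layer) can be sketched in PyTorch with the transformers library, as below. The wrapper class and its details are an assumed reconstruction for illustration, not the author's released code.

```python
import torch
from transformers import HubertModel

# Assumed reconstruction of the head described above: HuBERT encoder,
# average pooling over time (3D -> 2D), then a linear classification layer.
class HubertClassifier(torch.nn.Module):
    def __init__(self, n_classes: int = 2):   # e.g., AD vs. healthy control
        super().__init__()
        self.hubert = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
        self.head = torch.nn.Linear(self.hubert.config.hidden_size, n_classes)

    def forward(self, waveform):               # (batch, samples) at 16 kHz
        hidden = self.hubert(waveform).last_hidden_state   # (batch, T, 1024)
        pooled = hidden.mean(dim=1)            # average pooling over time
        return self.head(pooled)

model = HubertClassifier()
logits = model(torch.randn(1, 160_000))        # one 10-second segment
```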
Procedia PDF Downloads 127
197 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. Nonparametric k-nearest neighbors discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, discriminant analysis methods are of interest in investigating misclassification error rates for class-imbalanced data on three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification of class-imbalanced diabetes risk group data. Methods: Data on 599 staff from a healthy project in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were run over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} of {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced diabetes risk group data.
Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors
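A minimal scikit-learn sketch of the comparison follows: parametric discriminant rules with the unequal priors reported best above versus the nonparametric 3-NN rule, scored by cross-validated misclassification rate. The synthetic 90/5/5 data stand in for the hospital records.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 90/5/5 three-class data standing in for the hospital records.
X, y = make_classification(n_samples=599, n_classes=3, n_features=4,
                           n_informative=4, n_redundant=0,
                           weights=[0.90, 0.05, 0.05], random_state=0)

models = {
    "LDA":  LinearDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "QDA":  QuadraticDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}
for name, clf in models.items():
    error = 1 - cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: misclassification rate = {error:.3f}")
```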
Procedia PDF Downloads 434
196 Multiplying Vulnerability of Child Health Outcome and Food Diversity in India
Authors: Mukesh Ravi Raushan
Abstract:
Although obesity is considered a deadly public health issue contributing 2.6 million deaths worldwide every year, a developing country like India still faces malnutrition, which is more common there than in Sub-Saharan Africa: about one in every three malnourished children in the world lives in India. This paper assesses nutritional health among children using data on 43,737 infants and young children aged 0-59 months (µ = 29.54; SD = 17.21) from the households selected by the National Family Health Survey, 2005-06. Wasting was measured by the standardized weight-for-height Z-score according to the WHO child growth standards. The interaction of education with place of residence was found to be significantly associated with the complementary food diversity score (CFDS) in India: the mother's education was positively associated with the CFDS, but the strength of the effect was lower in rural India than in urban India. The results of a binary logistic regression of wasting on the WHO's seven recommended food types for children in India suggest that children who consumed milk products (OR: 0.87, p<0.0001) were less likely to be malnourished than those who did not, and likewise children who consumed seed products (OR: 0.75, p<0.0001) were less likely to be malnourished than the reference categories. Nutritional status among children was also negatively associated with protein-containing complementary food: children who received pulses in the last 24 hours were less likely to be wasted (OR: 0.87, p<0.00001) than the reference categories. When the frequency of feeding the index child increases by 10 per cent, the expected wasting decreases by 2 per cent in India, controlling for place of residence, education, religion, and birth order. The index improves as the risk of malnutrition among children in India decreases.
Keywords: CFDS, food diversity index, India, logistic regression
Procedia PDF Downloads 261
195 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for supervised-learning-based facial expression recognition methods, because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral most of the time in typical applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, would save computational power. In this work, we propose a light-weight neutral-vs-emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on that KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, constrained local model, procrustes analysis, local binary pattern histogram, statistical model
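A textural statistical model of this kind is often built from Local Binary Pattern histograms, one of the paper's keywords. The sketch below computes a uniform-LBP histogram for a patch around a KE point with scikit-image and compares a reference (neutral) histogram against an incoming frame via a chi-square distance; the patch contents and the exact distance measure used are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    """Uniform-LBP histogram of a grayscale patch around a KE point."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Stand-in patches: a neutral reference vs. an incoming frame. A large
# chi-square distance at several KE points suggests a non-neutral face.
rng = np.random.default_rng(6)
reference = lbp_histogram(rng.integers(0, 256, (32, 32), dtype=np.uint8))
incoming = lbp_histogram(rng.integers(0, 256, (32, 32), dtype=np.uint8))
chi_square = 0.5 * np.sum((reference - incoming) ** 2
                          / (reference + incoming + 1e-9))
print(chi_square)
```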
Procedia PDF Downloads 338
194 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently to describe females than males in history and literature. This is extremely problematic, both as training data and as an outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take the semantics and syntax of text into account to be more mindful and to reflect gender equality. Further, this paper deals with non-binary gender pronouns and how machines can process these pronouns correctly given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing: languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: gendered grammar, misogynistic language, natural language processing, neural networks
Procedia PDF Downloads 120
193 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit
Authors: Khalil Ahmad Kakar
Abstract:
In developing countries, the growth of automobile use and of low-capacity public transport (para-transit), which creates congestion, pollution, noise, and traffic accidents, is a critical problem. These issues are under analysis by assessors seeking to untangle the puzzle and propose sustainable urban public transport systems. Kabul City is one of those urban areas whose inhabitants suffer from the lack of a tolerable and user-friendly public transport system. The city is the most populous in the country and overcrowded, with around 4.5 million inhabitants. Para-transit is the only dominant public transit system, with a very poor level of service and low-capacity vehicles (6-20 passengers). Therefore, after detailed investigation, this study proposes a bus rapid transit (BRT) system for Kabul City, aimed at reducing the role of informal transport and decreasing congestion. The research covers three parts. In the first part, aggregate travel demand modelling (four-step) is applied to determine the number of para-transit users and to lay out the BRT network along the corridors with the highest public transport demand. In the second part, a stated preference (SP) survey and a binary logit model are used to estimate the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the predicted BRT system on para-transit is evaluated. The outcome, based on high travel demand, suggests a 10 km network for the proposed BRT system, originating in District 10 and ending at Kabul International Airport. Moreover, the result of the disaggregate travel mode-choice model, based on the SP data and the logit model, indicates that the predicted mass rapid transit system has higher utility, with a significant impact on reducing para-transit use.
Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model
Procedia PDF Downloads 183
192 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly deep convolutional neural networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of remote sensing and for solving various problems in different fields and applications. Unmanned aerial systems have become the preferred means of acquiring aerial photos in most organizations because of their high resolution and accuracy, which make the identification and detection of very small features much easier than with satellite images; this has opened a new era of deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and protect the company from leak losses and, most importantly, environmental damage. Two different approaches with different deep learning methods are demonstrated. The first approach detects oil leaks from the raw (unprocessed) aerial photos using a deep learning model called the Single Shot Detector (SSD); the model draws bounding boxes around the leaks, and the results were extremely good. The second approach detects oil leaks from ortho-mosaiced (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and a PSP-Net classifier). Post-processing is then performed to combine the results of these three models to achieve better detection and improved accuracy. Although a relatively small amount of data was available for training, the trained models have shown good results in extracting the extent of the oil leaks and achieving excellent and accurate detection.
Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
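The post-processing step that combines the three segmentation outputs is not specified in detail; one simple scheme consistent with the description is a per-pixel majority vote over the binary leak masks, sketched below with NumPy under that assumption.

```python
import numpy as np

# One plausible fusion rule (an assumption, not the paper's stated method):
# a per-pixel majority vote over the three binary leak masks.
def fuse_masks(mask_rcnn, unet, pspnet):
    stacked = np.stack([mask_rcnn, unet, pspnet]).astype(np.uint8)
    return (stacked.sum(axis=0) >= 2).astype(np.uint8)   # 2-of-3 agreement

rng = np.random.default_rng(5)
masks = [rng.integers(0, 2, (256, 256)) for _ in range(3)]   # stand-in masks
fused = fuse_masks(*masks)
print(fused.mean())   # fraction of pixels flagged as leak after fusion
```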
Procedia PDF Downloads 32
191 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability: first, the correlation between each parameter and hatchability was checked; then a multiple regression model was developed and the accuracy of the fitted model evaluated. Linear discriminant analysis (LDA), classification and regression trees (CART), k-nearest neighbors (kNN), support vector machines (SVM) with a linear kernel, and random forest (RF) algorithms were applied to classify hatchability; this grouping was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, and positively correlated with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN performed with the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
Procedia PDF Downloads 87
190 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process typically involves word sequences known as N-grams, and documents belonging to the same category are expected to share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs local sequence alignment to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative approach to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the bag-of-words approach, with term frequency-inverse document frequency (TF-IDF) as the score representing the occurrence of tokens in each document. To extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a support vector machine (SVM) classifier was tuned using 20% of the dataset; the remaining 80%, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
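For reference, the Smith-Waterman score between two token sequences can be computed with the standard dynamic-programming recurrence, as in the pure-Python sketch below; the scoring parameters and the toy sentences are illustrative, not the study's configuration.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between two token sequences; higher
    scores mean longer/closer shared local runs (Smith-Waterman)."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0.0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

# Toy sentences: shared local word runs produce a high local-alignment score.
doc = "patient has morbid obesity and diabetes".split()
ref = "history of morbid obesity noted".split()
print(smith_waterman(doc, ref))
```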
Procedia PDF Downloads 297189 Stabilization of γ-Sterilized Food-Packaging Materials by Synergistic Mixtures of Food-Contact-Approved Stabilizers
Authors: Sameh A. S. Thabit Alariqi
Abstract:
Food is widely packaged in plastic materials to prevent microbial contamination and spoilage, and ionizing radiation is widely used to sterilize food-packaging materials. Sterilization by γ-radiation degrades plastic packaging materials, causing embrittlement, stiffening, softening, discoloration, odour generation, and a decrease in molecular weight. Many antioxidants can prevent γ-degradation, but most of them are toxic, and the migration of antioxidants into the surrounding environment raises major concerns for food-packaging plastics. In this work, we aimed to utilize synergistic mixtures of stabilizers that are approved for food-contact applications. Ethylene-propylene-diene terpolymer (EPDM) was melt-mixed with hindered amine stabilizers (HAS), phenolic antioxidants, and organo-phosphites (hydroperoxide decomposers). Results are discussed by comparing the stabilizing efficiency of mixtures with and without the phenol system. Among the phenol-containing systems, where discoloration due to oxidation of the hindered phenol was mostly observed, the combination of secondary HAS, tertiary HAS, organo-phosphite, and hindered phenol exhibited better stabilization efficiency than single or binary additive systems. The mixture of secondary HAS and tertiary HAS showed an antagonistic stabilization effect. However, combinations of organo-phosphite with secondary HAS, tertiary HAS, and phenolic antioxidants were found to be synergistic even at higher doses of γ-sterilization. These effects have been explained through the interactions between the stabilizers. After γ-irradiation, the consumption of the oligomeric stabilizer depends significantly on the components of the stabilization mixture. The effect of the organo-phosphite antioxidant on the overall stability is also discussed.Keywords: ethylene-propylene-diene terpolymer, synergistic mixtures, gamma sterilization, gamma stabilization
Procedia PDF Downloads 440188 Prognostic Impact of Pre-transplant Ferritinemia: A Survival Analysis Among Allograft Patients
Authors: Mekni Sabrine, Nouira Mariem
Abstract:
Background and aim: Allogeneic hematopoietic stem cell transplantation is a curative treatment for several hematological diseases; however, it carries non-negligible morbidity and mortality depending on several prognostic factors, including pre-transplant hyperferritinemia. The aim of our study was to estimate the impact of hyperferritinemia on survival and on the occurrence of post-transplant complications. Methods: This was a longitudinal study conducted over 8 years, including all patients who received a first allograft. The impact of pre-transplant hyperferritinemia (ferritinemia ≥1500) on survival was studied using the Kaplan-Meier method and the Cox model for uni- and multivariate analysis. The chi-square test and binary logistic regression were used to study the association between pre-transplant ferritinemia and post-transplant complications. Results: One hundred forty patients were included, with an average age of 26.6 years and a sex ratio (M/F) of 1.4. Hyperferritinemia was found in 33% of patients. It had no significant impact on either overall survival (p=0.9) or event-free survival (p=0.6). In multivariate analysis, only the type of disease was independently associated with overall survival (p=0.04) and event-free survival (p=0.002). For post-allograft complications, the occurrence of early documented infections was independently associated with pre-transplant hyperferritinemia (p=0.02) and the presence of acute graft-versus-host disease (GVHD) (p<10⁻³). The occurrence of acute GVHD was associated with early documented infection (p=0.002) and cytomegalovirus reactivation (p<10⁻³). The occurrence of chronic GVHD was associated with cytomegalovirus reactivation (p=0.006) and graft source (p=0.009). Conclusion: Our study showed a significant impact of pre-transplant hyperferritinemia on the occurrence of early infections but not on survival. Earlier and more accurate assessment of iron overload by other tests, such as liver magnetic resonance imaging, with initiation of chelating treatment, could prevent the occurrence of such complications after transplantation.Keywords: allogeneic, transplants, ferritin, survival
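The survival analysis described above can be sketched with the `lifelines` library: Kaplan-Meier curves stratified by the ferritin threshold, a log-rank comparison, and a Cox proportional-hazards model for multivariate analysis. The DataFrame, column names, and randomly generated data below are hypothetical placeholders, not the study's data.

```python
# Sketch of the survival analysis workflow (placeholder data, not study data).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "months":        rng.exponential(36, 140),   # follow-up time
    "death":         rng.integers(0, 2, 140),    # event indicator
    "hyperferritin": rng.integers(0, 2, 140),    # 1 = ferritin >= 1500
    "disease_type":  rng.integers(0, 3, 140),    # coded disease category
})

high, low = df[df.hyperferritin == 1], df[df.hyperferritin == 0]
kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["death"], label="ferritin >= 1500")

# Log-rank comparison of overall survival between the two ferritin groups
res = logrank_test(high["months"], low["months"],
                   event_observed_A=high["death"], event_observed_B=low["death"])
print(f"log-rank p = {res.p_value:.3f}")

# Multivariate Cox model with all remaining columns as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```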
Procedia PDF Downloads 66187 Improved Diver Tracking and Classification in Sonar Images Using a Robust Diver Wake Detection Algorithm
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Harbor protection systems are of great importance, and the need for automatic protection systems has increased in recent years. Diver detection active sonar is of particular significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented based on a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they pass through a filtering stage to eliminate noisy and unstable tracks, as well as objects whose speed falls outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage to decide the type of object being tracked; here, the classification stage decides whether the tracked object is an open-circuit or closed-circuit diver. At this stage, a small area around the object is extracted, a novel wake detection method is applied, and the morphological features of the object together with its wake are extracted. We used a support vector machine to find the best classifier. The training and test sonar images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers. The total classification accuracy achieved for the diver type is 97%.Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification
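The tracking step described above is a standard linear Kalman filter over segment centroids. The following is a minimal constant-velocity sketch in NumPy; the noise settings, ping interval, and diver speed gate are illustrative assumptions, not the tuned values of the ARAS-2023 system.

```python
# Constant-velocity Kalman filter for one sonar track (illustrative values).
import numpy as np

dt = 1.0                                  # assumed time between pings (s)
F = np.array([[1, 0, dt, 0],              # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],               # only the centroid position is measured
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                      # process noise (tuning parameter)
R = 0.5 * np.eye(2)                       # measurement noise (tuning parameter)

x, P = np.zeros(4), np.eye(4)             # initial state and covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measured centroid z = [x, y]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.5]), np.array([1.4, 0.9]), np.array([1.9, 1.3])]:
    x, P = kalman_step(x, P, z)
    speed = np.hypot(x[2], x[3])
    if not (0.1 <= speed <= 2.5):         # gate: reject speeds outside diver range
        print("track rejected: speed outside diver range")
print("final state:", x)
```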
Procedia PDF Downloads 238