Search results for: curve feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2526

2076 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight

Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos

Abstract:

Aim: To investigate the relation and impact of Body Mass Index (BMI), Waist Circumference (WC) and Body Fat Weight (BFW) on insulin resistance (MATSUDA INDEX < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with mean age 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs. The overall performance of the ROC analysis was quantified by computing the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with sensitivity equal to 75% and specificity equal to 73.1%. AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for the discrimination of participants with insulin resistance from those without were equal to 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to that of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with WC greater than 97 cm had 8.1 times greater odds of having insulin resistance (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors significantly related to insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
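
The optimal cut-off search described above is commonly done by maximizing Youden's J statistic (sensitivity + specificity − 1) over candidate thresholds on the ROC curve. A minimal sketch with hypothetical data (the arrays below are invented for illustration, not the study's measurements):

```python
import numpy as np

def youden_cutoff(y_true, scores):
    """Return (best_threshold, sensitivity, specificity) maximizing Youden's J."""
    best = (None, -1.0, 0.0, 0.0)
    for t in np.unique(scores):
        pred = scores >= t                       # predict "insulin resistant" at/above cut-off
        sens = np.sum(pred & (y_true == 1)) / np.sum(y_true == 1)
        spec = np.sum(~pred & (y_true == 0)) / np.sum(y_true == 0)
        j = sens + spec - 1
        if j > best[1]:
            best = (t, j, sens, spec)
    return best[0], best[2], best[3]

# hypothetical waist-circumference values (cm) and insulin-resistance labels
wc = np.array([78, 82, 90, 95, 99, 104, 110, 88, 101, 107])
ir = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 1])
cutoff, sens, spec = youden_cutoff(ir, wc)
```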

Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference

Procedia PDF Downloads 292
2075 Sensitivity Enhancement in Graphene Based Surface Plasmon Resonance (SPR) Biosensor

Authors: Angad S. Kushwaha, Rajeev Kumar, Monika Srivastava, S. K. Srivastava

Abstract:

A lot of research work is going on in the field of graphene-based SPR biosensors. In the conventional SPR-based biosensor, graphene is used as a biomolecular recognition element: graphene adsorbs biomolecules through its sp2-hybridized carbon ring structure. The proposed SPR-based biosensor configuration will open a new avenue for efficient biosensing by taking advantage of graphene and its fascinating nanofabrication properties. In the present study, we have studied an SPR biosensor based on graphene mediated by zinc oxide (ZnO) and gold. In the proposed structure, the prism (BK7) base is coated with zinc oxide, followed by gold and graphene. Using the waveguide approach with the transfer matrix method, the proposed structure has been investigated theoretically. We have analyzed the reflectance versus incidence angle curve using a He-Ne laser of wavelength 632.8 nm. The angle at which the reflectance is minimized is termed the SPR angle, and the shift in SPR angle is the basis of biosensing. From the analysis of the reflectivity curve, we found that the SPR angle shifts as biomolecules attach to the graphene surface. The graphene layer also enhances the sensitivity of the SPR sensor compared to the conventional sensor, and the sensitivity increases further with the number of graphene layers. In our proposed biosensor, we thus obtain the minimum possible reflectivity with an optimal level of sensitivity.
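
For readers unfamiliar with the transfer matrix method used here, the reflectance of a p-polarized multilayer stack can be sketched as below. The layer values (BK7 prism, a 50 nm gold film with permittivity ≈ −11.6+1.2j at 632.8 nm, a water-like analyte) are illustrative assumptions, not the exact parameters of the proposed ZnO/gold/graphene structure:

```python
import numpy as np

def reflectance_p(eps, d, theta, lam):
    """Reflectance of a p-polarized plane wave through a layer stack.
    eps: permittivities (first = prism, last = analyte, both semi-infinite)
    d:   thicknesses of the inner layers (m); theta: incidence angle (rad)."""
    eps = np.asarray(eps, dtype=complex)
    kx2 = eps[0].real * np.sin(theta) ** 2           # n1^2 sin^2(theta)
    q = np.sqrt(eps - kx2) / eps                     # p-polarization admittances
    M = np.eye(2, dtype=complex)
    for k in range(1, len(eps) - 1):
        beta = 2 * np.pi * d[k - 1] / lam * np.sqrt(eps[k] - kx2)
        Mk = np.array([[np.cos(beta), -1j * np.sin(beta) / q[k]],
                       [-1j * q[k] * np.sin(beta), np.cos(beta)]])
        M = M @ Mk
    num = (M[0, 0] + M[0, 1] * q[-1]) * q[0] - (M[1, 0] + M[1, 1] * q[-1])
    den = (M[0, 0] + M[0, 1] * q[-1]) * q[0] + (M[1, 0] + M[1, 1] * q[-1])
    return abs(num / den) ** 2

lam = 632.8e-9
eps = [1.515 ** 2, -11.6 + 1.2j, 1.33 ** 2]          # BK7 / 50 nm gold / analyte
angles = np.radians(np.linspace(55, 85, 601))
R = np.array([reflectance_p(eps, [50e-9], t, lam) for t in angles])
spr_angle = np.degrees(angles[np.argmin(R)])          # the dip marks the SPR angle
```

Attaching a biomolecular layer on top shifts the dip; scanning the minimum as a function of the analyte layer reproduces the sensing principle described in the abstract.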

Keywords: biosensor, sensitivity, surface plasmon resonance, transfer matrix method

Procedia PDF Downloads 400
2074 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection for a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection; they are trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed by a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution improves on the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
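
The coarse boundary-estimation step can be illustrated with a classic OCC. The sketch below uses a one-class SVM on synthetic stand-ins for the four tracked fundamental frequencies (invented values, not the Z-24 measurements):

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# synthetic stand-ins for four tracked fundamental frequencies (Hz)
normal = rng.normal(loc=[3.9, 5.0, 9.8, 10.3], scale=0.05, size=(400, 4))
damaged = rng.normal(loc=[3.6, 4.7, 9.4, 9.9], scale=0.05, size=(100, 4))

scaler = StandardScaler().fit(normal[:300])
occ = OneClassSVM(nu=0.05, gamma="scale")
occ.fit(scaler.transform(normal[:300]))           # train on standard condition only

X_test = np.vstack([normal[300:], damaged])
y_test = np.hstack([np.zeros(100), np.ones(100)])  # 1 = anomaly
pred = (occ.predict(scaler.transform(X_test)) == -1).astype(int)  # -1 marks outliers
f1 = f1_score(y_test, pred)
```

In the paper's OCCNN2 this coarse boundary would then be refined by a feedforward NN; the sketch stops at the coarse step.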

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 105
2073 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments demonstrating that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models exceeds 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
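
Under symmetric label noise, the finding that a trained model can beat its own labels' accuracy is easy to reproduce. A sketch on synthetic, well-separated data (not the clinical corpus), with 20% of training labels flipped:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n = 2000
# two well-separated Gaussian clusters in 2-D
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])

flip = rng.random(n) < 0.20                      # flip 20% of the training labels
y_noisy = np.where(flip, 1 - y, y)
label_accuracy = np.mean(y_noisy == y)           # roughly 0.80 by construction

clf = LinearSVC().fit(X, y_noisy)                # train on the imperfect labels
X_test = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_test = np.hstack([np.zeros(500), np.ones(500)])
test_accuracy = clf.score(X_test, y_test)        # evaluated against clean labels
```

Because the noise is symmetric and the classes are well separated, the learned boundary stays near the true one, so test accuracy exceeds the 80% accuracy of the labels themselves.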

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 174
2072 Russian Spatial Impersonal Sentence Models in Translation Perspective

Authors: Marina Fomina

Abstract:

The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of predicative feature. It is the multifaceted nature of the category of subject that 1) triggers a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions / sentence parts vs. parts of speech issues, etc.); 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), communicative approaches, etc. Many linguists consider the prototypical approach to the category of subject to be the most instrumental as it reveals the integrity of denotative and linguistic components of the conceptual category. This approach relates to subject as a source of non-passive predicative feature, an element of subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category with some members more central than others. Translation-wise, the most “treacherous” subject types are the peripheral ones. The paper 1) features a peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) makes a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, 4) reveals non-prototypical features of subjects in the English equivalents.

Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject

Procedia PDF Downloads 353
2071 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children

Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh

Abstract:

Autism Spectrum Disorder (ASD) is a condition related to brain development that affects how a person perceives and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors make accurate diagnoses and manage the condition. It is therefore crucial to develop a method that achieves good results with high accuracy for the detection of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB) and Support Vector Machine (SVM). Feature selection was then used to reduce the number of attributes from the ASD datasets while preserving model performance. As a result, we found that the best results were provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
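
A feature selection plus SVM pipeline of the kind described can be sketched as follows; the dataset here is synthetic (scikit-learn's make_classification), standing in for the ASD screening attributes:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for a screening dataset: 20 attributes, only 5 informative
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)

# keep only the k best attributes by ANOVA F-score, then classify with an SVM
model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=8), SVC())
acc = cross_val_score(model, X, y, cv=5).mean()
```

Fitting the selector inside the pipeline keeps the selection step inside each cross-validation fold, avoiding leakage while reducing 20 attributes to 8.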

Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine

Procedia PDF Downloads 126
2070 Alternator Fault Detection Using Wigner-Ville Distribution

Authors: Amin Ranjbar, Amir Arsalan Jalili Zolfaghari, Amir Abolfazl Suratgar, Mehrdad Khajavi

Abstract:

This paper describes a two-stage, learning-based fault detection procedure for alternators. The procedure distinguishes three machine conditions: shortened brush, high-impedance relay, and healthy operation. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an ANN (Artificial Neural Network) and an SVM (Support Vector Machine) were compared to determine the more suitable classifier, evaluated by the mean-squared-error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by inducing different conditions on a laboratory setup. The results on this database indicate that the method achieves satisfactory performance.
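
The Wigner-Ville distribution used as the feature extractor can be computed from the signal's instantaneous autocorrelation. A compact numerical sketch (for an analytic test tone, not the alternator signals):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x (rows: time, cols: frequency)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)               # lags limited by the signal edges
        taus = np.arange(-tau_max, tau_max + 1)
        r = x[n + taus] * np.conj(x[n - taus])    # instantaneous autocorrelation
        buf = np.zeros(N, dtype=complex)
        buf[taus % N] = r                          # wrap negative lags for the FFT
        W[n] = np.fft.fft(buf).real                # FFT over the lag axis
    return W

# a pure tone at normalized frequency f0 concentrates at frequency bin 2*f0*N
N = 64
x = np.exp(2j * np.pi * 0.125 * np.arange(N))
W = wigner_ville(x)
```

The doubled-frequency convention (bin 2·f0·N) follows from the lag product x[n+τ]·x*[n−τ]; for real recorded signals the analytic signal would first be formed, e.g. via a Hilbert transform.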

Keywords: alternator, artificial neural network, support vector machine, time-frequency analysis, Wigner-Ville distribution

Procedia PDF Downloads 350
2069 Reducing the Imbalance Penalty Through Artificial Intelligence Methods Geothermal Production Forecasting: A Case Study for Turkey

Authors: Hayriye Anıl, Görkem Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries that show promise in geothermal energy production, with high installed power, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand owing to the inadequacy of the production forecasts given in the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than those on the basic dataset. It is concluded that the imbalance penalty calculated from these estimates for the selected organization is lower than the actual imbalance penalty, which is both optimal and profitable.
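
The lag-based feature engineering plus Support Vector Regression pipeline can be sketched as follows; the series below is a synthetic daily cycle, not the Zorlu generation data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# synthetic stand-in for an hourly generation profile (MW): daily cycle plus noise
rng = np.random.default_rng(1)
t = np.arange(24 * 60)
series = 80 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)

# feature engineering: the previous 3 hours become the predictors for the next hour
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]

split = len(y) - 48                                # hold out the last two days
model = make_pipeline(StandardScaler(), SVR(C=10.0))
model.fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
```

With the forecast in hand, the imbalance penalty would be computed from the gap between forecast and realized generation under the market's pricing rule, which is outside this sketch.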

Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting

Procedia PDF Downloads 87
2068 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms (GAs) must adapt themselves at design time to cope with the specific requirements of the search problem and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing genetic algorithms using the software product line (SPL) paradigm is very valuable because GAs constitute a product family sharing a common code base. In the present article, we propose to extend the use of the feature model of the genetic algorithm family to model the potential states of the GA in what is called a Dynamic Software Product Line. The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA is able to perform dynamic reconfiguration autonomously to accelerate the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of GAs.

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 262
2067 Pharmacokinetic Study of Clarithromycin in Human Female of Pakistani Population

Authors: Atifa Mushtaq, Tanweer Khaliq, Hafiz Alam Sher, Asia Farid, Anila Kanwal, Maliha Sarfraz

Abstract:

The study was designed to assess various pharmacokinetic parameters of a commercially available clarithromycin tablet (Klaricid® 250 mg, Abbott, Pakistan) in plasma samples of healthy adult female volunteers using a rapid, sensitive and accurate HPLC-UV analytical method. The human plasma samples were evaluated on an isocratic high-performance liquid chromatography (HPLC) system (Sykam) equipped with a pump, a C18 column (250×4.6 mm, 5 µm) and a UV detector. The mobile phase, comprising potassium dihydrogen phosphate (50 mM, pH 6.8, containing 0.7% triethylamine), methanol and acetonitrile (30:25:45, v/v/v), was delivered at a flow rate of 1 mL/min with an injection volume of 20 µL. Detection was performed at λmax 275 nm. With this method, the key pharmacokinetic parameters Cmax, Tmax, area under the curve (AUC), half-life (t1/2), volume of distribution (Vd) and clearance (Cl) were measured. The pharmacokinetic parameters of clarithromycin were calculated using the APO pharmacological analysis software. The maximum plasma concentration Cmax was 2.78 ± 0.33 µg/mL, the time to reach maximum concentration tmax was 2.82 ± 0.11 h, and the area under the curve (AUC) was 20.14 h.µg/mL. The mean ± SD values obtained for the pharmacokinetic parameters differed significantly from those reported in previous literature, which emphasizes the need for dose adjustment of clarithromycin in the Pakistani population.
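
Cmax and Tmax are read directly from the concentration-time profile, and AUC is typically computed with the linear trapezoidal rule. A sketch on a hypothetical profile (invented numbers, not the study's data):

```python
import numpy as np

def pk_summary(t, c):
    """Cmax, Tmax and trapezoidal AUC(0-t) from a concentration-time profile."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    i = int(np.argmax(c))
    auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))   # linear trapezoidal rule
    return c[i], t[i], auc

# hypothetical plasma profile: time (h) vs concentration (µg/mL)
t = [0, 1, 2, 3, 4, 6, 8, 12]
c = [0.0, 1.2, 2.5, 2.8, 2.2, 1.4, 0.8, 0.3]
cmax, tmax, auc = pk_summary(t, c)
```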

Keywords: pharmacokinetics, clarithromycin, HPLC, Pakistan

Procedia PDF Downloads 88
2066 Improvement of Elliptic Curve Cryptography over a Ring

Authors: Abdelhakim Chillali, Abdelhamid Tadmori, Muhammed Ziane

Abstract:

In this article, we study the elliptic curve defined over the ring An and define the mathematical operations of ECC on it, which provide high security and an advantage for wireless applications compared to other asymmetric-key cryptosystems.
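
As background, the ECC group law over a prime field (which the ring An construction generalizes) can be sketched as follows, using a standard toy curve:

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over F_p (None is the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                         # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

# toy curve y^2 = x^3 + 2x + 2 over F_17, base point P = (5, 1)
P = (5, 1)
double_P = ec_add(P, P, 2, 17)
```

Here pow(x, -1, p) (Python 3.8+) computes the modular inverse; over a ring such as An, inverses do not always exist, which is part of what the article's construction has to handle.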

Keywords: elliptic curves, finite ring, cryptography, study

Procedia PDF Downloads 354
2065 Tillage and Manure Effects on Water Retention and Van Genuchten Parameters in Western Iran

Authors: Azadeh Safadoust, Ali Akbar Mahboubi, Mohammad Reza Mosaddeghi, Bahram Gharabaghi

Abstract:

A study was conducted to evaluate the hydraulic properties of a sandy loam soil and corn (Zea mays L.) crop production in a short-term field experiment with tillage and manure combinations carried out in western Iran. Treatments included composted cattle manure application rates [0, 30, and 60 Mg (dry weight) ha⁻¹] and tillage systems [no-tillage (NT), chisel plowing (CP), and moldboard plowing (MP)] arranged in a split-plot design. The soil water characteristic curve (SWCC) and saturated hydraulic conductivity (Ks) were significantly affected by the manure and tillage treatments. At any matric suction, soil water content was in the order MP>CP>NT. At all matric suctions, the amount of water retained by the soil increased with the manure application rate (i.e., 60>30>0 Mg ha⁻¹). Similar to the tillage effects, the differences in water retained due to manure addition were smaller at high suctions than at low suctions. The changes in the SWCC under different tillage methods and manure applications may be attributed to changes in the pore-size and aggregate-size distributions. Soil Ks was in the order CP>MP>NT for the first two layers and in the order MP>CP and NT for the deeper soil layer. Ks also increased with increasing manure application rate (i.e., 60>30>0 Mg ha⁻¹), owing to the increase in total pore size and continuity.
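
SWCCs such as those measured here are commonly parameterized with the van Genuchten model, θ(h) = θr + (θs − θr)[1 + (αh)^n]^(−m) with m = 1 − 1/n. A sketch with textbook sandy-loam parameters (illustrative, not the fitted values from this experiment):

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at matric suction h (van Genuchten, m = 1 - 1/n)."""
    h = np.asarray(h, dtype=float)
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# typical sandy-loam parameters: theta in cm3/cm3, h in cm, alpha in 1/cm
h = np.array([0.0, 10.0, 100.0, 1000.0, 15000.0])
theta = van_genuchten(h, theta_r=0.065, theta_s=0.41, alpha=0.075, n=1.89)
```

Fitting these four parameters per treatment would quantify the MP>CP>NT and 60>30>0 Mg ha⁻¹ orderings reported above as shifts in θs, α and n.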

Keywords: corn, manure, saturated hydraulic conductivity, soil water characteristic curve, tillage

Procedia PDF Downloads 54
2064 A Framework for Auditing Multilevel Models Using Explainability Methods

Authors: Debarati Bhaumik, Diptish Dey

Abstract:

Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions & statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems using multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.

Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics

Procedia PDF Downloads 70
2063 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins

Authors: Navab Karimi, Tohid Alizadeh

Abstract:

An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the co-occurrence matrix, and local binary patterns (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). As a result, the GLCM feature set was found to have the highest accuracy (92.4%) among the sets. Subsequently, multiple feature combinations from the previous stage were fed into a second regression (linear regression) to increase accuracy, wherein a combination of 16 features was found to be the optimum. Finally, a Support Vector Machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively.
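
A grey-level co-occurrence matrix and two of its classic features (contrast and energy) can be computed directly; a minimal sketch on a tiny quantized image (not the raisin images):

```python
import numpy as np

def glcm_features(img, levels):
    """Normalized horizontal-offset GLCM with contrast and energy features."""
    M = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[i, j] += 1                               # count pixel pairs (left, right)
    P = M / M.sum()
    idx_i, idx_j = np.indices(P.shape)
    contrast = np.sum(P * (idx_i - idx_j) ** 2)    # local intensity variation
    energy = np.sum(P ** 2)                        # uniformity of the texture
    return P, contrast, energy

# tiny 4-level test image standing in for a quantized raisin patch
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P, contrast, energy = glcm_features(img, levels=4)
```

Real pipelines would accumulate GLCMs over several offsets and angles and feed the resulting feature vector to the selection stage described above.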

Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan

Procedia PDF Downloads 55
2062 The Customization of 3D Last Form Design Based on Weighted Blending

Authors: Shih-Wen Hsiao, Chu-Hsuan Lee, Rong-Qi Chen

Abstract:

The last is regarded as the critical foundation of shoe design and development: not only does the last relate to the comfort of shoe wearing, but it also aids the production of shoe styling and manufacturing. In order to enhance the efficiency and application of last development, a computer-aided methodology for customized last form design is proposed in this study. Reverse engineering is mainly applied to the scanning of the last form. Minimum-energy methods are then used to revise surface continuity, and the surface of the last is reconstructed from the feature curves of the scanned last. Once the surface of a last is reconstructed, building on the proposed last form reconstruction module, the weighted arithmetic mean method is applied to calculate the shape morphing, which differs from grading the control mesh of the last, and a subdivision algorithm is used to create the last mesh surface; thus feet-fitting 3D last forms of different sizes are generated from the original form with its feature functions retained. Finally, the practicability of the proposed methodology is verified through case studies.

Keywords: 3D last design, customization, reverse engineering, weighted morphing, shape blending

Procedia PDF Downloads 321
2061 Analyzing Apposition and the Typology of Specific Reference in Newspaper Discourse in Nigeria

Authors: Monday Agbonica Bello Eje

Abstract:

The language of the print media is characterized by the use of apposition. This linguistic element functions strategically in journalistic discourse, where it is communicatively necessary to name individuals and provide information about them. Linguistic studies of the language of the print media with a bias for apposition have largely dwelt on other areas, neglecting the typology of appositive reference in newspaper discourse. Yet such an examination can reveal the ways writers communicate and provide the information readers need to follow and understand the message. The study therefore analyses the patterns of appositional occurrence and the typology of reference in newspaper articles. The data were obtained from The Punch and Daily Trust newspapers: a total of six editions, collected randomly over three months. News and feature articles were used in the analysis. Guided by the referential theory of meaning in discourse, the appositions identified were subjected to analysis. The findings show that the semantic relations of coreference and speaker coreference have the highest frequency of occurrence in the data. This is because the subject matter of news reports and feature articles focuses on humans and the events around them; as a result, readers need to be provided with some detail and background information in order to identify referents and follow the discourse. The non-referential relations of absolute synonymy and speaker synonymy have fewer occurrences, which is tied to a major feature of the language of the media: simplicity. The paper concludes that apposition is mainly used to provide the reader with detail. In this way, the writer transmits information that helps not only to give detailed yet concise descriptions but also to help the reader follow the discourse.

Keywords: apposition, discourse, newspaper, Nigeria, reference

Procedia PDF Downloads 136
2060 Destination Port Detection For Vessels: An Analytic Tool For Optimizing Port Authorities Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities in congested ports face many challenges in allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel's master, based on many factors such as weather, wave conditions, and changes of priorities. Access to a tool that leverages AIS messages to monitor vessels' movements and accurately predict their next destination port promotes an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and utilizes some of the best trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five different similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area), and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
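
The best-performing measure, DTW, has a standard dynamic-programming form; a minimal sketch with absolute-difference cost:

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D trajectories (absolute-difference cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# a recent track aligns with a stretched copy of a reference route at zero cost
reference = [1, 2, 3]
track = [1, 2, 2, 3]
```

In an RRoT-style pipeline, a vessel's recent track would be compared against each candidate port's reference route and assigned to the one with the smallest DTW distance; real AIS tracks are 2-D (latitude/longitude), so the scalar cost would become a point distance.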

Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization

Procedia PDF Downloads 96
2059 Isolation and Characterization of an Ethanol Resistant Bacterium from Sap of Saccharum officinarum for Efficient Fermentation

Authors: Rukshika S Hewawasam, Sisira K. Weliwegamage, Sanath Rajapakse, Subramanium Sotheeswaran

Abstract:

Biofuel is one of the emerging industries around the world owing to the crisis in petroleum fuel. Fermentation is a cost-effective and eco-friendly process for biofuel production, so advances in microbes, substrates, and fermentation technologies drive new developments in the field. One major problem in microbial ethanol fermentation is the low resistance of conventional microorganisms to high ethanol concentrations, which ultimately decreases the efficiency of the process. In the present investigation, an ethanol-resistant bacterium was isolated from the sap of Saccharum officinarum (sugar cane). The optimal culture conditions (pH, temperature, incubation period) and the microbiological, morphological, and biochemical characteristics, ethanol tolerance, sugar tolerance, and growth curve of the isolate were investigated. The isolated microorganism tolerated an ethanol concentration of 18% (v/v) and a glucose concentration of 40% (v/v) in the medium. Biochemical characterization revealed the isolate to be Gram-negative and non-motile, with negative results for the indole, methyl red, Voges-Proskauer, citrate utilization, and urease tests, and a positive result for the oxidase test. The bacterium can utilize sucrose, glucose, fructose, maltose, dextrose, arabinose, raffinose, lactose, and saccharose, a significant feature for effective fermentation. The fermentation process was carried out in glucose medium under optimum conditions (pH 4, temperature 30˚C, incubation for 72 hours). Maximum ethanol production was recorded as 12.0±0.6% (v/v), and methanol was not detected in the final product of the fermentation process. Owing to its high ethanol tolerance, this bacterium is especially useful in biofuel production and can be used to enhance the fermentation process over conventional microorganisms. Investigations are currently being conducted to establish the identity of the bacterium.

Keywords: bacterium, bio-fuel, ethanol tolerance, fermentation

Procedia PDF Downloads 315
2058 Dynamic Effects of Energy Consumption, Economic Growth, International Trade and Urbanization on Environmental Degradation in Nigeria

Authors: Abdulkarim Yusuf

Abstract:

Motivation: Sustaining economic growth while regulating or lowering greenhouse gas emissions has been a crucial but difficult goal for governments and policymakers in Nigeria in recent years, and it calls for switching to a low- or zero-carbon production system. The current study was motivated by the lack of in-depth empirical work on the environmental impact of socioeconomic variables in Nigeria and by a number of unresolved issues in earlier research. Objective: This study fills an important empirical gap by testing for an Environmental Kuznets Curve (EKC) and investigating the long- and short-run dynamic impact of socioeconomic variables on ecological sustainability in Nigeria. Data and method: Annual time series data covering 1980 to 2020 were analysed with the Autoregressive Distributed Lag (ARDL) technique in the presence of structural breaks. Results: The empirical findings support the EKC hypothesis for Nigeria in both the long and short run. Energy consumption and total imports exacerbate environmental deterioration in both horizons, whereas total exports improve environmental quality. Financial development, which contributed to a conspicuous decrease in environmental destruction in the long run, escalated it in the short run. In contrast, urbanization caused a significant increase in environmental damage in the long run but reduced biodiversity loss in the short run. Implications: The government, policymakers, and all energy stakeholders should take additional measures to diversify the energy mix toward renewable sources that emit less carbon, in order to promote efficiency in Nigeria's production processes and lower carbon emissions. They should also revise and strengthen environmental policies to promote the production and trade of environmentally friendly goods. With affordable, dependable, and sustainable energy use supporting higher productivity and inclusive growth, Nigeria will be able to achieve its long-term development goals of good health and wellbeing.
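The EKC hypothesis tested in this study posits an inverted-U relationship between income and emissions, typically modelled as a quadratic in log income. A minimal sketch of the implied emissions turning point, using purely hypothetical coefficients (not the paper's ARDL estimates):

```python
import math

def ekc_turning_point(b1: float, b2: float) -> float:
    """Income level where emissions peak under a quadratic EKC:
    ln(E) = b0 + b1*ln(Y) + b2*ln(Y)^2, with b2 < 0 giving an inverted U.
    Setting d ln(E)/d ln(Y) = b1 + 2*b2*ln(Y) = 0 gives Y* = exp(-b1/(2*b2))."""
    if b2 >= 0:
        raise ValueError("an inverted-U shape requires b2 < 0")
    return math.exp(-b1 / (2 * b2))

# Hypothetical coefficients for illustration only (not the paper's estimates):
print(round(ekc_turning_point(2.0, -0.25), 3))  # income index at the emissions peak
```

Past this turning point, further income growth is associated with falling emissions, which is what the long-run EKC evidence for Nigeria implies.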

Keywords: economic growth, energy consumption, environmental degradation, environmental Kuznets curve, urbanization, Nigeria

Procedia PDF Downloads 33
2057 The Relationship between Human Neutrophil Elastase Levels and Acute Respiratory Distress Syndrome in Patients with Thoracic Trauma

Authors: Wahyu Purnama Putra, Artono Isharanto

Abstract:

Thoracic trauma is injury to the thoracic wall or intrathoracic organs caused by either blunt or penetrating force. It often impairs ventilation-perfusion matching through damage to the lung parenchyma, compromising tissue oxygenation, which is one of the causes of acute respiratory distress syndrome (ARDS). These changes are driven by the release of pro-inflammatory mediators, plasmatic proteins, and proteases into the alveolar space, together with ongoing edema and oxidative products that ultimately result in severe inhibition of the surfactant system. This study aims to predict the incidence of ARDS from human neutrophil elastase levels by examining the relationship between plasma elastase levels and the incidence of ARDS in thoracic trauma patients in Malang. It is an observational cohort study; data were analysed with the Pearson correlation test and the receiver operating characteristic (ROC) curve. A significant negative relationship (r = -0.988, p < 0.001) was found between elastase levels and BGA-3. Elastase levels around 23.79 ± 3.95 were associated with mild ARDS, levels around 57.68 ± 18.55 with moderate ARDS, and levels around 107.85 ± 5.04 with severe ARDS. Neutrophil elastase levels thus correlate with the severity of ARDS.
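The Pearson correlation test used in the study can be sketched in a few lines of pure Python; the series below are invented for illustration (a rising elastase level against a falling oxygenation index) and are not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    assert n == len(y) and n > 1
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative values only: elastase rising while an oxygenation index falls
elastase = [20.0, 45.0, 70.0, 95.0, 110.0]
oxygenation = [440.0, 365.0, 290.0, 215.0, 170.0]
print(pearson_r(elastase, oxygenation))  # → -1.0 (perfectly linear toy data)
```

A value near -1, as reported in the study (r = -0.988), indicates a strong inverse linear relationship between the two measurements.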

Keywords: ARDS, human neutrophil elastase, severity, thoracic trauma

Procedia PDF Downloads 118
2056 Capturing the Stress States in Video Conferences by Photoplethysmographic Pulse Detection

Authors: Jarek Krajewski, David Daxberger

Abstract:

We propose a stress detection method based on an RGB camera using heart rate detection, also known as Photoplethysmography Imaging (PPGI). The technique measures the small changes in skin colour caused by blood perfusion. A stationary lab setting with simulated video conferences was chosen, using constant light conditions and a sampling rate of 30 fps; the ground-truth heart rate was recorded with a common PPG system. Pulse peak detection relies on a machine learning approach that applies brute-force feature extraction to predict heart rate pulses. Statistical analysis showed good agreement (correlation r = .79, p < 0.05) between the reference heart rate system and the proposed method. Based on these findings, the proposed method could provide a reliable, low-cost, and contactless way of measuring HR parameters in daily-life environments.
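Converting a PPGI colour trace into a heart rate ultimately reduces to counting pulse peaks over time. A minimal sketch at the study's 30 fps sampling rate, run on a synthetic signal (the paper's actual detector is a learned brute-force feature model, not this simple rule):

```python
import math

FPS = 30  # sampling rate used in the study

def estimate_bpm(signal, fps=FPS):
    """Count local maxima in a pulse waveform and convert to beats per minute."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    duration_s = len(signal) / fps
    return 60.0 * len(peaks) / duration_s

# Synthetic 10 s skin-colour trace pulsing at 1.2 Hz (72 bpm), illustration only
trace = [math.sin(2 * math.pi * 1.2 * n / FPS) for n in range(10 * FPS)]
print(estimate_bpm(trace))  # → 72.0
```

Real PPGI traces are far noisier than this clean sinusoid, which is why the paper resorts to a learned peak detector rather than naive local-maximum counting.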

Keywords: heart rate, PPGI, machine learning, brute force feature extraction

Procedia PDF Downloads 107
2055 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adaptation of Stacking to multi-class problems that combines ensembles within the ensemble itself. Training proceeds as in Stacking but with three levels: the final decision-maker (level 2) is trained on the combined outputs of the tree-based pair of meta-classifiers (level 1) from Bayesian families, which are in turn trained on pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles feeding the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, across three factors: (a) dataset, (b) experiment, and (c) level. A three-way ANOVA was run for each performance measure over 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve showed similar results, with a double interaction between level and experiment as well as with the dataset factor. It was concluded that level 2 performed above the other levels on average and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
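The area under the ROC curve used as a performance measure here can be computed directly as the probability that a randomly chosen positive is scored above a randomly chosen negative (ties counting one half); a minimal sketch:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores for illustration: one positive is ranked below one negative
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

For the multi-class setting studied in the paper, this binary AUC is typically averaged over one-vs-rest or pairwise class comparisons.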

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 251
2054 Machine Learning for Feature Selection and Classification of Systemic Lupus Erythematosus

Authors: H. Zidoum, A. AlShareedah, S. Al Sawafi, A. Al-Ansari, B. Al Lawati

Abstract:

Systemic lupus erythematosus (SLE) is an autoimmune disease with genetic and environmental components. SLE is characterized by a wide variability of clinical manifestations and a course frequently subject to unpredictable flares. Despite recent progress in classification tools, the early diagnosis of SLE is still an unmet need for many patients. This study proposes an interpretable disease classification model that combines the efficient, high predictive performance of CatBoost with the model-agnostic interpretation tools of SHapley Additive exPlanations (SHAP). The CatBoost model was trained on a local cohort of 219 Omani patients with SLE as well as other control diseases. The SHAP library was used to generate individual explanations of the model's decisions and to rank clinical features by contribution. Overall, we achieved an AUC score of 0.945 and an F1-score of 0.92, and identified four clinical features (alopecia, renal disorders, cutaneous lupus, and hemolytic anemia), along with the patient's age, that were shown to contribute most to the prediction.
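The SHAP contributions used to rank features here are Shapley values computed over feature coalitions. A brute-force sketch over a toy additive model, with hypothetical feature names and weights (not the paper's fitted CatBoost model, which the SHAP library explains far more efficiently than exhaustive enumeration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution to
    value(coalition) over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: v / len(orderings) for p, v in phi.items()}

# Toy additive 'model output' over hypothetical clinical features: each
# present feature adds a fixed weight, so its Shapley value equals its weight.
weights = {"alopecia": 0.30, "renal_disorder": 0.45, "age": 0.10}
model = lambda coalition: sum(weights[f] for f in coalition)
print(shapley_values(list(weights), model))
```

For an additive model the Shapley value of each feature recovers its weight exactly; gradient-boosted trees are not additive, which is why SHAP's tree-specific algorithms are needed in practice.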

Keywords: feature selection, classification, systemic lupus erythematosus, model interpretation, SHAP, Catboost

Procedia PDF Downloads 62
2053 Vehicle Maneuverability on Horizontal Curves on Hilly Terrain: A Study on Shillong Highway

Authors: Surendra Choudhary, Sapan Tiwari

Abstract:

The driver has two fundamental duties: (i) controlling the vehicle's position and speed along the longitudinal direction of movement and (ii) maintaining its lateral position within the roadway width. These duties are interdependent and are jointly referred to as two-dimensional driver behavior. One of the main problems in driver behavior modeling is identifying parameters that describe representative driving conduct and vehicle maneuvers under distinct traffic circumstances; to date, there is no well-accepted theory that comprehensively models 2-D (longitudinal and lateral) driver conduct. The primary objective of this research is to explore vehicles' lateral and longitudinal behavior on horizontal curves under heterogeneous traffic conditions, as well as the effect of road geometry on dynamic traffic parameters, i.e., vehicle velocity and lateral placement. The study thoroughly assesses dynamic vehicle parameters (speed, lateral acceleration, and turning radius) and horizontal-curve parameters (radius of curvature and pavement friction), together with their interrelationships. Dynamic parameters for the various types of drivers were gathered with a high-precision VBOX GPS-based tool, and the relationship between dynamic vehicle parameters and curve geometry was established after removing noise from the GPS trajectories. The major finding is that vehicles maneuver at speeds, accelerations, and lateral deviations above the design limits on the studied highway curves, which can become lethal if the weather changes from dry to wet.
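The dynamic quantities assessed here are linked by the standard point-mass curve formulas; a minimal sketch with illustrative numbers (not the Shillong field data):

```python
def lateral_acceleration(speed_mps: float, radius_m: float) -> float:
    """Steady-state lateral acceleration on a horizontal curve: a = v^2 / R."""
    return speed_mps ** 2 / radius_m

def max_safe_speed(radius_m: float, side_friction: float, g: float = 9.81) -> float:
    """Speed at which demanded lateral acceleration reaches f*g on a flat
    (unsuperelevated) curve: v = sqrt(f * g * R). Illustrative values only."""
    return (side_friction * g * radius_m) ** 0.5

print(lateral_acceleration(20.0, 100.0))      # → 4.0 m/s^2
print(round(max_safe_speed(100.0, 0.15), 2))  # m/s on a hypothetical 100 m curve
```

The drop in available side friction on a wet pavement lowers the safe speed on the same curve, which is why the paper flags wet-weather conditions as potentially lethal when drivers exceed design limits.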

Keywords: geometry, maneuverability, terrain, trajectory, VBOX

Procedia PDF Downloads 128
2052 Feature Extraction Based on Contourlet Transform and Log Gabor Filter for Detection of Ulcers in Wireless Capsule Endoscopy

Authors: Nimisha Elsa Koshy, Varun P. Gopi, V. I. Thajudin Ahamed

Abstract:

Complete visualization of the GastroIntestinal (GI) tract is not possible with conventional endoscopic exams. Wireless Capsule Endoscopy (WCE) is a low-risk, painless, noninvasive procedure for diagnosing diseases such as bleeding, polyps, ulcers, and Crohn's disease within the human digestive tract, especially the small intestine, which was unreachable by traditional endoscopic methods. However, analysis of the massive number of WCE images is tedious and time-consuming for physicians, so researchers have developed software methods to detect these diseases automatically and thereby improve the effectiveness of WCE. In this paper, a novel textural feature extraction method is proposed, based on the Contourlet transform and the Log Gabor filter, to distinguish ulcer regions from normal regions. The results show that the proposed method performs well, with a high accuracy rate of 94.16% using a Support Vector Machine (SVM) classifier in HSV colour space.
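The radial frequency response of a log-Gabor filter has a standard closed form, Gaussian on a log-frequency axis; a minimal sketch (the filter bank construction and the Contourlet stage of the proposed method are not reproduced here):

```python
import math

def log_gabor_radial(f: float, f0: float, sigma_ratio: float = 0.55) -> float:
    """Radial frequency response of a log-Gabor filter:
    G(f) = exp(-(ln(f/f0))^2 / (2 * ln(sigma_ratio)^2)), zero at DC (f = 0)."""
    if f <= 0.0:
        return 0.0
    return math.exp(-(math.log(f / f0) ** 2) / (2 * math.log(sigma_ratio) ** 2))

print(log_gabor_radial(0.25, 0.25))  # → 1.0 at the centre frequency f0
print(log_gabor_radial(0.0, 0.25))   # → 0.0, no DC component
```

The absence of a DC component is the property that makes log-Gabor filters attractive for texture description: the response encodes local frequency structure rather than mean brightness.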

Keywords: contourlet transform, log gabor filter, ulcer, wireless capsule endoscopy

Procedia PDF Downloads 519
2051 Hybrid Deep Learning and FAST-BRISK 3D Object Detection Technique for Bin-Picking Application

Authors: Thanakrit Taweesoontorn, Sarucha Yanyong, Poom Konghuayrob

Abstract:

Robotic arms have gained popularity in various industries due to their accuracy and efficiency. This research proposes a method for bin-picking tasks with a cobot, combining a YOLOv5 CNN model for object detection and pose estimation with traditional feature detection (FAST), feature description (BRISK), and matching algorithms. By integrating these algorithms and using a small-scale depth sensor camera to capture depth and color images, the system achieves real-time object detection and accurate pose estimation, enabling the robotic arm to pick objects correctly in both position and orientation. The proposed method is implemented within the ROS framework, providing a seamless platform for robotic control and integration. This integration of robotics, cameras, and AI technology contributes to the development of industrial robotics, opening up new possibilities for automating challenging tasks and improving overall operational efficiency.
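BRISK produces binary descriptors that are matched by Hamming distance. A brute-force sketch with toy 8-bit descriptors and a Lowe-style ratio test (real BRISK descriptors are 512-bit, and in practice an OpenCV-style matcher would be used):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed into ints."""
    return bin(a ^ b).count("1")

def match(query, train, ratio: float = 0.8):
    """Brute-force nearest-neighbour matching with a ratio test: keep a match
    only if the best distance is clearly below the second best."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches

# Toy 8-bit 'descriptors', illustration only
query = [0b10110010, 0b01001101]
train = [0b10110011, 0b11110000, 0b01001100]
print(match(query, train))  # → [(0, 0), (1, 2)]
```

Hamming distance on packed bits is a single XOR plus popcount, which is why binary descriptors such as BRISK are fast enough for real-time matching on a cobot.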

Keywords: robotic vision, image processing, applications of robotics, artificial intelligence

Procedia PDF Downloads 68
2050 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is more risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. After data preprocessing, we tested various vectorization methods such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature, we built benchmark Naive Bayes classification models without sentiment analysis and then reproduced the same models with the feature added, evaluating precision and accuracy. The initial Bernoulli model achieved 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels achieved 90.4% precision with accuracy unchanged at 75.2%. Our results show that adding sentiment analysis did not improve precision by a wide margin and gave no evidence of improvement in accuracy; the largest gain was a 1.9% improvement in precision with the Complement model. Future work could replicate the experiment with a deep learning neural network in place of Naive Bayes.
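The Naive Bayes models compared above follow the standard formulation: class priors times smoothed per-class word likelihoods. A from-scratch multinomial sketch with an invented toy corpus (the study's dataset, vectorizers, and Bernoulli/Complement variants differ):

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing.
    docs: list of (label, tokens) pairs; returns a predict function."""
    vocab = {w for _, toks in docs for w in toks}
    labels = {y for y, _ in docs}
    prior = {y: math.log(sum(1 for l, _ in docs if l == y) / len(docs))
             for y in labels}
    counts = {y: Counter(w for l, toks in docs if l == y for w in toks)
              for y in labels}
    total = {y: sum(counts[y].values()) for y in labels}

    def predict(tokens):
        def score(y):  # log prior + sum of smoothed log likelihoods
            return prior[y] + sum(
                math.log((counts[y][w] + 1) / (total[y] + len(vocab)))
                for w in tokens if w in vocab)
        return max(labels, key=score)
    return predict

# Tiny hypothetical headline corpus, illustration only
docs = [("fake", ["miracle", "cure", "covid"]),
        ("fake", ["secret", "cure", "revealed"]),
        ("real", ["study", "finds", "vaccine", "effective"]),
        ("real", ["trial", "results", "published"])]
predict = train_nb(docs)
print(predict(["miracle", "cure"]))  # → fake
```

A sentiment feature, as tested in the paper, would simply add one more token (or column) per headline to this bag-of-words representation.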

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 76
2049 Algorithm Research on Traffic Sign Detection Based on Improved EfficientDet

Authors: Ma Lei-Lei, Zhou You

Abstract:

To address the low detection accuracy of deep learning algorithms in traffic sign detection, this paper proposes an improved EfficientDet-based traffic sign detection algorithm. Multi-head self-attention is introduced in the minimum-resolution layer of the EfficientDet backbone to aggregate local and global depth information effectively. The study also proposes an improved feature fusion pyramid with additional vertical cross-layer connections, which improves model performance while adding little complexity, and replaces the original Smooth L1 regression loss with Balanced L1 Loss to address the balance problem in the loss function. Experimental results show that the proposed algorithm is well suited to traffic sign detection: compared with other models, the improved EfficientDet has the best detection accuracy, and although its inference speed is not dominant, it still meets real-time requirements.
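For reference, the Smooth L1 baseline that the paper replaces with Balanced L1 has a standard piecewise form: quadratic near zero, linear for large errors. A minimal sketch:

```python
def smooth_l1(x: float, beta: float = 1.0) -> float:
    """Smooth L1 (Huber-style) regression loss: quadratic near zero for
    stable gradients on small errors, linear on large errors so outliers
    do not dominate the gradient."""
    ax = abs(x)
    return 0.5 * ax * ax / beta if ax < beta else ax - 0.5 * beta

print(smooth_l1(0.5))  # → 0.125 (quadratic branch)
print(smooth_l1(2.0))  # → 1.5 (linear branch)
```

Balanced L1 keeps this piecewise structure but reshapes the inlier branch to promote the gradient contribution of accurate samples, which is the "balance" the paper refers to.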

Keywords: convolutional neural network, transformer, feature pyramid networks, loss function

Procedia PDF Downloads 78
2048 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural networks and Particle Swarm Optimization for software reliability prediction. We first explain how a compound function applied in a neural network yields a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, Particle Swarm Optimization is applied to train the proposed model on failure test data sets; the model thus becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test different inertia weights in the particle and velocity updates and report results for the best-performing weight, comparing against personal-best-oriented PSO (pPSO), which helps choose the local best within a particle's neighborhood. The applicability of the proposed model is demonstrated on a real failure test data set, and the results obtained from experiments show that the proposed model has fairly accurate prediction capability for software reliability.
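The PSO training loop follows the standard inertia-weight update, pulling each particle toward its personal best and the swarm's global best. A minimal sketch minimising a stand-in sphere objective (the paper instead minimises the FLGC model's fitting error on failure data, and uses its own inertia-weight settings):

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimal particle swarm optimisation with inertia weight w:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x = x + v."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(x[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(x[i]), val
    return gbest, gbest_val

# Sphere function as a stand-in objective, minimum 0 at the origin
best, best_val = pso(lambda p: sum(t * t for t in p))
print(best_val)
```

Replacing the sphere objective with the sum of squared residuals between the FLGC curve and observed cumulative failures turns this loop into the NPSO training step the paper describes.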

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 326
2047 Critical Study on the Sensitivity of Corrosion Fatigue Crack Growth Rate to Cyclic Waveform and Microstructure in Marine Steel

Authors: V. C. Igwemezie, A. N. Mehmanparast

Abstract:

The primary focus of this work is to understand how variations in microstructure and cyclic waveform affect corrosion fatigue crack growth (CFCG) in steel, especially in the Paris region of the da/dN vs. ΔK curve. This matters because it provides fundamental information for the modelling, design, selection, and use of steels in various marine engineering applications. The authors' corrosion fatigue test data on normalized and thermomechanical control process (TMCP) ferritic-pearlitic steels were compared with several studies of different microstructures in the literature. The microstructures of these steels are radically different, yet comparative studies of the effect of microstructure on fatigue crack growth resistance in these materials are very scarce and, where available, limited to a few cases. For purposes of engineering application, the results show little dependence of the fatigue crack growth rate (FCGR) on yield strength, tensile strength, ductility, frequency, or stress ratio in the range 0.1 – 0.7; the steel microstructure appears to be a major factor in determining how fast fatigue cracks propagate over the entire sigmoidal da/dN vs. ΔK curve. The study also shows that the sine wave is the most damaging fatigue waveform for ferritic-pearlitic steels, suggesting that testing under a sine waveform would be a conservative approach for the design of engineering structures, regardless of the service waveform.
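Crack growth in the Paris region follows the power law da/dN = C(ΔK)^m; a one-line sketch with hypothetical constants (not the paper's fitted values):

```python
def paris_rate(delta_k: float, C: float, m: float) -> float:
    """Paris-region fatigue crack growth rate: da/dN = C * (ΔK)^m."""
    return C * delta_k ** m

# Hypothetical Paris constants for illustration only:
# C in mm/cycle/(MPa·√m)^m, m dimensionless, ΔK = 20 MPa·√m
print(paris_rate(20.0, 1.0e-9, 3.0))  # ≈ 8e-06 mm/cycle
```

Because the exponent m typically lies near 3 for ferritic-pearlitic steels, small shifts in the fitted C and m between microstructures translate into large differences in predicted crack growth life, which is why the paper's comparison across microstructures is of practical interest.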

Keywords: BS7910, corrosion-fatigue crack growth rate, cyclic waveform, microstructure, steel

Procedia PDF Downloads 131