Search results for: preposition error detection
2890 Genetic Diversity of Norovirus Strains in Outpatient Children from Rural Communities of Vhembe District, South Africa, 2014-2015
Authors: Jean Pierre Kabue, Emma Meader, Afsatou Ndama Traore, Paul R. Hunter, Natasha Potgieter
Abstract:
Norovirus is now considered the most common cause of outbreaks of nonbacterial gastroenteritis. Limited data are available for Norovirus strains in Africa, especially in rural and peri-urban areas. Despite the excessive burden of diarrheal disease in developing countries, Norovirus infections have to date mostly been reported in developed countries. There is a need to investigate intensively the role of viral agents associated with diarrhea in different settings on the African continent. To determine the prevalence and genetic diversity of Norovirus strains circulating in the rural communities of the Limpopo Province, South Africa, and to investigate the genetic relationship between Norovirus strains, a cross-sectional study was performed on human stools collected from rural communities. Between July 2014 and April 2015, outpatient children under 5 years of age from rural communities of Vhembe District, South Africa, were enrolled in the study. A total of 303 stool specimens were collected from those with diarrhea (n=253) and without diarrhea (n=50). NoVs were identified using real-time one-step RT-PCR. Partial sequence analyses were performed to genotype the strains. Phylogenetic analyses were performed to compare identified NoV genotypes to worldwide circulating strains. The Norovirus detection rate was 41.1% (104/253) in children with diarrhea. There was no significant difference (OR=1.24; 95% CI 0.66-2.33) in Norovirus detection between symptomatic and asymptomatic children. Comparison of the median CT values for NoV in children with and without diarrhea revealed a statistically significant difference in estimated GII viral load between the two groups, with a much higher viral burden in children with diarrhea. To our knowledge, this is the first study reporting on the differences in estimated viral load of GII and GI NoV-positive cases and controls. GII.Pe (n=9) was the predominant genotype, followed by the suspected recombinant GII.Pe/GII.4 Sydney 2012 (n=8) and GII.4 Sydney 2012 variants (n=7). Two unassigned GII.4 variants and an unusual RdRp genotype, GII.P15, were found. Of note, the rare GII.P15 identified in this study has a common ancestor with a GII.P15 strain from Japan previously reported as a GII/untypeable recombinant strain implicated in a gastroenteritis outbreak. To our knowledge, this is the first report of this unusual genotype on the African continent. Though not confirmed as predictive of diarrheal disease in this study, the high detection rate of NoV is an indication of the exposure of children from rural communities to enteric pathogens due to poor sanitation and hygiene practices. The results reveal that the difference between asymptomatic and symptomatic children with NoV may possibly be related to the NoV genogroups involved. The findings emphasize NoV genetic diversity and the predominance of GII.Pe/GII.4 Sydney 2012, indicative of increased NoV activity. An uncommon GII.P15 and two unassigned GII.4 variants were also identified in rural settings of the Vhembe District, South Africa. NoV surveillance is required to help inform investigations into NoV evolution and to support vaccine development programmes in Africa. Keywords: asymptomatic, common, outpatients, norovirus genetic diversity, sporadic gastroenteritis, South African rural communities, symptomatic
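As a rough illustration of the case-control comparison reported above, the following Python sketch computes an odds ratio with a Wald 95% confidence interval from a 2x2 table. The asymptomatic count of 18/50 positives is an assumption chosen only because it reproduces the reported OR of 1.24 (95% CI 0.66-2.33); it is not a figure taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = positive cases, b = negative cases, c = positive controls, d = negative controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# NoV-positive vs NoV-negative among 253 symptomatic children (104 positive, from the abstract)
# and among 50 asymptomatic controls (18 positive, a hypothetical figure used for illustration).
or_, lower, upper = odds_ratio_ci(104, 253 - 104, 18, 50 - 18)
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```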
Procedia PDF Downloads 200
2889 Image Compression Using Block Power Method for SVD Decomposition
Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed
Abstract:
In recent decades, the rapid growth in the development of and demand for multimedia products has contributed to a shortage of network bandwidth and device storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to save on data transfer and storage. In this context, this paper addresses the problem of lossless and near-lossless compression of images. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has better compression performance compared with existing compression algorithms that use Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, yielding better image compression in a short execution time. Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless
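To illustrate the underlying idea of SVD-based image compression, the short sketch below keeps only the k leading singular triplets of a grayscale image. It uses NumPy's built-in SVD rather than the Block SVD Power Method proposed in the paper, and the image and rank are arbitrary placeholders.

```python
import numpy as np

def svd_compress(image, k):
    """Rank-k approximation of a grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]
    # Storage drops from m*n values to roughly k*(m + n + 1) values.
    return np.clip(approx, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128))       # stand-in for a real test image
recon = svd_compress(img, k=20)
mse = np.mean((img - recon) ** 2)
print(f"rank-20 reconstruction MSE: {mse:.2f}")
```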
Procedia PDF Downloads 391
2888 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has been occurring on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifier, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of the meat products - color, aroma, flavor, consistency, and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly due to their wide availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis. qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses the fact that most molecular biological work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsification of meat products by the addition of components derived from pigs is all the greater in that it concerns not just the economic aspect but above all the religious and social aspects. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107). Keywords: food fraud, halal food, pork, qPCR
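As an illustration of absolute quantification by qPCR, the following sketch fits a standard curve (Ct versus log10 copy number) and estimates the copy number of an unknown sample; the calibration values are invented for the example and are not taken from the study.

```python
import numpy as np

# Calibration standards: known target copy numbers and measured Ct values
# (illustrative numbers only).
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
ct = np.array([33.1, 29.7, 26.4, 23.0, 19.6])

# qPCR standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1          # amplification efficiency implied by the slope

def quantify(ct_unknown):
    """Estimate the copy number of an unknown sample from its Ct value."""
    return 10 ** ((ct_unknown - intercept) / slope)

print(f"efficiency: {efficiency:.1%}, unknown sample (Ct 27.5): {quantify(27.5):.0f} copies")
```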
Procedia PDF Downloads 249
2887 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform that automatically extracts the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In this way, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model toward over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin. In effect, it acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels; thus, we shorten correct feature vectors and enlarge false prediction tensors, meaning that we assign more weight to classes that lie close to one another (namely, 'hard labels to learn'). By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for non-convex problems. Our optimizer works with an alternative gradient-updating procedure using an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets, with 93.30% on FER-2013, a 16% improvement compared to the first rank after 10 years, reaching 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, and the network is shown to provide top performance relative to other networks, which require much larger training datasets. Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
Procedia PDF Downloads 78
2886 Upon One Smoothing Problem in Project Management
Authors: Dimitri Golenko-Ginzburg
Abstract:
A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required for the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pre-given time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm. Keywords: resource smoothing problem, CPM network, look-over algorithm, lexicographical order, approximate algorithm, accuracy estimate
Procedia PDF Downloads 306
2885 Fuzzy Logic Based Sliding Mode Controller for a New Soft Switching Boost Converter
Authors: Azam Salimi, Majid Delshad
Abstract:
This paper presents a modified design of a sliding mode controller based on fuzzy logic for a new ZVT high step-up DC-DC converter. Here, a proportional-integral (PI)-type current mode control is employed, and a sliding mode controller is designed utilizing a fuzzy algorithm. The sliding mode controller guarantees robustness against all variations, and fuzzy logic helps to reduce the chattering phenomenon caused by the sliding controller; in this way, efficiency increases while error and voltage and current ripples decrease. The proposed system is simulated using MATLAB/SIMULINK. The model is tested under variations of the input and reference voltages, and it was found to perform better than conventional sliding mode controllers. Keywords: switching mode power supplies, DC-DC converters, sliding mode control, robustness, fuzzy control, current mode control, non-linear behavior
Procedia PDF Downloads 544
2884 Evaluation of Antimicrobial Susceptibility Profile of Urinary Tract Infections in Massoud Medical Laboratory: 2018-2021
Authors: Ali Ghorbanipour
Abstract:
The aim of this study is to investigate the drug resistance pattern and the value of the MIC (minimum inhibitory concentration) method in order to reduce the impact of infectious diseases and slow the development of resistance. Method: The study was conducted on clinical specimens collected between 2018 and 2021. Identification of isolates and antibiotic susceptibility testing were performed using conventional biochemical tests. Antibiotic resistance was determined using Kirby-Bauer disk diffusion and MIC by the E-test method, compared with the microdilution plate (ELISA) method. Results were interpreted according to CLSI. Results: Out of 249,600 different clinical specimens, 18,720 different pathogenic bacteria were detected, an overall detection ratio of 7.7%. Among the pathogenic bacteria were Gram-negative bacteria (70%, n=13,000) and Gram-positive bacteria (30%, n=5,720). The medically relevant Gram-negative bacteria included a multitude of species such as E. coli, Klebsiella spp., Pseudomonas aeruginosa, Acinetobacter spp., and Enterobacter spp., and the Gram-positive bacteria Staphylococcus spp., Enterococcus spp., and Streptococcus spp. were isolated. Conclusion: Our results highlight that the resistance ratio among Gram-negative and Gram-positive bacteria across different infections is high. This suggests constant screening and follow-up programs for the detection of antibiotic resistance, and underlines the value of MIC drug susceptibility reporting, which provides a new way to use antibiotics to which resistance has emerged in combination with other antibiotics, or to dose accurately antibiotics that inhibit or kill bacteria. Evaluation of the role of incorrect medication in the expansion of resistance and of the side effects of antibiotic overuse are further goals. Ali Ghorbanipour is presently working as a supervisor in the microbiology department of Massoud Medical Laboratory, Iran. Earlier, he worked as head of the pulmonary infection department at Firoozgar Hospital, Iran. He received a master's degree in 2012 from Fergusson College. His prime research objective is a biologic wound dressing. To his credit, he has published 10 articles at various international congresses by presenting posters. Keywords: antimicrobial profile, MIC & MBC Method, microplate antimicrobial assay, E-test
Procedia PDF Downloads 136
2883 Experimental Study of Discharge with Sharp-Crested Weirs
Authors: E. Keramaris, V. Kanakoudis
Abstract:
In this study, the water flow in an open channel over a sharp-crested weir is investigated experimentally. For this reason, a series of laboratory experiments were performed in an open channel with a sharp-crested weir. The maximum head expected over the weir, the total upstream water height, and the downstream water height of the impact on the constant bed of the open channel were measured. The discharge was measured using a tank placed right after the open channel. In addition, the discharge and the upstream velocity were calculated using already known equations. The main finding is that the relative error for the majority of the experimental measurements is ±4%, meaning that the calculation of the discharge with a sharp-crested weir gives very good results compared to the numerical results from known equations. Keywords: sharp-crested weir, weir height, flow measurement, open channel flow
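For reference, the discharge over a rectangular sharp-crested weir is commonly computed from the measured head with the standard weir equation; the sketch below uses a typical discharge coefficient of 0.62 and example dimensions that are assumptions, not values from the experiments described above.

```python
import math

def weir_discharge(head, width, cd=0.62, g=9.81):
    """Discharge over a rectangular sharp-crested weir, Q = (2/3)*Cd*b*sqrt(2g)*H^1.5.

    head  -- measured head over the crest [m]
    width -- crest (channel) width [m]
    cd    -- discharge coefficient (typically about 0.6-0.65)
    """
    return (2.0 / 3.0) * cd * width * math.sqrt(2.0 * g) * head ** 1.5

# Example: 8 cm head over a 0.30 m wide crest
q = weir_discharge(head=0.08, width=0.30)
print(f"Q = {q * 1000:.1f} L/s")
```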
Procedia PDF Downloads 141
2882 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to its perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times leading to their adoption in decision-making in diverse applications including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 77
2881 Simulation Model of Induction Heating in COMSOL Multiphysics
Authors: K. Djellabi, M. E. H. Latreche
Abstract:
The induction heating phenomenon depends on various factors, making the problem highly nonlinear. The mathematical analysis of this problem is in most cases very difficult and is reduced to simple cases. Further knowledge of induction heating systems is generated in production environments, but these trial-and-error procedures are long and expensive. Numerical models of the induction heating problem are another approach to reduce the above-mentioned drawbacks. This paper deals with a simulation model of the induction heating problem. The simulation model of an induction heating system is created in COMSOL Multiphysics. In this work, we present results of numerical simulations of the induction heating process in pieces of cylindrical shape, in an inductor with four coils. The modeling of the induction heating process was carried out with the software COMSOL Multiphysics Version 4.2a; for the study, we present the temperature charts. Keywords: induction heating, electromagnetic field, inductor, numerical simulation, finite element
Procedia PDF Downloads 319
2880 The Impact of Bitcoin on Stock Market Performance
Authors: Oliver Takawira, Thembi Hope
Abstract:
This study will analyse the relationship between Bitcoin price movements and the Johannesburg Stock Exchange (JSE). The aim is to determine whether Bitcoin price movements affect stock market performance. As cryptocurrencies continue to gain prominence as a safe asset during periods of economic distress, the question arises of whether Bitcoin's prosperity could affect investment in the stock market. To identify the existence of a short-run and long-run linear relationship, the study will apply the Autoregressive Distributed Lag (ARDL) bounds test and a Vector Error Correction Model (VECM) after testing the data for unit roots and cointegration using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests. The Non-Linear Autoregressive Distributed Lag (NARDL) model will then be used to check whether there is a non-linear relationship between Bitcoin prices and stock market prices. Keywords: bitcoin, stock market, interest rates, ARDL
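As a minimal sketch of the unit-root pretesting step, the snippet below runs the ADF test on two simulated random-walk series (stand-ins for log Bitcoin prices and a log JSE index) in levels and in first differences using statsmodels; it does not reproduce the ARDL bounds, VECM, or NARDL estimation, and the data are generated, not the study's sample.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Random-walk placeholders for log Bitcoin prices and a log JSE index.
rng = np.random.default_rng(1)
btc = pd.Series(np.cumsum(rng.normal(0, 0.05, 500)), name="log_btc")
jse = pd.Series(np.cumsum(rng.normal(0, 0.01, 500)), name="log_jse")

for series in (btc, jse):
    stat, pvalue, *_ = adfuller(series, autolag="AIC")                       # level
    stat_d, pvalue_d, *_ = adfuller(series.diff().dropna(), autolag="AIC")   # first difference
    print(f"{series.name}: ADF p={pvalue:.3f} in levels, p={pvalue_d:.3f} in first differences")
```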
Procedia PDF Downloads 112
2879 Improved Performance of Cooperative Scheme in the Cellular and Broadcasting System
Authors: Hyun-Jee Yang, Bit-Na Kwon, Yong-Jun Kim, Hyoung-Kyu Song
Abstract:
The cooperative transmission scheme combines the cellular system and the broadcasting system. In the conventional scheme, two cellular base stations (CBSs) communicating with a user at the cell edge use cooperative transmission. When the distance between the two CBSs and the user is large, the conventional scheme does not guarantee the quality of the communication because the channel condition is bad. Therefore, if the distance between the CBSs and a user is large, the performance of the conventional scheme decreases. The bad channel condition also degrades the performance. The proposed scheme uses two relays to communicate reliably with the CBSs when the channel condition between the CBSs and the user is poor. Using the relays in a high-attenuation environment yields advantages in both bit error rate (BER) and throughput performance. Keywords: cooperative communications, diversity gain, OFDM, interworking system
Procedia PDF Downloads 577
2878 Cubic Trigonometric B-Spline Approach to Numerical Solution of Wave Equation
Authors: Shazalina Mat Zin, Ahmad Abd. Majid, Ahmad Izani Md. Ismail, Muhammad Abbas
Abstract:
The generalized wave equation models various problems in science and engineering. In this paper, a new three-time-level implicit approach based on the cubic trigonometric B-spline for the approximate solution of the wave equation is developed. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Von Neumann stability analysis is used to analyze the proposed method. Two problems are discussed to exhibit the feasibility and capability of the method. The absolute errors and maximum error are computed to assess the performance of the proposed method. The results were found to be in good agreement with known solutions and with existing schemes in the literature. Keywords: collocation method, cubic trigonometric B-spline, finite difference, wave equation
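For orientation, the sketch below solves the one-dimensional wave equation with a three-time-level explicit finite-difference scheme; plain central differences in space stand in here for the paper's cubic trigonometric B-spline interpolation, and the test problem and grid are generic choices.

```python
import numpy as np

# u_tt = c^2 u_xx on [0, 1] with u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x), u_t(x, 0) = 0.
c, nx, nt = 1.0, 101, 400
dx = 1.0 / (nx - 1)
dt = 0.8 * dx / c                        # CFL number r = c*dt/dx = 0.8 <= 1
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, nx)
u_prev = np.sin(np.pi * x)               # time level 0
u = u_prev.copy()                        # time level 1 via a Taylor step (zero initial velocity)
u[1:-1] = u_prev[1:-1] + 0.5 * r2 * (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2])

for _ in range(nt - 1):                  # remaining levels use the three-time-level stencil
    u_next = np.zeros_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next

exact = np.sin(np.pi * x) * np.cos(np.pi * c * nt * dt)
print("maximum absolute error:", np.abs(u - exact).max())
```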
Procedia PDF Downloads 546
2877 ANFIS Approach for Locating Faults in Underground Cables
Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat
Abstract:
This paper presents a fault identification, classification and fault location estimation method based on Discrete Wavelet Transform and Adaptive Network Fuzzy Inference System (ANFIS) for medium voltage cable in the distribution system. Different faults and locations are simulated by ATP/EMTP, and then certain selected features of the wavelet transformed signals are used as an input for a training process on the ANFIS. Then an accurate fault classifier and locator algorithm was designed, trained and tested using current samples only. The results obtained from ANFIS output were compared with the real output. From the results, it was found that the percentage error between ANFIS output and real output is less than three percent. Hence, it can be concluded that the proposed technique is able to offer high accuracy in both of the fault classification and fault location.Keywords: ANFIS, fault location, underground cable, wavelet transform
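As a sketch of the wavelet feature-extraction step that precedes the ANFIS, the snippet below computes simple energy and spread statistics of the detail bands of a simulated fault current using PyWavelets; the feature set, wavelet, and test signal are illustrative assumptions and do not reproduce the features or the ANFIS of the paper.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_features(signal, wavelet="db4", level=4):
    """Energy and standard deviation of each detail band of a current signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    features = []
    for detail in coeffs[1:]:
        features.extend([np.sum(detail ** 2), np.std(detail)])
    return np.array(features)

# 50 Hz current with a high-frequency disturbance switched on halfway (a stand-in for a fault).
t = np.linspace(0, 0.1, 2000)
current = np.sin(2 * np.pi * 50 * t) + 0.3 * (t > 0.05) * np.sin(2 * np.pi * 800 * t)
print(dwt_features(current).round(3))
```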
Procedia PDF Downloads 518
2876 Empirical Mode Decomposition Based Denoising by Customized Thresholding
Authors: Wahiba Mohguen, Raïs El’hadi Bekka
Abstract:
This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and to improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. Performance was evaluated in terms of SNR in dB and Mean Square Error (MSE). Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding
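A minimal sketch of EMD-based denoising is given below using the PyEMD package (installed as EMD-signal); a plain soft threshold with the universal threshold, applied only to the first few noise-dominated IMFs, stands in for the customized thresholding function of the paper.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def emd_denoise(signal, n_noisy_imfs=3):
    """Decompose, soft-threshold the first few (noise-dominated) IMFs, then reconstruct."""
    imfs = EMD()(signal)
    out = np.zeros_like(signal)
    for k, imf in enumerate(imfs):
        if k < n_noisy_imfs:
            sigma = np.median(np.abs(imf)) / 0.6745            # robust noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
            imf = np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)   # soft thresholding
        out += imf
    return out

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = emd_denoise(noisy)
snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))
print(f"output SNR: {snr:.1f} dB")
```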
Procedia PDF Downloads 305
2875 The Fiscal-Monetary Policy and Economic Growth in Algeria: VECM Approach
Authors: K. Bokreta, D. Benanaya
Abstract:
The objective of this study is to examine the relative effectiveness of monetary and fiscal policy in Algeria using the econometric modelling techniques of cointegration and vector error correction modelling to analyse and draw policy inferences. The chosen fiscal policy variables are government expenditure and net taxes on products, while the effect of monetary policy is represented by the inflation rate and the official exchange rate. From the results, we find that in the long run, the impact of government expenditure on growth is positive, while the effect of taxes is negative. Additionally, we find that the inflation rate has little effect on GDP per capita, while the impact of the exchange rate is insignificant. We conclude that fiscal policy is more powerful than monetary policy in promoting economic growth in Algeria. Keywords: economic growth, monetary policy, fiscal policy, VECM
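As a sketch of the modelling step, the snippet below selects a cointegration rank with the Johansen trace test and fits a VECM in statsmodels; the data frame is filled with generated series that merely stand in for the Algerian macroeconomic series, and the lag order and deterministic terms are arbitrary choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Generated placeholders for GDP per capita, government expenditure, net taxes,
# inflation, and the exchange rate (60 observations).
rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(0.5, 1.0, 60))
data = pd.DataFrame({
    "gdp_pc":  trend + rng.normal(0, 0.5, 60),
    "gov_exp": 0.8 * trend + rng.normal(0, 0.5, 60),
    "net_tax": 0.5 * trend + rng.normal(0, 0.5, 60),
    "infl":    rng.normal(5, 1.0, 60),
    "exch":    np.cumsum(rng.normal(0, 1.0, 60)),
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=1)     # Johansen trace test
print("selected cointegration rank:", rank.rank)

res = VECM(data, k_ar_diff=1, coint_rank=max(rank.rank, 1), deterministic="ci").fit()
print("loading coefficients (alpha):")
print(np.round(res.alpha, 3))
```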
Procedia PDF Downloads 315
2874 Secure Optical Communication System Using Quantum Cryptography
Authors: Ehab AbdulRazzaq Hussein
Abstract:
Quantum cryptography (QC) is an emerging technology for secure key distribution with single-photon transmissions. In contrast to classical cryptographic schemes, the security of QC schemes is guaranteed by the fundamental laws of nature. Their security stems from the impossibility of distinguishing non-orthogonal quantum states with certainty. A potential eavesdropper introduces errors into the transmissions, which can later be discovered by the legitimate participants of the communication. In this paper, a modeling approach is proposed for the QC protocol BB84 using polarization coding. A single-photon system is assumed in the designed models; thus, Eve cannot use a beam-splitting strategy to eavesdrop on the quantum channel transmission. The only eavesdropping strategy available to Eve is the intercept/resend strategy. After quantum transmission of the QC protocol, the quantum bit error rate (QBER) is estimated and compared with a threshold value. If it is above this value, the procedure must be stopped and performed again later. Keywords: security, key distribution, cryptography, quantum protocols, Quantum Cryptography (QC), Quantum Key Distribution (QKD).
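A toy simulation of BB84 with an intercept/resend attacker is sketched below; it ignores channel noise and losses and simply checks that the sifted-key QBER rises to roughly 25% under the attack, which is the error signature the legitimate parties compare against the threshold.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# Alice: random bits encoded in random bases (0 = rectilinear, 1 = diagonal).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Eve's intercept/resend attack: measure in random bases and resend the result.
eve_bases = rng.integers(0, 2, n)
eve_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))

# Bob measures the resent photons in his own random bases.
bob_bases = rng.integers(0, 2, n)
bob_bits = np.where(bob_bases == eve_bases, eve_bits, rng.integers(0, 2, n))

# Sifting: keep only the positions where Alice's and Bob's bases coincide.
sift = alice_bases == bob_bases
qber = np.mean(alice_bits[sift] != bob_bits[sift])
print(f"sifted key length: {sift.sum()}, QBER: {qber:.3f}")   # about 0.25 under the attack
```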
Procedia PDF Downloads 412
2873 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²)> 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
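As a sketch of the regression protocol described above (random train/test splits repeated ten times and the correlation coefficient R as a score), the snippet below fits a support vector regressor on synthetic spectra; the data generator, kernel, and hyperparameters are assumptions for illustration, not the NIR measurements or tuned models of the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the NIR data: 200 spectra of 600 points whose intensities
# depend (noisily) on glucose concentrations taken in 20 mg/dl increments.
rng = np.random.default_rng(0)
glucose = rng.choice(np.arange(20, 420, 20), size=200).astype(float)
basis = rng.normal(0, 1, 600)
spectra = glucose[:, None] * basis[None, :] * 1e-3 + rng.normal(0, 0.05, (200, 600))

scores = []
for seed in range(10):   # the random split is repeated ten times, as in the protocol above
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, glucose, test_size=0.2, random_state=seed)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, gamma="scale"))
    model.fit(X_tr, y_tr)
    scores.append(np.corrcoef(y_te, model.predict(X_te))[0, 1])

print(f"mean correlation R over 10 splits: {np.mean(scores):.3f}")
```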
Procedia PDF Downloads 97
2872 Detection and Molecular Identification of Bacteria Forming Polyhydroxyalkanoate and Polyhydroxybutyrate Isolated from Soil in Saudi Arabia
Authors: Ali Bahkali, Rayan Yousef Booq, Mohammad Khiyami
Abstract:
Soil samples were collected from five different regions in the Kingdom of Saudi Arabia. Microbiological methods included dilution and pour-plate methods to isolate and purify bacteria from the soil. The ability of the isolates to form biopolymer was investigated on Petri dishes containing elements and substrate concentrations that stimulate biopolymer formation. The fluorescent stains Nile red and Nile blue were used to stain the bacterial cells forming biopolymers. In addition, Sudan black was used to detect biopolymers in bacterial cells. The isolates that formed biopolymers were identified based on their 16S rRNA gene sequence and their ability to grow and synthesize PHAs on mineral medium supplemented with 1% date molasses as the only carbon source under nitrogen limitation. During the study, 293 bacterial isolates were obtained and examined. In the initial survey on the Petri dishes, 84 isolates showed the ability to form biopolymers. These bacterial colonies developed a pink color due to accumulation of the biopolymers in the cells. Twenty-three isolates were able to grow on date molasses, three strains of which showed the ability to accumulate biopolymers. These strains included Bacillus sp., Ralstonia sp., and Microbacterium sp. They were detected by Nile blue A staining with fluorescence microscopy (OLYMPUS IX 51). Among the isolated strains, Ralstonia sp. was selected after its ability to grow on date molasses in the presence of a limited nitrogen source was confirmed. The optimum conditions for biopolymer formation by the isolated strains were investigated. The conditions studied included the best incubation duration (2 days), temperature (30°C), and pH (7-8). The maximum PHB production was obtained with 1% (v/v) date molasses when concentrations of 1, 2, 3, 4, and 5% in MSM were tested, and the best results were obtained when inoculating with a 1% inoculum (OD = 1). The most effective extraction method for PHA and PHB proved to be a 0.4% sodium hypochlorite solution, yielding a polymer quantity of 98.79% of the cell dry weight. The maximum PHB production was 1.79 g/L, recorded for Ralstonia sp. after 48 h, while R. eutropha ATCC 17697 produced 1.40 g/L after 48 h. Keywords: bacteria forming polyhydroxyalkanoate, detection, molecular, Saudi Arabia
Procedia PDF Downloads 350
2871 Behavioral and EEG Reactions in Native Turkic-Speaking Inhabitants of Siberia and Siberian Russians during Recognition of Syntactic Errors in Sentences in Native and Foreign Languages
Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandera G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas
Abstract:
The aim of the study is to compare behavioral and EEG reactions in Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians during the recognition of syntax errors in native and foreign languages. 63 healthy aboriginals of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants completed a linguistic task in which they had to find a syntax error in written sentences. Russian participants completed the task in Russian and in English. Tuvinian and Yakut participants completed the task in Russian, English, and Tuvinian or Yakut, respectively. EEGs were recorded while the tasks were being solved. For Russian participants, EEGs were recorded using 128 channels. The electrodes were placed according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and error recognition accuracy were used as parameters of the behavioral reactions. Event-related potential (ERP) responses P300 and P600 were used as indicators of brain activity. The accuracy of solving tasks and the response speed in Russians were higher for Russian than for English. The P300 amplitudes in Russians were higher for English; the P600 amplitudes in the left temporal cortex were higher for the Russian language. Both Tuvinians and Yakuts showed no difference in task-solving accuracy between Russian and their respective national languages (Tuvinian and Yakut). However, the response speed was faster for tasks in Russian than for tasks in their national language. Tuvinians and Yakuts showed poor accuracy in English, but the response speed was higher for English than for Russian and the national languages. In Tuvinians, there were no differences in the P300 and P600 amplitudes or in cortical topology between Russian and Tuvinian, but there was a difference for English. In Yakuts, the P300 and P600 amplitudes and the ERP topology for Russian were the same as those of Russians for Russian. In Yakuts, brain reactions during Yakut and English comprehension did not differ and reflected foreign-language comprehension, while Russian comprehension reflected native-language comprehension. We found that the Tuvinians recognized both Russian and Tuvinian as native languages and English as a foreign language. The Yakuts recognized both English and Yakut as foreign languages and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use their national language as a spoken language, whereas they do not use it for writing. This may well be a reason why Yakuts perceive written Yakut as a foreign language while perceiving written Russian as their native language. Keywords: EEG, language comprehension, native and foreign languages, Siberian inhabitants
Procedia PDF Downloads 539
2870 Snapchat’s Scanning Feature
Authors: Reham Banwair, Lana Alshehri, Sara Hadrawi
Abstract:
The purpose of this project is to identify user satisfaction with the AI functions on Snapchat, in order to generate improvement proposals that allow their development within the app. To achieve this, a qualitative analysis was carried out through interviews with people who regularly use the application, revealing their satisfaction or dissatisfaction with the usefulness of the AI. In addition, the background of the company and its introduction of these algorithms were analyzed. Furthermore, the characteristics of the three main AI functions were explained: identifying songs, solving mathematical problems, and recognizing plants. As a result, it was found that 50% still do not know the characteristics of the AI, 50% believe song recognition is not always correct, 41.7% believe that math problem solving is usually accurate, and 91.7% believe the plant detection tool works properly. Keywords: artificial intelligence, scanning, Snapchat, machine learning
Procedia PDF Downloads 139
2869 A More Powerful Test Procedure for Multiple Hypothesis Testing
Authors: Shunpu Zhang
Abstract:
We propose a new multiple test called the minPOP test for testing multiple hypotheses simultaneously. Under the assumption that the test statistics are independent, we show that the minPOP test has higher global power than the existing multiple testing methods. We further propose a stepwise multiple-testing procedure based on the minPOP test and two of its modified versions (the Double Truncated and Left Truncated minPOP tests). We show that these multiple tests have strong control of the family-wise error rate (FWER). A method for finding the p-values of the proposed tests after adjusting for multiplicity is also developed. Simulation results show that the Double Truncated and Left Truncated minPOP tests, in general, have a higher number of rejections than the existing multiple testing procedures.Keywords: multiple test, single-step procedure, stepwise procedure, p-value for multiple testing
Procedia PDF Downloads 88
2868 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique
Authors: Ghada A. Alfattni
Abstract:
Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face. Much research has been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (Logistic Regression, Support Vector Machine, and Nearest Neighbour) were tested with multiple datasets; the same datasets were then oversampled using SMOTE and applied again to the three models to compare the differences in performance. Results of the experiments show that the highest number of nearest neighbours gives lower error rates. Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour
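A minimal sketch of the oversampling step with the imbalanced-learn implementation of SMOTE is given below; the dataset, classifier, and metric are arbitrary illustrative choices, and only the training split is resampled.

```python
from collections import Counter

from imblearn.over_sampling import SMOTE            # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Imbalanced toy dataset (about 5% minority class) standing in for the study's datasets.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split, then fit one of the compared classifiers.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X_tr, y_tr)
print("class counts before/after SMOTE:", Counter(y_tr), Counter(y_res))

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print("balanced accuracy on the untouched test split:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```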
Procedia PDF Downloads 355
2867 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks
Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf
Abstract:
Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous since different techniques may be applied that significantly increase the learning speed and enable controlled trade-offs with the resulting error at the same time. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable or which one yields the best results has not been discussed yet. This paper aims to redeem this situation by providing empirical results for different applications.Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks
Procedia PDF Downloads 177
2866 Design of Membership Ranges for Fuzzy Logic Control of Refrigeration Cycle Driven by a Variable Speed Compressor
Authors: Changho Han, Jaemin Lee, Li Hua, Seokkwon Jeong
Abstract:
The design of membership function ranges in fuzzy logic control (FLC) is presented for robust control of a variable speed refrigeration system (VSRS). The criterion values of the membership function ranges can be derived from static experimental data, and two different values are offered to compare control performance. Simulations and real experiments on the VSRS were conducted to verify the validity of the designed membership functions. The experimental results showed good agreement with the simulation results, and the error change rate and its sampling time strongly affected the control performance in the transient state of the VSRS. Keywords: variable speed refrigeration system, fuzzy logic control, membership function range, control performance
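To make the comparison of membership-range designs concrete, the sketch below evaluates triangular membership functions over two candidate range sets for a control-error input; the numeric ranges and labels are illustrative assumptions, not the criterion values derived from the static experimental data in the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

# Two candidate range designs for the error input of the FLC (illustrative values only).
narrow = {"NB": (-2.0, -1.0, 0.0), "ZE": (-1.0, 0.0, 1.0), "PB": (0.0, 1.0, 2.0)}
wide   = {"NB": (-6.0, -3.0, 0.0), "ZE": (-3.0, 0.0, 3.0), "PB": (0.0, 3.0, 6.0)}

error = 1.5  # current control error
for name, design in (("narrow", narrow), ("wide", wide)):
    degrees = {label: round(trimf(error, *abc), 3) for label, abc in design.items()}
    print(name, degrees)
```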
Procedia PDF Downloads 266
2865 Sensorless Controller of Induction Motor Using Backstepping Approach and Fuzzy MRAS
Authors: Ahmed Abbou
Abstract:
This paper presents a sensorless controller designed by the backstepping approach for the speed control of an induction motor. In this control strategy, we also combine the fuzzy MRAS method to estimate the rotor speed with a Luenberger-type observer to observe the rotor flux. The control model involves a division by the flux variable that may lead to unbounded solutions. Such a risk is avoided by basing the controller design on a Lyapunov function that accounts for the model singularity. On the other hand, this mixed method gives better results in sensorless operation, especially at low speed. The response time at 5% of the flux is 20 ms, while the error between the measured and estimated speed remains in the range of ±0.8 rad/s at rated operation and ±1.5 rad/s at low speed. Keywords: backstepping approach, fuzzy logic, induction motor, Luenberger observer, sensorless MRAS
Procedia PDF Downloads 376
2864 Mean Velocity Modeling of Open-Channel Flow with Submerged Vegetation
Authors: Mabrouka Morri, Amel Soualmia, Philippe Belleudy
Abstract:
Vegetation affects the mean and turbulent flow structure. It may increase flood risks and sediment transport. Therefore, it is important to develop analytical approaches for the bed shear stress on a vegetated bed, to predict the resistance caused by vegetation. In recent years, both experimental and numerical models have been developed to model the effects of submerged vegetation on open-channel flow. In this paper, different analytic models are compared and tested using deviation criteria, to explore their capacity for predicting the mean velocity and to select a suitable one to be applied to real river cases. The comparison between the data measured in a vegetated flume and the simulated mean velocities indicated good performance in the case of rigid vegetation, with the Huthoff model showing the best agreement, with a high coefficient of determination (R²=80%) and the smallest error in the prediction of the average velocities. Keywords: analytic models, comparison, mean velocity, vegetation
Procedia PDF Downloads 279
2863 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely acceptable approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods to estimate relative risks to represent conditional and marginal estimation approaches. We consider the log-binomial, generalised linear models (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM IWLS and robust SEs; log-binomial generalised estimation equations (GEE) and robust SEs; marginal standardisation and delta method SEs; and marginal standardisation and permutation test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (ranging from 0, -0.5, 1; on the log-scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimations may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than GLM IWLS log-binomial.Keywords: binary outcomes, statistical methods, clinical trials, simulation study
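As a sketch of one of the candidate estimators listed above, the snippet below fits a modified-Poisson model (a Poisson GLM with log link and robust HC0 standard errors) to a simulated two-arm trial to obtain an adjusted relative risk; the simulation settings are arbitrary and far simpler than the scenarios described in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated two-arm trial with a binary outcome and one prognostic covariate.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"treat": rng.integers(0, 2, n), "x": rng.normal(0, 1, n)})
p = np.clip(0.3 * np.exp(np.log(0.7) * df["treat"] + 0.4 * df["x"]), 0, 0.99)
df["y"] = rng.binomial(1, p)

# Modified Poisson regression: Poisson GLM with log link plus robust standard errors
# gives a covariate-adjusted relative risk for the binary outcome.
fit = smf.glm("y ~ treat + x", data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(fit.params["treat"])
ci = np.exp(fit.conf_int().loc["treat"])
print(f"adjusted RR = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```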
Procedia PDF Downloads 118
2862 Capacity Estimation of Hybrid Automated Repeat Request Protocol for Low Earth Orbit Mega-Constellations
Authors: Arif Armagan Gozutok, Alper Kule, Burak Tos, Selman Demirel
Abstract:
A wireless communication chain requires effective ways to keep throughput efficiency high while it suffers from location-dependent, time-varying burst errors. Several techniques have been developed to ensure that the receiver recovers the transmitted information without errors. The most fundamental approaches are error checking and correction, along with re-transmission of non-acknowledged packets. In this paper, stop-and-wait (SAW) and chase combining (CC) hybrid automated repeat request (HARQ) protocols are compared and analyzed in terms of throughput and average delay for the low earth orbit (LEO) mega-constellation case. Several assumptions and technological implementations are considered, as well as the use of low-density parity check (LDPC) codes together with several constellation orbit configurations. Keywords: HARQ, LEO, satellite constellation, throughput
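As a back-of-the-envelope view of stop-and-wait HARQ throughput, the sketch below models each packet as needing 1/(1 - PER) transmissions on average, each costing one transmission time plus one round-trip time before the acknowledgement; the LEO-like round-trip time and link rate are assumptions, and the combining gain of chase combining is not modelled.

```python
def saw_harq_throughput(per, rtt_s, packet_bits, rate_bps):
    """Effective throughput of stop-and-wait HARQ over a long-delay link.

    per         -- packet error rate per transmission attempt
    rtt_s       -- round-trip time in seconds (LEO-like value assumed below)
    packet_bits -- packet size in bits
    rate_bps    -- raw link rate in bits per second
    """
    t_tx = packet_bits / rate_bps
    mean_attempts = 1.0 / (1.0 - per)
    return packet_bits / (mean_attempts * (t_tx + rtt_s))

for per in (0.0, 0.1, 0.3):
    thr = saw_harq_throughput(per, rtt_s=0.015, packet_bits=12000, rate_bps=1e6)
    print(f"PER={per:.1f}: {thr / 1e3:.0f} kbit/s")
```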
Procedia PDF Downloads 148
2861 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for high-time-resolution SAR data, GEO (geosynchronous orbit) SAR is attracting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude. For moving marine vessels, the utility and efficacy of GEO SAR are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented GEO SAR simulator is a geometry-based radar imaging simulator, which focuses on geometrical quality rather than high radiometric accuracy. The inputs of the simulator are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation process is accomplished in the following four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting information on the small primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth. The radiometric calculation of each primitive is carried out separately. Since this simulator only focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the normal 'stop and go' model is not available for GEO SAR, the range model should be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different ship attitudes, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated. Keywords: GEO SAR, radar, simulation, ship
Procedia PDF Downloads 181