Search results for: linear multistep methods
15772 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections
Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor
Abstract:
Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. A further aim was to determine the resistance ratios of H. pylori strains isolated from gastric biopsy material cultures against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods together: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR-ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. Sensitivity and specificity rates of the two molecular methods used in the study were calculated against these two gold standards. Results: A total of 266 patients between 16 and 83 years old, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were found to be culture positive, and 157 were positive by both H and RUT. 179 patients were found positive with both GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection. Sensitivity and specificity rates of the five methods studied were found as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori-ClaR-ACE Detection (r: 0.644, p: 0.000; r: 0.757, p: 0.000; r: 0.757, p: 0.000, respectively). Of all 144 isolated H. pylori strains, 24 (16.6%) were detected as resistant to clarithromycin and 18 (12.5%) as resistant to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and in 6 cases with Seeplex® H. pylori-ClaR-ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection were the most sensitive of the diagnostic methods investigated (C, H, and RUT).
Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, antimicrobial resistance
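For readers who want to retrace the diagnostic-accuracy figures, sensitivity and specificity against a chosen gold standard reduce to simple ratios over the 2x2 contingency counts. The sketch below reconstructs the counts from the abstract's totals (266 patients, 144 culture-positive, 179 molecular-test-positive), assuming the 179 test positives include all 144 culture positives, which is consistent with the reported 100% sensitivity:

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Counts reconstructed from the abstract: 144 culture positives all detected,
# 179 - 144 = 35 false positives, 122 - 35 = 87 true negatives.
sens, spec = sensitivity_specificity(tp=144, fn=0, tn=87, fp=35)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")   # 100.0%, 71.3%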
Procedia PDF Downloads 168
15771 Augmented Reality in Teaching Children with Autism
Authors: Azadeh Afrasyabi, Ali Khaleghi, Aliakbar Alijarahi
Abstract:
Training at an early age is very important because of the tremendous changes that occur through adolescence, including the formation of character, physical changes and other factors. One of the most sensitive groups in this field is children with a disability, special children who have trouble communicating with their environment. One of the emerging technologies in the field of education that can be exploited effectively is augmented reality, where the combination of the real world and virtual images in real time produces new concepts that can facilitate learning. The purpose of this paper is to propose an effective training method for special and disabled children based on augmented reality. In particular, the efficiency of augmented reality in teaching children with autism will be considered, and the various aspects of this condition and the different learning methods in this area will be examined.
Keywords: technology in education, augmented reality, special education, teaching methods
Procedia PDF Downloads 371
15770 Indoor and Outdoor Forest Farming for Year-Round Food and Medicine Production, Carbon Sequestration, Soil-Building, and Climate Change Mitigation
Authors: Jerome Osentowski
Abstract:
The objective at Central Rocky Mountain Permaculture Institute has been to put into practice a sustainable way of life while growing food and medicine and providing education. This has been done by applying methods of farming such as agroforestry, forest farming, and perennial polycultures. These methods have been found to be regenerative to the environment through carbon sequestration, soil-building, climate change mitigation, and the provision of food security. After 30 years of implementing carbon farming methods, the results are agro-diversity, self-sustaining systems, and a consistent provision of food and medicine. These results are exhibited through polyculture plantings in an outdoor forest garden spanning roughly an acre containing about 200 varieties of fruits, nuts, nitrogen-fixing trees, and medicinal herbs, and two indoor forest garden greenhouses (one Mediterranean and one Tropical) containing about 50 varieties of tropical fruits, beans, herbaceous plants and more. While the climate zone outside the greenhouse is 6, the tropical forest garden greenhouse retains an indoor climate zone of 11 with near-net-zero energy consumption through the use of a climate battery, allowing the greenhouse to serve as a year-round food producer. The effort to source food from the forest gardens is minimal compared to annual crop production. The findings at Central Rocky Mountain Permaculture Institute conclude that agroecological methods are not only beneficial but necessary in order to revive and regenerate the environment and to secure food supplies.
Keywords: agroecology, agroforestry, carbon farming, carbon sequestration, climate battery, food security, forest farming, forest garden, greenhouse, near-net-zero, perennial polycultures
Procedia PDF Downloads 442
15769 Modeling and Simulation of a CMOS-Based Analog Function Generator
Authors: Madina Hamiane
Abstract:
Modelling and simulation of an analog function generator is presented based on a polynomial expansion model. The proposed function generator model is based on a 10th-order polynomial approximation of any of the required functions. The polynomial approximations of these functions can then be implemented using basic CMOS circuit blocks. In this paper, a circuit model is proposed that can simultaneously generate many different mathematical functions. The circuit model is designed and simulated with HSPICE, and its performance is demonstrated through the simulation of a number of non-linear functions.
Keywords: modelling and simulation, analog function generator, polynomial approximation, CMOS transistors
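As a rough illustration of the underlying idea (not the authors' HSPICE circuit), a 10th-order polynomial fit to a target non-linear function can be obtained numerically; the resulting coefficients are what a CMOS multiplier/adder network would then have to realize. A minimal sketch, assuming the sine function over one period as the target:

import numpy as np

# Target non-linear function to approximate; sin(x) is an assumed example.
x = np.linspace(-np.pi, np.pi, 401)
y = np.sin(x)

# Least-squares fit of a 10th-order polynomial (11 coefficients).
coeffs = np.polyfit(x, y, deg=10)
approx = np.polyval(coeffs, x)

max_err = np.max(np.abs(approx - y))
print("polynomial coefficients (highest order first):", coeffs)
print(f"maximum approximation error over the interval: {max_err:.2e}")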
Procedia PDF Downloads 459
15768 Experimental Chevreul’s Salt Production Methods on Copper Recovery
Authors: Turan Çalban, Oral Laçin, Abdüsselam Kurtbaş
Abstract:
The experimental production methods for Chevreul’s salt, an intermediate-stage product in copper recovery, were investigated by reviewing the articles written on this topic. Chevreul’s salt, Cu2SO3.CuSO3.2H2O, a mixed-valence copper sulphite compound, has been obtained by using different methods and reagents. Chevreul’s salt has an intense brick-red color. It is a highly stable and expensive salt. The production of Chevreul’s salt plays a key role in hydrometallurgy. In recent years, research on this compound has been intensified. Silva et al. reported that this salt is thermally stable up to 200 °C. Çolak et al. precipitated Chevreul’s salt by using ammonia and sulphur dioxide. Çalban et al. obtained it at the optimum conditions by passing SO2 through leach solutions with NH3-(NH4)2SO4. Yeşiryurt and Çalban investigated the optimum precipitation conditions of Chevreul’s salt from synthetic CuSO4 solutions including Na2SO3. Çalban et al. achieved the precipitation of Chevreul’s salt at the optimum conditions by passing SO2 through synthetic CuSO4 solutions. Çalban et al. examined the precipitation conditions of Chevreul’s salt using (NH4)2SO3 from synthetic aqueous CuSO4 solutions. In light of these studies, it can be said that Chevreul’s salt can be produced practically both from leach solutions containing copper and from synthetic CuSO4 solutions.
Keywords: Chevreul’s salt, ammonia, copper sulphite, sodium sulphite, optimum conditions
Procedia PDF Downloads 268
15767 Factorial Design Analysis for Quality of Video on MANET
Authors: Hyoup-Sang Yoon
Abstract:
The quality of video transmitted over mobile ad hoc networks (MANETs) can be influenced by several factors, including the protocol layers and the parameter settings of each protocol. In this paper, we are concerned with understanding the functional relationship between these influential factors and objective video quality in MANETs. We illustrate how a systematic statistical design of experiments (DOE) strategy can be used to analyse MANET parameters and performance. Using a 2^k factorial design, we quantify the main and interactive effects of 7 factors on a response metric (i.e., mean opinion score (MOS) calculated from PSNR with the Evalvid package). We then develop a first-order linear regression model between the influential factors and the performance metric.
Keywords: evalvid, full factorial design, mobile ad hoc networks, ns-2
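To make the DOE workflow concrete, a 2^k full factorial design simply enumerates every combination of k two-level factors (coded -1/+1) and fits a first-order model to the measured response. The sketch below is illustrative only; the three factors and the synthetic response stand in for the paper's seven MANET parameters and the Evalvid-derived MOS:

import itertools
import numpy as np

k = 3  # three illustrative factors; the study used seven
design = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)

# Synthetic response standing in for MOS measurements (one run per design point).
rng = np.random.default_rng(0)
true_effects = np.array([0.8, -0.3, 0.1])
response = 3.5 + design @ true_effects + rng.normal(0, 0.05, size=len(design))

# First-order linear regression: response ~ intercept + main effects.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
print("intercept and estimated main effects:", np.round(coef, 3))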
Procedia PDF Downloads 413
15766 Speckle Noise Reduction Using Anisotropic Filter Based on Wavelets
Authors: Kritika Bansal, Akwinder Kaur, Shruti Gujral
Abstract:
In this paper, the problem of denoising is addressed by using a new hybrid technique which combines two different denoising methods. Wavelet thresholding and the anisotropic diffusion filter are the two filters in our hybrid technique. Wavelet thresholding removes the noise by removing the high-frequency components, with weaker edge preservation, whereas the anisotropic diffusion filter is based on a partial differential equation (PDE) to remove the speckle noise. This PDE approach is used to preserve the edges and provides better smoothing. Our new method therefore proposes a combination of these two filtering methods, which gives better results in terms of peak signal-to-noise ratio (PSNR), coefficient of correlation (COC) and equivalent number of looks (ENL).
Keywords: denoising, anisotropic diffusion filter, multiplicative noise, speckle, wavelets
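Of the quality metrics mentioned, PSNR and ENL are straightforward to script. A minimal sketch, assuming 8-bit images stored as NumPy arrays (not the authors' evaluation code):

import numpy as np

def psnr(reference, denoised, max_value=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a denoised image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

def enl(region):
    """Equivalent number of looks over a homogeneous image region: mean^2 / variance."""
    region = region.astype(np.float64)
    return region.mean() ** 2 / region.var()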
Procedia PDF Downloads 512
15765 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging
Authors: Chih-Chung Huang, Po-Hsun Peng
Abstract:
Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver diseases and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissues. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. However, the elastograms obtained in such ultrafast systems suffer from low resolution, which affects the robustness of transient elastography. In order to overcome these problems, a coherent plane wave compounding technique has been proposed for conventional ultrasound systems whose operating frequency is around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high frequency ultrasound coherent plane-wave compounding imaging, and the simulated results will provide the standards for hardware developments. In plane-wave compounding imaging, all elements of an array transducer are fired in one shot at different inclination angles, the echoes are received and beamformed conventionally to produce a series of low-resolution images, and these images are then compounded coherently. Simulations of plane-wave compounding images and focused-transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at a 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number is 1.55 in all the simulations. Simulated PSFs and cyst phantom images were obtained using single-angle, 17-angle, and 43-angle plane wave transmissions (the angles of successive plane waves are separated by 0.75 degrees), as well as focused transmission. The resolution and contrast of the image improved with the number of angles of the fired plane waves. The lateral resolutions for the different methods were measured by the -10 dB lateral beam width. Comparing the plane-wave compounding image and the focused-transmit image, both exhibited the same lateral resolution of 70 µm when 37 angles were compounded. The lateral resolution can reach 55 µm when the plane waves are compounded over 47 angles. All the results show the potential of using high-frequency plane-wave compound imaging for revealing the elastic properties of microstructural tissues, such as the eye, skin and vessel walls, in the future.
Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming
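At its core, coherent plane-wave compounding is a coherent (complex, pre-envelope-detection) sum of the low-resolution images beamformed for each transmit angle. A minimal sketch of that final step, assuming the per-angle beamformed IQ images are already available as a 3-D array (this is a generic illustration, not the Field II simulation pipeline used in the study):

import numpy as np

def coherent_compound(iq_images):
    """Coherently sum per-angle beamformed IQ images, then envelope-detect.

    iq_images: complex ndarray of shape (n_angles, n_depth, n_lateral).
    Returns the log-compressed B-mode image in dB (0 dB = maximum).
    """
    compounded = iq_images.sum(axis=0)          # coherent summation across angles
    envelope = np.abs(compounded)               # envelope detection
    bmode_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
    return bmode_db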
Procedia PDF Downloads 539
15764 Influence of Processing Regime and Contaminants on the Properties of Postconsumer Thermoplastics
Authors: Fares Alsewailem
Abstract:
Material recycling of thermoplastic waste offers a practical solution for municipal solid waste reduction. Post-consumer plastics such as polyethylene (PE), polyethylene terephthalate (PET), and polystyrene (PS) may be separated from each other by physical methods such as density difference and hence processed as single plastics; however, one should be cautious about the presence of contaminants in the waste stream in the form of paper, glue, etc., since these articles, even in trace amounts, may deteriorate the properties of the recycled plastics, especially the mechanical properties. Furthermore, the melt processing methods used to recycle thermoplastics, such as extrusion and compression molding, may induce degradation of some of the recycled plastics, such as PET and PS. In this research, it is shown that care should be taken when processing recycled plastics by melt processing means in two directions: first, contaminants should be minimized as far as possible, and secondly, melt processing steps should also be kept to a minimum.
Keywords: recycling, PET, PS, HDPE, mechanical
Procedia PDF Downloads 284
15763 Copper Oxide Doped Carbon Catalyst for Anodic Half-Cell of Vanadium Redox Flow Battery
Authors: Irshad U. Khan, Tanmay Paul, Murali Mohan Seepana
Abstract:
This paper presents a study on synthesizing and characterizing a copper oxide doped carbon (CuO-C) electrocatalyst for the negative half-cell reactions of a Vanadium Redox Flow Battery (VRFB). The CuO was synthesized using a microreactor. The electrocatalyst was characterized using X-ray Diffraction (XRD), Fourier Transform Infrared Spectroscopy (FTIR), and Field Emission Scanning Electron Microscopy (SEM). The electrochemical performance was assessed by linear sweep voltammetry (LSV). The findings suggest that the synthesized CuO exhibited favorable crystallinity, morphology, and surface area, which is reflected in improved cell performance.
Keywords: ECSA, electrocatalyst, energy storage, Tafel
Procedia PDF Downloads 90
15762 Assessing the Competence of Oral Surgery Trainees: A Systematic Review
Authors: Chana Pavneet
Abstract:
Background: In recent years in dentistry, a greater emphasis has been placed on competency-based education (CBE) programmes. Undergraduate and postgraduate curriculums have been reformed to reflect these changes, and adopting a CBE approach has been shown to be beneficial to trainees and places an emphasis on continuous lifelong learning. The literature is vast; however, very little work has been done specifically on the assessment of competence in dentistry, and even less so in oral surgery. The majority of the literature tends to consist of opinion pieces. Some small-scale studies have been undertaken in this area, researching assessment tools which can be used to assess competence in oral surgery. However, there is a lack of general consensus on the preferable assessment methods. The aim of this review is to identify the assessment methods available and their usefulness. Methods: Electronic databases (Medline, Embase, and the Cochrane Database of Systematic Reviews) were searched. PRISMA guidelines were followed to identify relevant papers. Abstracts of studies were reviewed, and if they met the inclusion criteria, they were included in the review. Papers were reviewed against the Critical Appraisal Skills Programme (CASP) checklist and the Medical Education Research Study Quality Instrument (MERSQI) to assess their quality and identify any bias in a systematic manner. The validity and reliability of each assessment method or tool were assessed. Results: A number of assessment methods were identified, including self-assessment, peer assessment, and direct observation of skills by someone senior. Senior assessment tended to be the preferred method, followed by self-assessment and, finally, peer assessment. The level of training was shown to affect the preferred assessment method, with one study finding peer assessment more useful for postgraduate trainees than for undergraduate trainees. Numerous tools for assessment were identified, including a checklist scale and a global rating scale. Both had their strengths and weaknesses, but the evidence was more favourable for global rating scales in terms of reliability, applicability to more clinical situations, and ease of use for examiners. Studies also looked into trainees’ opinions on assessment tools. Logbooks were not found to be significant in measuring the competence of trainees. Conclusion: There is limited literature exploring the methods and tools which assess the competence of oral surgery trainees. Current evidence shows that the most favourable assessment method and tool may differ depending on the stage of training. More research is required in this area to streamline assessment methods and tools.
Keywords: competence, oral surgery, assessment, trainees, education
Procedia PDF Downloads 134
15761 Stability Analysis of Three-Lobe Journal Bearing Lubricated with Micropolar Fluids
Authors: Boualem Chetti
Abstract:
The dynamic characteristics of a three-lobe journal bearing lubricated with micropolar fluids are determined by linear stability theory. Lubricating oil containing additives and contaminants is modeled as a micropolar fluid. The modified Reynolds equation is obtained using micropolar lubrication theory, and the finite difference technique has been used to solve it. The dynamic characteristics in terms of stiffness and damping coefficients, critical mass and whirl ratio are determined for various values of the material characteristic length and the coupling number. The computed results show that, compared with Newtonian fluids, a micropolar fluid exhibits better stability.
Keywords: three-lobe bearings, micropolar fluid, dynamic characteristics, stability analysis
Procedia PDF Downloads 361
15760 Nurse-Patient Assignment: Case of Pediatrics Department
Authors: Jihene Jlassi, Ahmed Frikha, Wazna Kortli
Abstract:
The objectives of nurse-patient assignment are the minimization of the overall hospital cost and the maximization of nurses’ preferences. This paper aims to assess nurses’ satisfaction related to the implementation of patient-acuity-tool-based assignments. To this end, we used an integer linear program that assigns patients to nurses while balancing nurse workloads. The proposed model is then applied to the Paediatrics Department at Kasserine Hospital, Tunisia, where patients have special acuities and require high-level nursing skills and care. The numerical results suggest that the proposed nurse-patient assignment models can achieve a balanced assignment.
Keywords: nurse-patient assignment, mathematical model, logistics, pediatrics department, balanced assignment
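As an illustration of the kind of formulation involved (not the authors' exact model), a nurse-patient assignment can be written as a 0-1 integer program that assigns every patient to exactly one nurse while balancing each nurse's total acuity workload. A minimal sketch using the PuLP modeling library, with made-up acuity scores and an assumed per-nurse cap:

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

patients = {"p1": 3, "p2": 5, "p3": 2, "p4": 4}   # hypothetical acuity scores
nurses = ["n1", "n2"]
max_workload = 8                                   # assumed per-nurse acuity cap

x = {(p, n): LpVariable(f"x_{p}_{n}", cat=LpBinary) for p in patients for n in nurses}

model = LpProblem("nurse_patient_assignment", LpMinimize)

# Objective: minimize the largest nurse workload (a simple balancing proxy).
max_load = LpVariable("max_load", lowBound=0)
model += max_load

for p in patients:
    model += lpSum(x[p, n] for n in nurses) == 1          # each patient gets one nurse
for n in nurses:
    load = lpSum(patients[p] * x[p, n] for p in patients)
    model += load <= max_workload                          # hard workload cap
    model += load <= max_load                              # link to the balancing objective

model.solve()
for (p, n), var in x.items():
    if var.value() == 1:
        print(f"{p} -> {n}")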
Procedia PDF Downloads 148
15759 Effect of Different Processing Methods on the Proximate, Functional, Sensory, and Nutritional Properties of Weaning Foods Formulated from Maize (Zea mays) and Soybean (Glycine max) Flour Blends
Authors: C. O. Agu, C. C. Okafor
Abstract:
Maize and soybean flours were produced using different methods of processing, which include fermentation (FWF), roasting (RWF) and malting (MWF). Products from the different methods were mixed in the ratio 60:40 maize/soybean, respectively. These composites, mixed with other ingredients such as sugar, vegetable oil, vanilla flavour and vitamin mix, were analyzed for proximate composition and physical/functional, sensory and nutritional properties. The results for the protein content ranged between 6.25% and 16.65%, with sample RWF having the highest value. Crude fibre values ranged from 3.72 to 10.0%, carbohydrate from 58.98% to 64.2%, and ash from 1.27 to 2.45%. Physical and functional properties such as bulk density, wettability and gelation capacity had values between 0.74 and 0.76 g/ml, 20.33 and 46.33 min, and 0.73 to 0.93 g/ml, respectively. For sensory quality, colour, flavour, taste, texture and general acceptability were determined. In terms of colour and flavour there was no significant difference (P < 0.05), while the values for taste ranged between 4.89 and 7.11, texture 5.50 to 8.38, and general acceptability 6.09 and 7.89. Nutritionally, there is no significant difference (P < 0.05) between sample RWF and the control in all parameters considered. Samples FWF and MWF showed significantly (P < 0.05) lower values in all parameters determined. In the light of the above findings, the roasting method is highly recommended in the production of weaning foods.
Keywords: fermentation, malting, ratio, roasting, wettability
Procedia PDF Downloads 304
15758 Efficient Chiller Plant Control Using Modern Reinforcement Learning
Authors: Jingwei Du
Abstract:
The need to optimize air conditioning systems for existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods boil down to two categories: empirical and model-based. To be effective, the former heavily relies on engineering expertise and the latter requires extensive historical data. Reinforcement Learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a “policy”. This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available and allows the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from “alignment”, a process that utilizes human feedback to fine-tune a pretrained model in case of unsafe content. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both the critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison, and it turns out that the proposed method is both safe and effective and needs little to no historical data to start up.
Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning
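The defining ingredient of PPO is its clipped surrogate objective, which keeps each policy update close to the policy that collected the data. The snippet below is a generic sketch of that loss (not the paper's chiller-plant implementation); the tensor names and the clip coefficient are assumptions:

import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate policy loss from the PPO paper (to be minimized).

    new_log_probs: log pi_theta(a|s) under the current policy.
    old_log_probs: log pi_theta_old(a|s) from the rollout (detached).
    advantages:    estimated advantages for the same state-action pairs.
    """
    ratio = torch.exp(new_log_probs - old_log_probs)           # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                # negate for gradient descent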
Procedia PDF Downloads 28
15757 Soil Degradation Mapping Using Geographic Information System, Remote Sensing and Laboratory Analysis in the Oum Er Rbia High Basin, Middle Atlas, Morocco
Authors: Aafaf El Jazouli, Ahmed Barakat, Rida Khellouk
Abstract:
Mapping of soil degradation is derived from field observations, laboratory measurements, and remote sensing data, integrated with quantitative methods to map the spatial characteristics of soil properties at different spatial and temporal scales and to provide up-to-date information on the field. Since soil salinity, texture and organic matter play a vital role in assessing topsoil characteristics and soil quality, remote sensing can be considered an effective method for studying these properties. The main objective of this research is to assess soil degradation by combining remote sensing data and laboratory analysis. In order to achieve this goal, soil samples were taken at 50 locations in the upper basin of the Oum Er Rbia in the Middle Atlas in Morocco. These samples were dried, sieved to 2 mm and analyzed in the laboratory. Landsat 8 OLI imagery was analyzed using physical or empirical methods to derive soil properties. In addition, remote sensing can serve as a supporting data source. Deterministic interpolation methods (spline and inverse distance weighting) and probabilistic interpolation methods (ordinary kriging and universal kriging) were used to produce maps of each grain size class and soil property using GIS software. As a result, a correlation was found between soil texture and soil organic matter content. This approach, developed in ongoing research, will improve the prospects for the use of remote sensing data for mapping soil degradation in arid and semi-arid environments.
Keywords: soil degradation, GIS, interpolation methods (spline, IDW, kriging), Landsat 8 OLI, Oum Er Rbia high basin
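Of the interpolation methods listed, inverse distance weighting (IDW) is the simplest to script directly. The sketch below is a generic IDW interpolator for scattered soil-property samples (the coordinates and values are hypothetical), not the GIS-software workflow used in the study:

import numpy as np

def idw_interpolate(sample_xy, sample_values, query_xy, power=2.0):
    """Inverse distance weighted estimate at each query point.

    sample_xy:     (n, 2) array of sampled locations.
    sample_values: (n,) array of measured values (e.g., organic matter %).
    query_xy:      (m, 2) array of locations to estimate.
    """
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)            # query point coincides with a sample
    w = 1.0 / d ** power
    return (w @ sample_values) / w.sum(axis=1)

# Hypothetical example: three sampled points, one query location.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([2.1, 3.4, 2.8])
print(idw_interpolate(xy, vals, np.array([[0.5, 0.5]])))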
Procedia PDF Downloads 165
15756 Mathematical Model for Defection between Two Political Parties
Authors: Abdullahi Mohammed Auwal
Abstract:
Formation of political parties and changing or decamping from one political party to another have now become a common trend in Nigeria. Many party members who could not secure positions or win elections in their parties, or who are not very satisfied with the trends occurring in their party's internal democratic principles and mechanisms, change their respective parties. This paper developed, presented and analyzed a non-linear mathematical model for defections between two political parties using an epidemiological approach. The whole population was assumed to be constant and homogeneously mixed. Equilibria have been obtained analytically and their local and global stability discussed. Conditions for the co-existence of both political parties have been determined in the study of defections between the Peoples Democratic Party (PDP) and the All Progressives Congress (APC) in Nigeria, using numerical simulations to support the analytical results.
Keywords: model, political parties, defection, stability, equilibrium, epidemiology
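As a hedged illustration of the epidemiological analogy (the abstract does not give the authors' actual equations), a minimal two-party defection model treats the memberships like interacting compartments, with defection flows proportional to contact between members of the two parties. The sketch below, with made-up rate constants, integrates such a system numerically:

import numpy as np
from scipy.integrate import odeint

def defection_model(y, t, beta12, beta21):
    """Hypothetical two-party model: members defect at rates proportional to
    contact between the parties (an SIS-like analogy, not the authors' equations)."""
    p1, p2 = y
    n = p1 + p2
    flow_1_to_2 = beta12 * p1 * p2 / n
    flow_2_to_1 = beta21 * p1 * p2 / n
    return [flow_2_to_1 - flow_1_to_2, flow_1_to_2 - flow_2_to_1]

t = np.linspace(0, 50, 500)
y0 = [0.6, 0.4]                       # assumed initial membership fractions
sol = odeint(defection_model, y0, t, args=(0.30, 0.25))
print("final membership fractions:", sol[-1])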
Procedia PDF Downloads 637
15755 Neural Networks with Different Initialization Methods for Depression Detection
Authors: Tianle Yang
Abstract:
As a common mental disorder, depression is a leading cause of various diseases worldwide. Early detection and treatment of depression can dramatically promote remission and prevent relapse. However, conventional ways of diagnosing depression require considerable human effort and impose an economic burden, while still being prone to misdiagnosis. On the other hand, recent studies report that physical characteristics are major contributors to the diagnosis of depression, which inspires us to mine the internal relationships with neural networks instead of relying on clinical experience. In this paper, neural networks are constructed to predict depression from physical characteristics. Two initialization methods are examined: Xavier and Kaiming initialization. Experimental results show that a three-layer neural network with Kaiming initialization achieves 83% accuracy.
Keywords: depression, neural network, Xavier initialization, Kaiming initialization
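For reference, both initialization schemes compared here are available out of the box in common deep learning frameworks. A minimal PyTorch sketch of a small fully connected network whose weights can be initialized either way (the layer sizes are assumptions, not the paper's architecture):

import torch.nn as nn

def build_mlp(init="kaiming", in_dim=10, hidden=32, out_dim=2):
    model = nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )
    for layer in model:
        if isinstance(layer, nn.Linear):
            if init == "kaiming":
                nn.init.kaiming_uniform_(layer.weight, nonlinearity="relu")
            else:
                nn.init.xavier_uniform_(layer.weight)
            nn.init.zeros_(layer.bias)
    return model

model = build_mlp(init="kaiming")   # or init="xavier" for comparison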
Procedia PDF Downloads 128
15754 A Method to Saturation Modeling of Synchronous Machines in d-q Axes
Authors: Mohamed Arbi Khlifi, Badr M. Alshammari
Abstract:
This paper discusses general methods for modeling saturation in the steady-state, two-axis (d and q) frame models of synchronous machines. In particular, the important role of the magnetic coupling between the d-q axes (the cross-magnetizing phenomenon) is demonstrated. For that purpose, distinct methods for saturation modeling of a damper synchronous machine with cross-saturation are identified, and the models are synthesized in detail in d-q axes. A number of models are given in their final developed form. The procedure and the novel models are verified by a critical application to prove the validity of the method, and the equivalence between all developed models is reported. Advantages of some of the models over the existing ones and their applicability are discussed.
Keywords: cross-magnetizing, model synthesis, synchronous machine, saturated modeling, state-space vectors
Procedia PDF Downloads 454
15753 Gene Prediction in DNA Sequences Using an Ensemble Algorithm Based on Goertzel Algorithm and Anti-Notch Filter
Authors: Hamidreza Saberkari, Mousa Shamsi, Hossein Ahmadi, Saeed Vaali, MohammadHossein Sedaaghi
Abstract:
In recent years, using signal processing tools for accurate identification of protein coding regions has become a challenge in bioinformatics. Most genomic signal processing methods are based on the period-3 characteristic of the nucleotides in DNA strands; consequently, spectral analysis is applied to the numerical sequences of DNA to find the location of periodic components. In this paper, a novel ensemble algorithm for gene selection in DNA sequences is presented which is based on the combination of the Goertzel algorithm and an anti-notch filter (ANF). The proposed algorithm has many advantages compared to other conventional methods. Firstly, it identifies the protein coding regions more accurately, owing to the Goertzel algorithm being tuned at the desired frequency. Secondly, a faster detection time is achieved. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Implementation results show the excellent performance of the proposed algorithm in identifying protein coding regions, specifically in the identification of small-scale gene areas.
Keywords: protein coding regions, period-3, anti-notch filter, Goertzel algorithm
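The Goertzel algorithm itself is compact enough to show in full: it evaluates the DFT power of a numerical DNA sequence at a single frequency, here the period-3 frequency 2*pi/3 used for exon detection. The binary indicator mapping of nucleotides below is one common convention and an assumption, not necessarily the encoding used by the authors:

import numpy as np

def goertzel_power(x, omega):
    """Squared DFT magnitude of sequence x at angular frequency omega (radians/sample)."""
    coeff = 2.0 * np.cos(omega)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def period3_profile(dna, window=351, base="G"):
    """Sliding-window period-3 power of a binary indicator sequence for one base."""
    indicator = np.array([1.0 if b == base else 0.0 for b in dna.upper()])
    omega = 2.0 * np.pi / 3.0
    return np.array([
        goertzel_power(indicator[i:i + window], omega)
        for i in range(0, len(indicator) - window + 1)
    ])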
Procedia PDF Downloads 387
15752 Generator Subgraphs of the Wheel
Authors: Neil M. Mame
Abstract:
We consider only finite graphs without loops or multiple edges. Let G be a graph with E(G) = {e1, e2, …, em}. The edge space of G, denoted by ε(G), is a vector space over the field Z2. The elements of ε(G) are all the subsets of E(G). Vector addition is defined as X+Y = X Δ Y, the symmetric difference of sets X and Y, for X, Y ∈ ε(G). Scalar multiplication is defined as 1.X = X and 0.X = Ø for X ∈ ε(G). The set S ⊆ ε(G) is called a generating set if every element of ε(G) is a linear combination of the elements of S. For a non-empty set X ∈ ε(G), the smallest subgraph with edge set X is called the edge-induced subgraph of G, denoted by G[X]. The set EH(G) = { A ∈ ε(G) : G[A] ≅ H } denotes the uniform set of H with respect to G, and εH(G) denotes the subspace of ε(G) generated by EH(G). If εH(G) is a generating set, then we call H a generator subgraph of G. This paper gives the characterization of the generator subgraphs of the wheel that contain cycles and gives the necessary conditions for the acyclic generator subgraphs of the wheel.
Keywords: edge space, edge-induced subgraph, generator subgraph, wheel
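The vector-space operations over Z2 described here can be made concrete with ordinary set operations: addition of two edge sets is their symmetric difference, and a linear combination is just the symmetric difference of the sets whose coefficient is 1. A small illustrative sketch (the edge labels are arbitrary, not tied to the wheel graph):

from functools import reduce

def add(x, y):
    """Vector addition in the edge space over Z2: symmetric difference of edge sets."""
    return x ^ y

def linear_combination(coefficients, edge_sets):
    """Sum c_i * X_i over Z2; only sets with coefficient 1 contribute."""
    chosen = [x for c, x in zip(coefficients, edge_sets) if c % 2 == 1]
    return reduce(add, chosen, frozenset())

e = [frozenset({"e1", "e2"}), frozenset({"e2", "e3"}), frozenset({"e1", "e3"})]
print(linear_combination([1, 1, 0], e))   # frozenset({'e1', 'e3'})
print(linear_combination([1, 1, 1], e))   # frozenset() -- the zero vector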
Procedia PDF Downloads 464
15751 Adaptive Motion Planning for 6-DOF Robots Based on Trigonometric Functions
Authors: Jincan Li, Mingyu Gao, Zhiwei He, Yuxiang Yang, Zhongfei Yu, Yuanyuan Liu
Abstract:
Building an appropriate motion model is crucial for the trajectory planning of robots and directly determines the operational quality. An adaptive acceleration and deceleration motion planning method based on trigonometric functions for the end-effector of 6-DOF robots in the Cartesian coordinate system is proposed in this paper. This method not only achieves smooth translational and rotational motion by constructing a continuous jerk model, but also automatically adjusts the parameters of the trigonometric functions according to the variable inputs and the kinematic constraints. The results of computer simulation show that this method is correct and effective in achieving adaptive motion planning for linear trajectories.
Keywords: kinematic constraints, motion planning, trigonometric function, 6-DOF robots
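To illustrate why trigonometric profiles are attractive here, a cosine-shaped velocity blend gives an acceleration that starts and ends at zero within the ramp, which is what makes the motion smooth. This is a simplified sine-based ramp for intuition only, not the authors' full continuous-jerk formulation:

import numpy as np

def cosine_velocity_ramp(v_max, t_ramp, t):
    """Velocity that blends from 0 to v_max over t_ramp using a cosine profile.

    v(t) = v_max * (1 - cos(pi * t / t_ramp)) / 2, so the acceleration
    a(t) = v_max * pi / (2 * t_ramp) * sin(pi * t / t_ramp) is zero at both ends.
    """
    t = np.clip(t, 0.0, t_ramp)
    return v_max * (1.0 - np.cos(np.pi * t / t_ramp)) / 2.0

t = np.linspace(0, 1.0, 5)
print(cosine_velocity_ramp(v_max=0.5, t_ramp=1.0, t=t))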
Procedia PDF Downloads 271
15750 Transverse Vibration of Non-Homogeneous Rectangular Plates of Variable Thickness Using GDQ
Abstract:
The effect of non-homogeneity on the free transverse vibration of thin rectangular plates of bilinearly varying thickness has been analyzed using the generalized differential quadrature (GDQ) method. The non-homogeneity of the plate material is assumed to arise due to linear variations in the Young’s modulus and density of the plate material with the in-plane coordinates x and y. Numerical results have been computed for fully clamped and fully simply supported boundary conditions. The solution procedure by means of the GDQ method has been implemented in a MATLAB code. The effect of various plate parameters has been investigated for the first three modes of vibration. A comparison of results with those available in the literature has been presented.
Keywords: rectangular, non-homogeneous, bilinear thickness, generalized differential quadrature (GDQ)
Procedia PDF Downloads 384
15749 Sampling Effects on Secondary Voltage Control of Microgrids Based on Network of Multiagent
Authors: M. J. Park, S. H. Lee, C. H. Lee, O. M. Kwon
Abstract:
This paper studies a secondary voltage control framework for microgrids based on consensus over a multiagent communication network. The proposed control is designed for a communication network with one-way links. The communication network is modeled by a directed graph. Here, the concept of sampling is considered as the communication constraint among the distributed generators in the microgrid. To analyze the sampling effects on the secondary voltage control of the microgrids, a sufficient condition for this problem is established in the form of a linear matrix inequality (LMI) by using Lyapunov theory and some mathematical techniques. Finally, some simulation results are given to illustrate the necessity of considering the sampling effects on the secondary voltage control of the microgrids.
Keywords: microgrids, secondary control, multiagent, sampling, LMI
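Lyapunov-based LMI conditions of this kind are typically checked numerically with a semidefinite-programming solver. As a generic illustration (a plain continuous-time Lyapunov LMI for an assumed stable test matrix, not the paper's sampled-data condition), using CVXPY:

import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # an illustrative stable system matrix

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),                      # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]       # Lyapunov inequality
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("LMI feasible:", problem.status)
print("P =", P.value)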
Procedia PDF Downloads 333
15748 Electrochemical Response Transductions of Graphenated-Polyaniline Nanosensor for Environmental Anthracene
Authors: O. Tovide, N. Jahed, N. Mohammed, C. E. Sunday, H. R. Makelane, R. F. Ajayi, K. M. Molapo, A. Tsegaye, M. Masikini, S. Mailu, A. Baleg, T. Waryo, P. G. Baker, E. I. Iwuoha
Abstract:
A graphenated–polyaniline (GR-PANI) nanocomposite sensor was constructed and used for the determination of anthracene. The direct electro-oxidation behavior of anthracene on the GR-PANI-modified glassy carbon electrode (GCE) was used as the sensing principle. The results indicate that the response profile of the oxidation of anthracene on the GR-PANI-modified GCE provides for the construction of sensor systems based on amperometric and potentiometric signal transductions. A dynamic linear range of 0.12-100 µM anthracene and a detection limit of 0.044 µM anthracene were established for the sensor system.
Keywords: electrochemical sensors, environmental pollutants, graphenated-polymers, polyaromatic hydrocarbon
Procedia PDF Downloads 356
15747 Climate Changes Impact on Artificial Wetlands
Authors: Carla Idely Palencia-Aguilar
Abstract:
Artificial wetlands play an important role in the Guasca Municipality in Colombia, not only because they are used for the agroindustry, but also because more than 45 species were found there, some of which are endemic or migratory birds. Remote sensing was used to determine the changes in the area occupied by water in the artificial wetlands by means of Aster and Modis images for different time periods. Evapotranspiration was also determined by three methods: the Surface Energy Balance System-Su (SEBS) algorithm, the Surface Energy Balance-Bastiaanssen (SEBAL) algorithm, and FAO potential evapotranspiration. Empirical equations were also developed to determine the relationship between the Normalized Difference Vegetation Index (NDVI) and net radiation, ambient temperature and rain, with an obtained R2 of 0.83. Groundwater level fluctuations on a daily basis were studied as well. Data from a piezometer placed next to the wetland were fitted against rain changes (with two weather stations located in the proximity of the wetlands) by means of multiple regression and time series analysis; the R2 between calculated and measured values was higher than 0.98. Information from nearby weather stations provided input for ordinary kriging as well as for the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, Gaussian, linear) to describe spatial variation were tested. Ordinary cokriging between the height and rain variables was also tested, to determine whether the accuracy of the interpolation would increase. The results showed no significant differences: the mean result of the spherical function for the rain samples after ordinary kriging was 58.06 with a standard deviation of 18.06, while cokriging using a spherical function for the rain variable, the power function for the height variable, and the spherical function for the cross variable (rain and height) gave a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the lack of awareness among neighbours and governmental deficiencies. Water quality was determined over the years; different parameters were studied to determine the chemical characteristics of the water. In addition, 600 pesticides were studied by gas and liquid chromatography. Results showed that coliforms, nitrogen, phosphorus and prochloraz were the most significant contaminants.
Keywords: DEM, evapotranspiration, geostatistics, NDVI
Procedia PDF Downloads 120
15746 Sentiment Classification Using Enhanced Contextual Valence Shifters
Authors: Vo Ngoc Phu, Phan Thi Tuoi
Abstract:
We have explored different methods of improving the accuracy of sentiment classification. The sentiment orientation of a document can be positive (+), negative (-), or neutral (0). We combine five dictionaries from [2, 3, 4, 5, 6] into a new one with 21,137 entries. The new dictionary has many verbs, adverbs, phrases and idioms that are not in the five source dictionaries. The paper shows that our proposed method, based on the combination of the Term-Counting method and the Enhanced Contextual Valence Shifters method, has improved the accuracy of sentiment classification. The combined method has an accuracy of 68.984% on the testing dataset and 69.224% on the training dataset. All of these methods are implemented to classify the reviews based on our new dictionary and the Internet Movie data set.
Keywords: sentiment classification, sentiment orientation, valence shifters, contextual valence shifters, term counting
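A hedged sketch of the general idea behind term counting with contextual valence shifters (the dictionary entries and shifter rules below are toy examples, not the combined 21,137-entry dictionary used in the paper): each sentiment-bearing term contributes its polarity, while shifters such as negators flip and intensifiers scale the contribution of the following term.

lexicon = {"good": 1, "excellent": 2, "bad": -1, "terrible": -2}   # toy entries
negators = {"not", "never", "no"}
intensifiers = {"very": 1.5, "slightly": 0.5}

def sentiment_orientation(text):
    """Sum term polarities, letting negators flip and intensifiers scale the next term."""
    score, flip, weight = 0.0, 1, 1.0
    for tok in text.lower().split():
        if tok in negators:
            flip = -1
        elif tok in intensifiers:
            weight = intensifiers[tok]
        elif tok in lexicon:
            score += flip * weight * lexicon[tok]
            flip, weight = 1, 1.0          # shifters apply only to the next term
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment_orientation("the plot was not good but the acting was very excellent"))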
Procedia PDF Downloads 504
15745 Quintic Spline Solution of Fourth-Order Parabolic Equations Arising in Beam Theory
Authors: Reza Mohammadi, Mahdieh Sahebi
Abstract:
We develop a method based on polynomial quintic splines for the numerical solution of a fourth-order non-homogeneous parabolic partial differential equation with a variable coefficient. By using polynomial quintic splines at off-step points in space and finite differences in the time direction, we obtain two three-level implicit methods. A stability analysis of the presented method has been carried out. We solve four test problems numerically to validate the derived method. Numerical comparison with other methods shows the superiority of the presented scheme.
Keywords: fourth-order parabolic equation, variable coefficient, polynomial quintic spline, off-step points
Procedia PDF Downloads 352
15744 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment
Authors: Leon Pan
Abstract:
The first programming course within post-secondary education has long been recognized as a challenging endeavor for educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort in their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach, the Extreme-based teaching model, aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the Extreme-based model and another utilizing traditional teaching methods. Notably, the Extreme-based teaching class required students to work collaboratively on projects while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the Extreme-based model within the post-secondary online classroom context and presents the compelling results that emphasize its effectiveness in advancing the teaching and learning experiences. The Extreme-based model led to a significant increase of 13.46 points in the weighted total average and a commendable 10% reduction in the failure rate.
Keywords: extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning
Procedia PDF Downloads 59
15743 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over the past few years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
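For context, the traditional Weight of Evidence for a scorecard bin is the logarithm of the ratio between the bin's share of "good" cases and its share of "bad" cases; the Hybrid Model replaces this direct count-based estimate with one matched to an ML score distribution, but the baseline calculation looks like the generic sketch below (synthetic data, not Dun and Bradstreet's implementation):

import numpy as np
import pandas as pd

def weight_of_evidence(values, is_bad, n_bins=5):
    """Count-based WoE per quantile bin: ln(share_of_goods / share_of_bads)."""
    df = pd.DataFrame({"bin": pd.qcut(values, q=n_bins, duplicates="drop"),
                       "bad": is_bad})
    grouped = df.groupby("bin", observed=True)["bad"].agg(["count", "sum"])
    goods = grouped["count"] - grouped["sum"]
    bads = grouped["sum"]
    return np.log((goods / goods.sum()) / (bads / bads.sum()))

rng = np.random.default_rng(1)
feature = rng.normal(size=1000)
target = (rng.random(1000) < 1 / (1 + np.exp(-feature))).astype(int)  # synthetic bad flag
print(weight_of_evidence(feature, target))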
Procedia PDF Downloads 134