Search results for: operator methods
14694 Grid Computing for Multi-Objective Optimization Problems
Authors: Aouaouche Elmaouhab, Hassina Beggar
Abstract:
Solving multi-objective discrete optimization applications has always been limited by the resources of one machine: by computing power or by memory, most often both. To speed up the calculations, grid computing represents a primary solution for the treatment of these applications through the parallelization of their resolution methods. In this work, we are interested in the study of some methods for solving the multiple objective integer linear programming problem based on Branch-and-Bound and in the study of grid computing technology. This study allowed us to propose an implementation of the method of Abbas et al. on the grid, reducing the execution time. To support our contribution, the main results are presented. Keywords: multi-objective optimization, integer linear programming, grid computing, parallel computing
Procedia PDF Downloads 486
14693 Research on the Online Learning Activities Design and Students’ Experience Based on APT Model
Authors: Wang Yanli, Cheng Yun, Yang Jiarui
Abstract:
Due to the separation of teachers and students, online teaching during the COVID-19 epidemic faced many problems, such as low enthusiasm of students, distraction, a low learning atmosphere, and insufficient interaction between teachers and students. This essay designed elaborate online learning activities for the course 'Research Methods of Educational Science' based on the APT model, covering three aspects: multiple assessment methods, a variety of teaching methods, and the online learning environment and technology. Students' online learning experience was examined in terms of their perception of the online course, their perception of the online learning environment, and their satisfaction after the course's implementation. The research results showed that students have a positive overall evaluation of online courses, a high degree of engagement in learning, positive acceptance of online learning, and high satisfaction with it, although they hold a relatively neutral attitude toward online learning itself. Some dimensions of the online learning experience were found to have a positive influence on students' satisfaction with online learning. We suggest designing online courses carefully, selecting proper learning platforms, and conducting blended learning to improve students' learning experience. This study has both theoretical and practical significance for the design, implementation, effect feedback, and sustainable development of online teaching in the post-epidemic era. Keywords: APT model, online learning, online learning activities, learning experience
Procedia PDF Downloads 139
14692 Voices and Pictures from an Online Course and a Face to Face Course
Authors: Eti Gilad, Shosh Millet
Abstract:
In light of technological development and its introduction into the field of education, an online course was designed in parallel to the 'conventional' course for teaching 'Qualitative Research Methods'. The study aimed to characterize learning-teaching processes in a 'Qualitative Research Methods' course studied in two different frameworks. Moreover, its objective was to explore the difference between the culture of a physical learning environment and that of online learning. The research monitored four learner groups, a total of 72 students, for two years, two groups from the two course frameworks each year. The courses were obligatory for M.Ed. students at an academic college of education and were given by one female lecturer. The research was conducted using the qualitative method as a case study in order to attain insights about occurrences in the actual contexts and sites in which they transpire. The research tools were an open-ended questionnaire and reflections in the form of vignettes (meaningful short pictures) given to all students, as well as an interview with the lecturer. The tools facilitated not only triangulation but also the collection of data consisting of voices and pictures of teaching and learning. The most prominent findings are differences between the two courses in the changed features of the learning environment culture for the acquisition of contents and qualitative research tools, manifested in the teaching methods, illustration aids, the lecturer's profile, and the students' profile. Keywords: face to face course, online course, qualitative research, vignettes
Procedia PDF Downloads 418
14691 A Review of Lortie’s Schoolteacher
Authors: Tsai-Hsiu Lin
Abstract:
Dan C. Lortie’s Schoolteacher: A Sociological Study is one of the best works on the sociology of teaching since W. Waller’s classic study. It is a book worthy of review. Following the tradition of the symbolic interactionists, Lortie studied the occupation of teaching. Using several methods to gather effective data, Lortie portrayed the ethos of the teaching profession. Therefore, the work is an important book on the teaching profession and teacher culture. Though outstanding, Lortie’s work is also flawed in that his perspectives and methodology were adopted largely from symbolic interactionism. First, Lortie analyzed many points regarding teacher culture; for example, he was interested in exploring “sentiment,” “cathexis,” and “ethos.” Thus, he was more a psychologist than a sociologist. Second, symbolic interactionism led him to discern the teacher culture from a micro view, thereby missing the structural aspects. For example, he did not fully discuss the issue of gender, and he ignored the issue of race. Finally, following the qualitative sociological tradition, Lortie employed many qualitative methods to gather data but only focused on obtaining and presenting interview data. Moreover, he used measurement methods that were too simplistic to analyze quantitative data fully. Keywords: education reform, teacher culture, teaching profession, Lortie’s Schoolteacher
Procedia PDF Downloads 230
14690 Comparison of Sensitivity and Specificity of Pap Smear and Polymerase Chain Reaction Methods for Detection of Human Papillomavirus: A Review of Literature
Authors: M. Malekian, M. E. Heydari, M. Irani Estyar
Abstract:
Human papillomavirus (HPV) is one of the most common sexually transmitted infections and may lead to cervical cancer, of which it is the main cause. With early diagnosis and treatment in health care services, cervical cancer and its complications are considered to be preventable. This study aimed to compare the efficiency, sensitivity, and specificity of the Pap smear and polymerase chain reaction (PCR) in detecting HPV. A literature search was performed in the Google Scholar, PubMed, and SID databases using the keywords 'human papillomavirus', 'pap smear', and 'polymerase chain reaction' to identify studies comparing the Pap smear and PCR methods for detection. No restrictions were applied. Ten studies were included in this review. All samples that were positive by Pap smear were also positive by PCR. However, there were positive samples detected by PCR that were negative by Pap smear, and in all studies many positive samples were missed by the Pap smear technique. Although the Pap smear had high specificity, PCR-based HPV detection was the more sensitive method and had the highest sensitivity. In order to improve the quality of detection and achieve the best possible results, PCR diagnostic methods are needed in addition to the Pap smear, and the Pap smear method should be combined with PCR techniques, given the high error rate of the Pap smear in detection. Keywords: human papillomavirus, cervical cancer, pap smear, polymerase chain reaction
Procedia PDF Downloads 131
14689 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. Modeling option pricing with Black-Scholes models with jumps guarantees that market movement is taken into account. However, only numerical methods can solve this model. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are nonsmooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton’s and Kou’s jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is. Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model
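To make the complexity reduction concrete, here is a minimal Python sketch of the standard FFT trick the abstract refers to: a Toeplitz matrix (the kind of dense matrix produced by discretizing the jump integral) is embedded in a circulant matrix so that a matrix-vector product costs O(M log M) instead of O(M²). It is a generic illustration, not the authors' implementation, and the matrix entries are made up.

import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply a Toeplitz matrix by a vector in O(M log M) via circulant embedding."""
    m = len(x)
    # First column of the 2M x 2M circulant matrix that embeds the Toeplitz matrix.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(m)])))
    return y[:m].real

# Check against dense multiplication on a small, made-up example.
col = np.array([4.0, 1.0, 0.5, 0.25])   # first column of the Toeplitz matrix
row = np.array([4.0, 2.0, 1.0, 0.5])    # first row (row[0] must equal col[0])
x = np.array([1.0, -1.0, 2.0, 0.5])
assert np.allclose(toeplitz_matvec_fft(col, row, x), toeplitz(col, row) @ x)
print(toeplitz_matvec_fft(col, row, x))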
Procedia PDF Downloads 153
14688 A Review on Bone Grafting, Artificial Bone Substitutes and Bone Tissue Engineering
Authors: Kasun Gayashan Samarawickrama
Abstract:
Bone diseases, defects, and fractures are commonly seen in modern life. Although bone is a dynamic, regenerating living tissue that undergoes a natural healing process, it cannot recover from major bone injuries, diseases, and defects. In order to overcome these, the bone grafting technique was introduced. The gold standard has been the best method of bone grafting for the past decades. Due to the limitations of the gold standard, alternative methods have been implemented. Apart from these, artificial bone substitutes and bone tissue engineering have become the emerging, technology-driven methods for bone grafting. Many bone diseases and defects will be healed permanently with these promising techniques in the future. Keywords: bone grafting, gold standard, bone substitutes, bone tissue engineering
Procedia PDF Downloads 299
14687 Fuzzy Expert Approach for Risk Mitigation on Functional Urban Areas Affected by Anthropogenic Ground Movements
Authors: Agnieszka A. Malinowska, R. Hejmanowski
Abstract:
A number of European cities are strongly affected by ground movements caused by anthropogenic activities or post-anthropogenic metamorphosis. These are mainly water pumping, current mining operations, the collapse of post-mining underground voids, or mining-induced earthquakes. These activities lead to large- and small-scale ground displacements and ground ruptures. The ground movements occurring in urban areas can considerably affect the stability and safety of structures and infrastructures. The complexity of the ground deformation phenomenon in relation to the vulnerability of structures and infrastructures leads to considerable constraints in assessing the threat to those objects. However, increasing access to free software and satellite data could pave the way for developing new methods and strategies for environmental risk mitigation and management. Open source geographical information systems (OS GIS) may support data integration, management, and risk analysis. Recently developed methods based on fuzzy logic and expert methods for building and infrastructure damage risk assessment could be integrated into OS GIS. These methods were verified based on back analysis, proving their accuracy. Moreover, they can be supported by ground displacement observations. Based on freely available data from the European Space Agency and free software, ground deformation can be estimated. The main innovation presented in the paper is the application of open source software (OS GIS) for integrating the developed models and assessing the threat to urban areas. These approaches are reinforced by an analysis of ground movement based on free satellite data, which supports the verification of ground movement prediction models. Moreover, satellite data enable the mapping of ground deformation in urbanized areas. The developed models and methods have been implemented in one of the urban areas threatened by underground mining activity. Vulnerability maps supported by satellite ground movement observations would mitigate the hazards of land displacements in urban areas close to mines. Keywords: fuzzy logic, open source geographic information science (OS GIS), risk assessment on urbanized areas, satellite interferometry (InSAR)
Procedia PDF Downloads 160
14686 An Authentic Algorithm for Ciphering and Deciphering Called Latin Djokovic
Authors: Diogen Babuc
Abstract:
The motivation for this work is the question of how many devote themselves to discovering something in the world of science, where much has been discerned and revealed, but at the same time, much remains unknown. Methods: The insightful elements of this algorithm are the ciphering and deciphering algorithms of Playfair, Caesar, and Vigenère. Only a few of their main properties are taken and modified, with the aim of forming the specific functionality of the algorithm called Latin Djokovic. Specifically, a string is entered as input data. A key k is given, with a random value between the values a and b = a+3. The obtained value is stored in a variable so that it remains constant during the run of the algorithm. According to the given key, the string is divided into several groups of substrings, each of length k characters. The next step involves encoding each substring from the list of existing substrings. Encoding is performed on the basis of the Caesar algorithm, i.e., shifting by k characters. However, k is incremented by 1 when moving to the next substring in the list. When the value of k becomes greater than b+1, it returns to its initial value. The algorithm is executed, following the same procedure, until the last substring in the list is traversed. Results: Using this polyalphabetic method, ciphering and deciphering of strings are achieved. The algorithm also works for a 100-character string. The x character is not used when the number of characters in a substring is incompatible with the expected length. The algorithm is simple to implement, but it is questionable whether it works better than other methods in terms of execution time and storage space. Keywords: ciphering, deciphering, authentic, algorithm, polyalphabetic cipher, random key, methods comparison
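The description above is concrete enough to sketch in code. The Python sketch below is one possible reading of the scheme, not the published implementation: it assumes the chunk length stays fixed at the initial key value, that only lowercase letters are shifted (other characters pass through), that the padding character x is omitted, and that the shift wraps back to the initial key once it exceeds b + 1.

import random
import string

def latin_djokovic_encrypt(text, a=3):
    """Sketch of the described polyalphabetic scheme (one possible reading)."""
    b = a + 3
    k0 = random.randint(a, b)          # random key between a and b = a + 3
    chunks = [text[i:i + k0] for i in range(0, len(text), k0)]
    k, out = k0, []
    for chunk in chunks:
        shifted = []
        for ch in chunk:
            if ch in string.ascii_lowercase:
                shifted.append(chr((ord(ch) - ord('a') + k) % 26 + ord('a')))
            else:
                shifted.append(ch)      # non-letters pass through unchanged (assumption)
        out.append(''.join(shifted))
        k += 1                          # next substring uses a larger shift
        if k > b + 1:
            k = k0                      # wrap back to the initial key
    return ''.join(out), k0

def latin_djokovic_decrypt(cipher, k0, a=3):
    """Inverse of the sketch above: apply the negative shifts in the same order."""
    b = a + 3
    chunks = [cipher[i:i + k0] for i in range(0, len(cipher), k0)]
    k, out = k0, []
    for chunk in chunks:
        restored = []
        for ch in chunk:
            if ch in string.ascii_lowercase:
                restored.append(chr((ord(ch) - ord('a') - k) % 26 + ord('a')))
            else:
                restored.append(ch)
        out.append(''.join(restored))
        k += 1
        if k > b + 1:
            k = k0
    return ''.join(out)

ciphertext, key = latin_djokovic_encrypt("attackatdawn")
assert latin_djokovic_decrypt(ciphertext, key) == "attackatdawn"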
Procedia PDF Downloads 103
14685 An Advanced Match-Up Scheduling Under Single Machine Breakdown
Abstract:
When a machine breakdown forces a Modified Flow Shop (MFS) out of the prescribed state, the proposed strategy reschedules part of the initial schedule to match up with the preschedule at some point. The objective is to create a new schedule that is consistent with the other production planning decisions, like material flow, tooling, and purchasing, by utilizing the time-critical decision-making concept. We propose a new rescheduling strategy and a match-up point determination procedure through a feedback mechanism to increase both schedule quality and stability. The proposed approach is compared with alternative reactive scheduling methods under different experimental settings. Keywords: advanced critical task methods, modified flow shop (MFS), manufacturing, experiment, determination
Procedia PDF Downloads 406
14684 Heuristic Methods for the Capacitated Location-Allocation Problem with Stochastic Demand
Authors: Salinee Thumronglaohapun
Abstract:
The proper number and appropriate locations of service centers can save cost, raise revenue, and gain more satisfaction from customers. Establishing service centers is costly, and they are difficult to relocate. Over long-term planning periods, several factors may affect the service. One of the most critical factors is the uncertain demand of customers. The opened service centers need to be capable of serving customers and making a profit even though the demand changes in each period. In this work, the capacitated location-allocation problem with stochastic demand is considered. A mathematical model is formulated to determine suitable locations of service centers and their allocation to maximize total profit over multiple planning periods. Two heuristic methods, a local search and a genetic algorithm, are used to solve this problem. For the local search, five different probabilities of choosing each type of move are applied. For the genetic algorithm, three different replacement strategies are considered. The results of applying each method to solve numerical examples are compared. Both methods reach the same best-found solution in most examples, but the genetic algorithm provides better solutions in some cases. Keywords: location-allocation problem, stochastic demand, local search, genetic algorithm
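As a concrete illustration of the local-search idea, the Python sketch below runs open/close/swap moves over the set of open sites of a toy capacitated facility-location instance, with customers greedily assigned to the cheapest open site with remaining capacity. All numbers are made up, the objective is phrased as cost minimization rather than the paper's profit maximization, and the move probabilities are uniform rather than the five tuned values the abstract mentions.

import random

# Toy capacitated facility-location instance (all numbers are illustrative assumptions).
random.seed(0)
n_sites, n_customers = 5, 12
capacity   = [40] * n_sites
open_cost  = [100, 120, 90, 110, 95]
demand     = [random.randint(5, 12) for _ in range(n_customers)]
serve_cost = [[random.randint(1, 20) for _ in range(n_sites)] for _ in range(n_customers)]

def greedy_assign(open_sites):
    """Assign each customer to the cheapest open site with remaining capacity."""
    remaining = {s: capacity[s] for s in open_sites}
    total = sum(open_cost[s] for s in open_sites)
    for c in range(n_customers):
        feasible = [s for s in open_sites if remaining[s] >= demand[c]]
        if not feasible:
            return float('inf')          # infeasible configuration
        s = min(feasible, key=lambda s: serve_cost[c][s])
        remaining[s] -= demand[c]
        total += serve_cost[c][s]
    return total

def local_search(iterations=2000):
    """Local search over open/close/swap moves on the set of open sites."""
    current = set(range(n_sites))        # start with every site open
    best_cost = greedy_assign(current)
    for _ in range(iterations):
        candidate = set(current)
        move = random.choice(['open', 'close', 'swap'])
        closed = [s for s in range(n_sites) if s not in candidate]
        if move == 'open' and closed:
            candidate.add(random.choice(closed))
        elif move == 'close' and len(candidate) > 1:
            candidate.remove(random.choice(list(candidate)))
        elif move == 'swap' and closed:
            candidate.remove(random.choice(list(candidate)))
            candidate.add(random.choice(closed))
        cost = greedy_assign(candidate)
        if cost < best_cost:             # accept only improving moves
            current, best_cost = candidate, cost
    return current, best_cost

print(local_search())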
Procedia PDF Downloads 125
14683 Towards a Standardization in Scheduling Models: Assessing the Variety of Homonyms
Authors: Marcel Rojahn, Edzard Weber, Norbert Gronau
Abstract:
Terminology is a critical instrument for every researcher. Different terminologies for the same research object may arise in different research communities. Through this inconsistency, many synergistic effects are lost. Theories and models will be more understandable and reusable if a common terminology is applied. This paper examines the terminological (in)consistency of the research field of job-shop scheduling through a literature review. There is an enormous variety in the choice of terms and mathematical notation for the same concept. The comparability, reusability, and combinability of scheduling methods are unnecessarily hampered by the arbitrary use of homonyms and synonyms. The acceptance in the community of the variables and notation forms used is shown by means of a compliance quotient. This is demonstrated by the evaluation of 240 scientific publications on planning methods. Keywords: job-shop scheduling, terminology, notation, standardization
Procedia PDF Downloads 109
14682 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured by an FT-NIR analyzer in the 4,000-12,000 cm⁻¹ spectral range. Calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region. Conventional criteria such as R², the root mean square error of cross-validation (RMSECV), and the root mean square error of estimation (RMSEE), as well as the number of PLS factors, were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of the pasta samples were treated with different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method had a very good correlation with the values determined via traditional methods (R² = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R² = 0.9775). The MMN pre-processing method was found to be the most suitable, and a maximum coefficient of determination (R²) value of 0.9875 was obtained for the calibration model developed. Keywords: FT-NIR, pasta, moisture determination, food engineering
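A minimal Python sketch of the modelling step is shown below: min-max normalization of each spectrum followed by a cross-validated PLS regression against moisture, reporting RMSECV and R². The spectra are synthetic stand-ins generated on the fly, and the band shape, noise levels, and number of latent PLS factors are assumptions, so the numbers it prints are illustrative only.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for NIR spectra: a sloping baseline plus a moisture-dependent band.
rng = np.random.default_rng(42)
n_samples, n_points = 150, 500
moisture = rng.uniform(10, 15, n_samples)                      # % moisture, wet basis
band = np.exp(-0.5 * ((np.arange(n_points) - 250) / 20.0) ** 2)  # assumed water band
baseline = rng.normal(1.0, 0.05, (n_samples, 1)) * np.linspace(1.0, 2.0, n_points)
spectra = baseline + 0.02 * moisture[:, None] * band
spectra += rng.normal(0, 0.01, (n_samples, n_points))          # measurement noise

# Min-max normalization (MMN) of each spectrum, the selected pre-processing.
lo = spectra.min(axis=1, keepdims=True)
hi = spectra.max(axis=1, keepdims=True)
spectra_mmn = (spectra - lo) / (hi - lo)

# PLS calibration model; the number of latent factors is an assumption.
pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra_mmn, moisture, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - moisture) ** 2))
r2 = 1 - np.sum((pred - moisture) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(f"RMSECV = {rmsecv:.3f} %, R2 = {r2:.3f}")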
Procedia PDF Downloads 258
14681 Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods
Authors: Ali Berkan Ural
Abstract:
This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast-based diagnostic problems (breast cancer, nodules, or lumps) by a Computer Aided Diagnosis (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial for discovering the disease in the patient (especially in women) as early and as fast as possible. In this study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with more accuracy. This system first works with image processing methods (image acquisition, noise removal, region growing segmentation, morphological operations, breast border extraction, advanced segmentation, obtaining regions of interest (ROIs), etc.) and segments the area of interest of the breast, and then analyzes these partly obtained areas for cancer detection/lumps in order to diagnose the disease. After segmentation, using the spectrogram images, five different deep learning-based methods (Convolutional Neural Network (CNN)-based AlexNet, ResNet50, VGG16, DenseNet, and Xception) are applied to classify the breast-based problems. Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning
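The classification stage described above is a standard transfer-learning setup. The Python/Keras sketch below shows the general pattern with one of the listed backbones (ResNet50) and a binary head for suspicious vs. normal ROIs; the input size, head layout, dummy training data, and training settings are all assumptions for illustration, not the authors' configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained ResNet50 backbone (one of the five networks listed) with a small binary head.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False                      # freeze ImageNet features initially

model = models.Sequential([
    backbone,
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),      # probability that the ROI is suspicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy ROI batch standing in for segmented mammogram patches (illustrative only).
x_dummy = tf.random.uniform((16, 224, 224, 3))
y_dummy = tf.cast(tf.random.uniform((16, 1), maxval=2, dtype=tf.int32), tf.float32)
model.fit(x_dummy, y_dummy, epochs=1, batch_size=8, verbose=0)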
Procedia PDF Downloads 96
14680 Advances in Artificial Intelligence Using Speech Recognition
Authors: Khaled M. Alhawiti
Abstract:
This research study aims to present a retrospective study of speech recognition systems and artificial intelligence. Speech recognition has become one of the most widely used technologies, as it offers a great opportunity to interact and communicate with automated machines. Precisely, it can be affirmed that speech recognition facilitates its users and helps them to perform their daily routine tasks in a more convenient and effective manner. This research intends to present an illustration of recent technological advancements associated with artificial intelligence. Recent research has revealed that decoding speech is the foremost issue affecting speech recognition. In order to overcome these issues, different statistical models were developed by researchers. Some of the most prominent statistical models include the acoustic model (AM), the language model (LM), the lexicon model, and hidden Markov models (HMMs). This research will help in understanding all of these statistical models of speech recognition. Researchers have also formulated different decoding methods, which are utilized for realistic decoding tasks and constrained artificial languages. These decoding methods include pattern recognition, acoustic-phonetic approaches, and artificial intelligence. It has been recognized that artificial intelligence is the most efficient and reliable method used in speech recognition. Keywords: speech recognition, acoustic phonetic, artificial intelligence, hidden Markov models (HMM), statistical models of speech recognition, human machine performance
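Since HMM-based decoding is central to the statistical models listed above, here is a minimal Viterbi decoder in Python: given initial, transition, and emission probabilities, it recovers the most likely hidden-state (e.g., phoneme) sequence for an observation sequence. The toy two-state example and its probabilities are made up purely for illustration.

import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Minimal Viterbi decoder: most likely hidden-state path for an HMM."""
    n_states, T = len(start_p), len(obs)
    logv = np.full((T, n_states), -np.inf)     # best log-probability so far
    back = np.zeros((T, n_states), dtype=int)  # backpointers for path recovery

    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s]) + np.log(emit_p[s, obs[t]])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]]

    path = [int(np.argmax(logv[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy two-phoneme example with made-up probabilities (purely illustrative).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))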
Procedia PDF Downloads 478
14679 Diagnosis of Avian Pathology in the East of Algeria
Authors: Khenenou Tarek, Benzaoui Hassina, Melizi Mohamed
Abstract:
Diagnosis requires a background of current knowledge in the field and also complementary means, among which the laboratory occupies the central place for a better investigation. A correct diagnosis makes it possible to establish the most appropriate treatment as soon as possible and avoids both the economic losses associated with mortality and the growth retardation often observed in poultry; furthermore, it may reduce the high cost of treatment. Epidemiological surveys and hematological and histopathological studies are three aspects of diagnosis heavily used in both human and veterinary pathology, and advanced research in human medicine could be exploited and applied in veterinary medicine with appropriate modification. However, the diagnostic methods in the east of Algeria are limited to clinical signs and necropsy findings. Therefore, the diagnosis is based simply on the success or failure of the therapeutic methods (therapeutic diagnosis). Keywords: chicken, diagnosis, hematology, histopathology
Procedia PDF Downloads 631
14678 Human Factors Interventions for Risk and Reliability Management of Defence Systems
Authors: Chitra Rajagopal, Indra Deo Kumar, Ila Chauhan, Ruchi Joshi, Binoy Bhargavan
Abstract:
Reliability and safety are essential for the success of mission-critical and safety-critical defence systems. Humans are part of the entire life cycle of defence systems development and deployment. The majority of industrial accidents or disasters are attributed to human errors. Therefore, considerations of human performance and human reliability are critical in all complex systems, including defence systems. Defence systems operating from ground, naval, and aerial platforms in diverse conditions impose unique physical and psychological challenges on the human operators. Some of the safety- and mission-critical defence systems with human-machine interactions are fighter planes, submarines, warships, combat vehicles, missiles based on aerial and naval platforms, etc. Human roles and responsibilities are also going through a transition due to the infusion of artificial intelligence and cyber technologies. Human operators not accustomed to such challenges are more likely to commit errors, which may lead to accidents or loss events. In such a scenario, it is imperative to understand the human factors in defence systems for better system performance, safety, and cost-effectiveness. A case study using a Task Analysis (TA) based methodology for the assessment and reduction of human errors in an air and missile defence system in the context of emerging technologies is presented. Action-oriented task analysis techniques such as Hierarchical Task Analysis (HTA) and the Operator Action Event Tree (OAET), along with the Critical Action and Decision Event Tree (CADET) for cognitive task analysis, were used. Human factors assessment based on the task analysis helps in realizing safe and reliable defence systems. These techniques helped in the identification of human errors during different phases of air and missile defence operations, helping to meet the requirement of a safe, reliable, and cost-effective mission. Keywords: defence systems, reliability, risk, safety
Procedia PDF Downloads 136
14677 Monte Carlo Methods and Statistical Inference of Multitype Branching Processes
Authors: Ana Staneva, Vessela Stoimenova
Abstract:
A parametric estimation of the multitype branching process (MBP) with a power series offspring distribution family is considered in this paper. The MLE for the parameters is obtained in the case when the observable data are incomplete and consist only of the generation sizes of the family tree of the MBP. The parameter estimation is calculated by using the Monte Carlo EM algorithm. The estimates of the posterior distribution and of the offspring distribution parameters are calculated by using the Bayesian approach and the Gibbs sampler. The article proposes various examples with bivariate branching processes together with computational results, simulation, and an implementation using R. Keywords: Bayesian, branching processes, EM algorithm, Gibbs sampler, Monte Carlo methods, statistical estimation
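To illustrate the kind of incomplete data the abstract describes (generation sizes only), the Python sketch below simulates a single-type Galton-Watson process with Poisson offspring and recovers the offspring mean with the classical Harris-type ratio estimator. This is a deliberately simplified single-type analogue, not the authors' multitype Monte Carlo EM or Gibbs procedure.

import numpy as np

rng = np.random.default_rng(1)

def simulate_generations(offspring_mean, n_generations, z0=1):
    """Generation sizes of a single-type Galton-Watson process with Poisson offspring."""
    sizes = [z0]
    for _ in range(n_generations):
        z = sizes[-1]
        sizes.append(int(rng.poisson(offspring_mean, size=z).sum()) if z > 0 else 0)
        if sizes[-1] == 0:
            break
    return sizes

# Harris-type MLE of the offspring mean from generation sizes only:
# m_hat = sum_{n>=1} Z_n / sum_{n>=0} Z_n (numerator shifted by one generation).
sizes = simulate_generations(offspring_mean=1.4, n_generations=12)
m_hat = sum(sizes[1:]) / sum(sizes[:-1])
print(sizes, round(m_hat, 3))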
Procedia PDF Downloads 421
14676 Effect of Distance Education Students Motivation with the Turkish Language and Literature Course
Authors: Meva Apaydin, Fatih Apaydin
Abstract:
The role of education in the development of society is great. Teaching and training began at the beginning of history, and different methods and techniques have been applied as time passed, with the aim of raising the level of learning. In addition to traditional teaching methods, technology has been used in recent years. With the beginning of the use of the internet in education, some problems that could not be solved until then have been addressed, and it has been shown that it is possible to educate learners using contemporary methods as well as traditional ones. As an advantage of technological developments, distance education is a system that paves the way for students to be educated individually, wherever and whenever they like, without the need for a physical school environment. Distance education has become prevalent because of the physical inadequacies of educational institutions; as a result, disadvantageous circumstances such as social complexities, individual differences, and especially geographical distance disappear. What is more, the high speed of feedback between teachers and learners, the improvement in student motivation because there is no limitation of time, the low cost, and objective measurement and evaluation come to the foreground. Although distance education has teaching benefits, it also has limitations. Among the most important problems are these: some problems that are highly likely to arise may not be solved in time; the lack of eye contact between the teacher and the learner means that trustworthy feedback cannot be obtained; and problems stemming from an inadequate technological background are merely some of them. Courses are conducted via distance education in many departments of the universities in our country. In recent years, giving lectures such as Turkish Language, English, and History in the first years of academic departments at universities has become an increasingly prevalent practice. This study examines the application of the Turkish Language course via the distance education system by analyzing the advantages and disadvantages of the internet-based distance education system. Keywords: distance education, Turkish language, motivation, benefits
Procedia PDF Downloads 437
14675 TDApplied: An R Package for Machine Learning and Inference with Persistence Diagrams
Authors: Shael Brown, Reza Farivar
Abstract:
Persistence diagrams capture valuable topological features of datasets that other methods cannot uncover. Still, their adoption in data pipelines has been limited due to the lack of publicly available tools in R (and Python) for analyzing groups of them with machine learning and statistical inference. In an easy-to-use and scalable R package called TDApplied, we implement several applied analysis methods tailored to groups of persistence diagrams. The two main contributions of our package are comprehensiveness (most functions do not have implementations elsewhere) and speed (shown through benchmarking against other R packages). We demonstrate applications of the tools on simulated data to illustrate how easily practical analyses of any dataset can be enhanced with topological information. Keywords: machine learning, persistence diagrams, R, statistical inference
Procedia PDF Downloads 87
14674 Coagulase Negative Staphylococci: Phenotypic Characterization and Antimicrobial Susceptibility Pattern
Authors: Lok Bahadur Shrestha, Narayan Raj Bhattarai, Basudha Khanal
Abstract:
Introduction: Coagulase-negative staphylococci (CoNS) are normal commensals of human skin and mucous membranes. The study was carried out to determine the prevalence of CoNS among clinical isolates, to characterize them to the species level, and to compare three conventional methods for the detection of biofilm formation. Objectives: To characterize the clinically significant coagulase-negative staphylococci to the species level, to compare the three phenotypic methods for the detection of biofilm formation, and to study the antimicrobial susceptibility pattern of the isolates. Methods: CoNS isolates were obtained from various clinical samples over a period of 1 year. Characterization to the species level was done using biochemical tests, and the study of biofilm formation was done by the tube adherence, Congo red agar, and tissue culture plate methods. Results: Among 71 CoNS isolates, seven species were identified. S. epidermidis was the most common species, followed by S. saprophyticus and S. haemolyticus. The antimicrobial susceptibility pattern of CoNS documented 90% resistance to ampicillin. Resistance to cefoxitin and ceftriaxone was observed in 55% of the isolates. We detected biofilm formation in 71.8% of isolates. The sensitivity of the tube adherence method was 82%, while that of the Congo red agar method was 78%. Conclusion: Among the 71 CoNS isolates, S. epidermidis was the most common, followed by S. saprophyticus and S. haemolyticus. Biofilm formation was detected in 71.8% of the isolates. All of the methods were effective at detecting biofilm-producing CoNS strains. Biofilm-forming strains are more resistant to antibiotics than biofilm non-formers. Keywords: CoNS, Congo red agar, bloodstream infections, foreign body-related infections, tissue culture plate
Procedia PDF Downloads 199
14673 Comparing Community Detection Algorithms in Bipartite Networks
Authors: Ehsan Khademi, Mahdi Jalili
Abstract:
Despite the special features of bipartite networks, they are common in many systems. Real-world bipartite networks may show community structure, similar to what one can find in one-mode networks. However, the interpretation of community structure in bipartite networks is different compared to one-mode networks. In this manuscript, we compare a number of available methods that are frequently used to discover the community structure of bipartite networks. These methods are categorized into two broad classes. One class comprises methods that first project the network onto a one-mode network and then apply community detection algorithms. The other class consists of algorithms that have been developed specifically for bipartite networks. These algorithms are applied to a model network with a prescribed community structure. Keywords: community detection, bipartite networks, co-clustering, modularity, network projection, complex networks
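The first class of methods (project, then detect) can be illustrated in a few lines with NetworkX. The Python sketch below builds a tiny bipartite graph with two planted groups, projects it onto one node set with weighted edges, and runs greedy modularity maximization on the projection; it also runs the same detector directly on the bipartite graph for comparison. The graph and the choice of detector are illustrative assumptions, not the specific algorithms benchmarked in the paper.

import networkx as nx
from networkx.algorithms import bipartite, community

# Small synthetic bipartite network with two planted groups (illustrative only).
B = nx.Graph()
top = [f"u{i}" for i in range(6)]          # one node set (e.g., users)
bottom = [f"p{i}" for i in range(4)]       # other node set (e.g., products)
B.add_nodes_from(top, bipartite=0)
B.add_nodes_from(bottom, bipartite=1)
# Users 0-2 mostly interact with products 0-1, users 3-5 with products 2-3.
B.add_edges_from([("u0", "p0"), ("u0", "p1"), ("u1", "p0"), ("u1", "p1"),
                  ("u2", "p1"), ("u2", "p0"),
                  ("u3", "p2"), ("u3", "p3"), ("u4", "p2"), ("u4", "p3"),
                  ("u5", "p3"), ("u5", "p2"), ("u2", "p2")])

# Class 1: project onto the user one-mode network, then detect communities there.
P = bipartite.weighted_projected_graph(B, top)
one_mode_comms = community.greedy_modularity_communities(P, weight="weight")
print("one-mode projection communities:", [sorted(c) for c in one_mode_comms])

# For comparison, modularity maximization applied directly to the bipartite graph.
direct_comms = community.greedy_modularity_communities(B)
print("direct communities:", [sorted(c) for c in direct_comms])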
Procedia PDF Downloads 627
14672 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations
Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha
Abstract:
This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, which are available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods to solve our problem by passing the preprocessed signals to fully connected neural networks (FCNN) and recurrent neural networks (RNN) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the accuracy of the RNN is 79.9%, with a processing time of 8 minutes and 22 seconds. Third, we applied some postprocessing methods to improve the accuracy, like majority voting (MV) and the Movement Error Rate (MER). The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay when varying the window length used for the MV. A separate part uses the RNN with intersubject evaluation. The experimental results showed that to get good testing accuracy with reasonable processing time, we should use around 20 subjects. Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation
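The majority-voting postprocessing step can be made concrete with a short sketch: per-frame class predictions are smoothed over a sliding window, and the window length is exactly the knob that trades accuracy against prediction delay (the trade-off the MER captures). The Python sketch below uses a made-up prediction stream, not Ninapro data.

import numpy as np
from collections import Counter

def majority_vote(frame_predictions, window_length):
    """Smooth per-frame class predictions with a sliding (causal) majority vote."""
    preds = np.asarray(frame_predictions)
    smoothed = preds.copy()
    for t in range(len(preds)):
        start = max(0, t - window_length + 1)
        smoothed[t] = Counter(preds[start:t + 1].tolist()).most_common(1)[0][0]
    return smoothed

# Toy example: a noisy classifier output for a single movement (true class 3).
raw = [3, 3, 1, 3, 3, 0, 3, 3, 3, 2, 3, 3]
print(majority_vote(raw, window_length=5))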
Procedia PDF Downloads 145
14671 Utilization of Long Acting Reversible Contraceptive Methods, and Associated Factors among Female College Students in Gondar Town, Northwest Ethiopia, 2018
Authors: Woledegebrieal Aregay
Abstract:
Introduction: Family planning is defined as the ability of individuals and couples to anticipate and attain their desired number of children and the spacing and timing of their births. It is part of a strategy to reduce poverty and maternal, infant, and child mortality, and it empowers women by lightening the burden of excessive childbearing. Family planning is achieved through the use of different contraceptive methods, among which the most effective are modern methods such as Long-Acting Reversible Contraceptives (LARCs), namely the IUCD and the implant; these methods have multiple advantages over other reversible methods. Most importantly, once in place, they do not require maintenance, and their duration of action is long, ranging from 3 to 10 years. Methods: An institution-based cross-sectional study was conducted in Gondar town among female college students from April to May. A simple random sampling technique was employed to recruit a total of 1166 study subjects. Descriptive statistics were computed for all predictor and dependent variables. The presence of an association between covariates and LARC use was examined using two-way tables and the chi-square test. Bivariate logistic regression was conducted to identify all possible factors affecting LARC utilization, and the crude odds ratios, 95% confidence intervals (CI), and p-values were observed. A multivariable logistic regression model was developed to control for possible confounding variables. Adjusted odds ratios (AOR) with 95% confidence intervals (CI) and p-values were computed to identify factors significantly associated (P < 0.05) with LARC utilization. Results: Utilization of LARCs was 20.4%; the most common method was the implant, 86 (96.5%), followed by the intrauterine contraceptive device (IUCD), 3 (3.5%). The multivariate analysis revealed significant associations between LARC utilization and the respondent's marital status [AOR 3.965 (2.051-7.665)], discussion of LARC utilization with the husband/boyfriend [AOR 2.198 (1.191-4.058)], and the respondent's attitude toward the implant [AOR 0.365 (0.143-0.933)]. Conclusion: The level of knowledge and attitude in this study was not satisfactory; the utilization of long-acting reversible contraceptives among college students was relatively satisfactory, but if the knowledge and attitude of the participants improved, the prevalence of LARC use would increase. Keywords: utilization, long-acting reversible contraceptive, Ethiopia, Gondar
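For readers unfamiliar with how adjusted odds ratios like those reported above are produced, the Python sketch below fits a multivariable logistic regression on synthetic data and exponentiates the coefficients to obtain AORs with 95% confidence intervals. The variables, effect sizes, and data are entirely simulated stand-ins, not the Gondar survey data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data standing in for the survey (all values are illustrative).
rng = np.random.default_rng(0)
n = 1166
df = pd.DataFrame({
    "married":   rng.integers(0, 2, n),   # marital status (1 = married)
    "discussed": rng.integers(0, 2, n),   # discussed LARC with partner
    "attitude":  rng.integers(0, 2, n),   # positive attitude toward implant
})
# Simulated outcome with assumed effect sizes, just to make the model estimable.
logit = -2.0 + 1.3 * df["married"] + 0.8 * df["discussed"] - 1.0 * df["attitude"]
df["uses_larc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Multivariable logistic regression: adjusted odds ratios with 95% CIs.
X = sm.add_constant(df[["married", "discussed", "attitude"]])
fit = sm.Logit(df["uses_larc"], X).fit(disp=False)
aor = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.DataFrame({"AOR": aor, "CI 2.5%": ci[0], "CI 97.5%": ci[1]}))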
Procedia PDF Downloads 226
14670 Ultrasonic Treatment of Baker’s Yeast Effluent
Authors: Emine Yılmaz, Serap Fındık
Abstract:
The baker’s yeast industry uses molasses as a raw material. Molasses is an end product of the sugar industry. Wastewater from molasses processing contains large amounts of coloured substances that give a dark brown colour and a high organic load to the effluents. The main coloured compounds are known as melanoidins. Melanoidins are products of the Maillard reaction between amino and carbonyl groups in molasses. The dark colour prevents sunlight penetration and reduces the photosynthetic activity and dissolved oxygen level of surface waters. Various methods, such as biological processes (aerobic and anaerobic), ozonation, wet air oxidation, and coagulation/flocculation, are used for the treatment of baker’s yeast effluent. Before the effluent is discharged, adequate treatment is imperative. In addition to this, increasingly stringent environmental regulations are forcing distilleries to improve existing treatment and also to find alternative methods of effluent management or combinations of treatment methods. Sonochemical oxidation is one of the alternative methods. Sonochemical oxidation employs ultrasound, resulting in cavitation phenomena. In this study, the decolorization of baker’s yeast effluent was investigated by using ultrasound. Baker’s yeast effluent was supplied by a factory located in the north of Turkey. An ultrasonic homogenizer was used for this study; its operating frequency is 20 kHz. A TiO2-ZnO catalyst was used as the sonocatalyst. The effects of the TiO2-ZnO molar proportion, the calcination temperature and time, and the catalyst amount on the decolorization of baker’s yeast effluent were investigated. The results showed that the composite TiO2-ZnO prepared with a 4:1 molar proportion and treated at 700°C for 90 min provides better results. The initial decolorization rate at 15 min is 3% without catalyst and 14.5% with the catalyst treated at 700°C for 90 min. Keywords: baker’s yeast effluent, decolorization, sonocatalyst, ultrasound
Procedia PDF Downloads 474
14669 Stock Movement Prediction Using Price Factor and Deep Learning
Abstract:
The development of machine learning methods and techniques has opened doors for investigation in many areas such as medicine, economics, finance, etc. One active research area involving machine learning is stock market prediction. This research paper considers multiple techniques and methods for stock movement prediction using historical prices or price factors. The paper explores the effectiveness of some deep learning frameworks for forecasting stock. Moreover, an architecture (TimeStock) is proposed which takes the representation of time into account apart from the price information itself. Our model achieves a promising result that shows a potential approach to the stock movement prediction problem. Keywords: classification, machine learning, time representation, stock prediction
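As a baseline illustration of price-based movement prediction, the Python sketch below turns a synthetic price series into windows of past returns and trains a small LSTM to classify whether the next day's return is positive. The data are random-walk prices, and the window length, network size, and training settings are assumptions; this is not the TimeStock architecture, which additionally encodes a representation of time.

import numpy as np
import tensorflow as tf

# Synthetic daily closes standing in for real price data (illustrative only).
rng = np.random.default_rng(7)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500)))
returns = np.diff(prices) / prices[:-1]

# Windows of 30 past returns predict whether the next day's return is positive.
window = 30
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype("float32")
X = X[..., np.newaxis]                      # (samples, timesteps, features)

split = int(0.8 * len(X))                   # chronological train/test split
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:split], y[:split], epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))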
Procedia PDF Downloads 14714668 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm
Authors: Ghada Badr, Arwa Alturki
Abstract:
The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding and the discovery of other relationships between them. Besides, identifying non-coding RNAs (RNAs that are not translated into a protein) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed. Most of these methods perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and where N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted illustrating the efficiency of the CompPSA algorithm when compared to other approaches on different real and simulated datasets. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalability and efficiency in time and memory performance. Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining
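To show what aligning weighted component features in O(N²) time can look like, the Python sketch below represents each structure as a list of (position, full length, stem length) tuples and aligns the two component sequences with a Needleman-Wunsch-style dynamic program over a weighted feature distance. The feature weights, gap cost, and toy structures are assumptions; this is not the published CompPSA scoring scheme.

import numpy as np

def component_distance(c1, c2, weights=(1.0, 1.0, 1.0)):
    """Weighted feature distance between two components."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, c1, c2))

def align_components(s1, s2, gap_cost=5.0, weights=(1.0, 1.0, 1.0)):
    """O(N^2) global alignment over component sequences; lower cost = more similar."""
    n, m = len(s1), len(s2)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1) * gap_cost
    dp[0, :] = np.arange(m + 1) * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1, j - 1] + component_distance(s1[i - 1], s2[j - 1], weights)
            dp[i, j] = min(match, dp[i - 1, j] + gap_cost, dp[i, j - 1] + gap_cost)
    return dp[n, m]

# Two toy structures: (position, full length, stem length) per component.
structure_a = [(1, 12, 4), (15, 20, 7), (40, 9, 3)]
structure_b = [(2, 11, 4), (16, 22, 6)]
print(align_components(structure_a, structure_b))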
Procedia PDF Downloads 45814667 Detection the Ice Formation Processes Using Multiple High Order Ultrasonic Guided Wave Modes
Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand
Abstract:
Icing causes significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration systems, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types. Icing mechanisms, distributions, and patterns are still relevant research topics. The adhesion strength between ice and surfaces differs in different icing environments. This makes the task of anti-icing very challenging. The techniques for various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). It is noticeable that most methods are oriented toward a particular sector, and adapting them to, or suggesting them for, other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure's design. It is noticeable that the vast majority of these methods require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting. They can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some quite promising information on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW). These methods have the characteristics of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify the ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for the acoustic measurement of ice formation in real conditions (temperature range from +24°C to -23°C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. The selection of wave modes suitable for the detection of the ice formation phenomenon on a copper metal surface was performed. The interaction between the selected wave modes and the ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes. It was demonstrated that the proposed ultrasonic technique can be successfully used for the detection of ice layer formation on a metal surface. Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing
Procedia PDF Downloads 6414666 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography
Authors: Nicole M. Martino
Abstract:
Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records the locations of cracks, potholes, efflorescence, and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck, the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck's deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were first to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see whether combining the results of both methods would provide higher confidence than if the condition assessment were completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as a part of this research, including these plan-view plots, are presented in this paper. Furthermore, in order to address the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess data from a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification. Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks
Procedia PDF Downloads 15514665 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can also be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration relative to sequence, which is treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, allowing scanning only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search, which is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms. Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
Procedia PDF Downloads 235