Search results for: bagging ensemble methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15422

14582 Comparing Community Detection Algorithms in Bipartite Networks

Authors: Ehsan Khademi, Mahdi Jalili

Abstract:

Despite their special features, bipartite networks are common in many systems. Real-world bipartite networks may show community structure, similar to what one can find in one-mode networks. However, the interpretation of the community structure in bipartite networks differs from that in one-mode networks. In this manuscript, we compare a number of available methods that are frequently used to discover the community structure of bipartite networks. These methods fall into two broad classes. One class first transforms the network into a one-mode network and then applies standard community detection algorithms. The other class consists of algorithms developed specifically for bipartite networks. These algorithms are applied to a model network with prescribed community structure.
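As a toy illustration of the first class of methods, the bipartite network can be projected onto one of its node sets before a standard one-mode community detection algorithm is applied. The sketch below (pure Python; the function name and the weighting-by-shared-neighbors scheme are our illustrative choices, not taken from the paper) builds such a weighted one-mode projection:

```python
from itertools import combinations
from collections import defaultdict

def project_bipartite(edges):
    """Weighted one-mode projection of a bipartite network.

    `edges` is a list of (top_node, bottom_node) pairs; the projection
    links pairs of top nodes weighted by how many bottom nodes they share.
    """
    neighbors = defaultdict(set)        # bottom node -> set of top nodes
    for top, bottom in edges:
        neighbors[bottom].add(top)
    weights = defaultdict(int)          # (top_u, top_v) -> shared-neighbor count
    for tops in neighbors.values():
        for u, v in combinations(sorted(tops), 2):
            weights[(u, v)] += 1
    return dict(weights)

# Two user-item style groups: A and B share items 1 and 2; C and D share item 3.
edges = [("A", 1), ("B", 1), ("A", 2), ("B", 2), ("C", 3), ("D", 3)]
print(project_bipartite(edges))   # {('A', 'B'): 2, ('C', 'D'): 1}
```

Community detection is then run on the resulting weighted one-mode graph; the drawback the paper alludes to is that the projection discards part of the bipartite information.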

Keywords: community detection, bipartite networks, co-clustering, modularity, network projection, complex networks

Procedia PDF Downloads 625
14581 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations

Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha

Abstract:

This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied preprocessing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods, passing the preprocessed signals to fully connected neural networks (FCNN) and recurrent neural networks (RNN) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the FCNN achieved 72% accuracy with a training time of around 76 minutes, while the RNN achieved 79.9% accuracy in 8 minutes and 22 seconds. Third, we applied postprocessing methods to improve the accuracy, namely majority voting (MV) and the Movement Error Rate (MER). The accuracy after applying MV is 75% for the FCNN and 86% for the RNN. The MER has an inverse relationship with the prediction delay as the window length used for the MV is varied. The final part uses the RNN with intersubject evaluation. The experimental results showed that around 20 subjects are needed to obtain good test accuracy with reasonable processing time.
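The majority-voting postprocessing step described above can be sketched in a few lines: each frame's predicted label is replaced by the most common label in a trailing window, which trades prediction delay for stability (the window size and label names here are illustrative, not the paper's settings):

```python
from collections import Counter

def majority_vote(labels, window=3):
    """Smooth per-frame class predictions with a trailing majority vote;
    a larger window smooths more but increases the decision delay."""
    smoothed = []
    for i in range(len(labels)):
        votes = Counter(labels[max(0, i - window + 1):i + 1])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed

# A single spurious "rest" frame inside a "grip" movement is voted away.
print(majority_vote(["grip", "grip", "rest", "grip", "grip"]))
# → ['grip', 'grip', 'grip', 'grip', 'grip']
```

This is exactly the delay/accuracy trade-off the abstract reports: lengthening the window raises accuracy but increases how long the prosthesis waits before committing to a movement.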

Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation

Procedia PDF Downloads 142
14580 Utilization of Long Acting Reversible Contraceptive Methods, and Associated Factors among Female College Students in Gondar Town, Northwest Ethiopia, 2018

Authors: Woledegebrieal Aregay

Abstract:

Introduction: Family planning is defined as the ability of individuals and couples to anticipate and attain their desired number of children and the spacing and timing of their births. It is part of a strategy to reduce poverty and maternal, infant, and child mortality, and it empowers women by lightening the burden of excessive childbearing. Family planning is achieved through the use of different contraceptive methods, among which the most effective are modern methods such as Long-Acting Reversible Contraceptives (LARCs), namely the IUCD and the implant; these have multiple advantages over other reversible methods. Most importantly, once in place, they do not require maintenance, and their duration of action is long, ranging from 3 to 10 years. Methods: An institution-based cross-sectional study was conducted among female college students in Gondar town from April to May. A simple random sampling technique was employed to recruit a total of 1166 study subjects. Descriptive statistics were computed for all predictor and dependent variables. The association between covariates and LARC use was examined with two-way tables using the chi-square test. Bivariate logistic regression was conducted to identify all possible factors affecting LARC utilization, and the crude odds ratio, 95% confidence interval (CI), and P-value were recorded. A multivariable logistic regression model was developed to control for possible confounding variables. Adjusted odds ratios (AOR) with 95% CI and P-values were computed to identify factors significantly associated with LARC utilization (P < 0.05). Result: Utilization of LARCs was 20.4%; the most common method was the implant, 86 (96.5%), followed by the Intra-Uterine Contraceptive Device (IUCD), 3 (3.5%). The multivariate analysis revealed significant associations of LARC utilization with the respondent's marital status [AOR 3.965 (2.051-7.665)], discussion of LARC utilization with the husband/boyfriend [AOR 2.198 (1.191-4.058)], and the respondent's attitude towards the implant [AOR 0.365 (0.143-0.933)]. Conclusion: The level of knowledge and attitude observed in this study was not satisfactory. Utilization of long-acting reversible contraceptives among college students was relatively satisfactory, but if participants' knowledge and attitude improved, the prevalence of LARC use would be expected to increase.
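For readers unfamiliar with the odds ratios reported above, a crude odds ratio and its 95% confidence interval can be computed from a 2x2 table with the standard log-odds formula. A minimal sketch with made-up counts (not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI for a 2x2 table:
                 outcome+  outcome-
    exposed         a         b
    unexposed       c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 users / 80 non-users among married students,
# 10 users / 90 non-users among unmarried students.
print(odds_ratio_ci(20, 80, 10, 90))   # OR = 2.25
```

An adjusted odds ratio (AOR), as reported in the abstract, comes instead from the coefficients of the multivariable logistic regression; the crude version above is the bivariate starting point.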

Keywords: utilization, long-acting reversible contraceptive, Ethiopia, Gondar

Procedia PDF Downloads 224
14579 Ultrasonic Treatment of Baker’s Yeast Effluent

Authors: Emine Yılmaz, Serap Fındık

Abstract:

The baker’s yeast industry uses molasses as a raw material. Molasses is an end product of the sugar industry. Wastewater from molasses processing contains a large amount of coloured substances that give a dark brown color and a high organic load to the effluent. The main coloured compounds are known as melanoidins. Melanoidins are products of the Maillard reaction between amino acids and carbonyl groups in molasses. The dark colour prevents sunlight penetration and reduces the photosynthetic activity and dissolved oxygen level of surface waters. Various methods, such as biological processes (aerobic and anaerobic), ozonation, wet air oxidation, and coagulation/flocculation, are used to treat baker’s yeast effluent. Adequate treatment before the effluent is discharged is imperative. In addition, increasingly stringent environmental regulations are forcing distilleries to improve existing treatment, to find alternative methods of effluent management, or to combine treatment methods. Sonochemical oxidation is one such alternative method; it employs ultrasound, resulting in cavitation phenomena. In this study, the decolorization of baker’s yeast effluent was investigated using ultrasound. The effluent was supplied by a factory located in the north of Turkey. An ultrasonic homogenizer with an operating frequency of 20 kHz was used, and a TiO2-ZnO catalyst served as the sonocatalyst. The effects of the TiO2-ZnO molar proportion, calcination temperature and time, and catalyst amount on the decolorization of the effluent were investigated. The results showed that the composite TiO2-ZnO prepared with a 4:1 molar proportion and treated at 700°C for 90 min provided the best result. The initial decolorization at 15 min was 3% without catalyst and 14.5% with the catalyst treated at 700°C for 90 min.

Keywords: baker’s yeast effluent, decolorization, sonocatalyst, ultrasound

Procedia PDF Downloads 474
14578 Stock Movement Prediction Using Price Factor and Deep Learning

Authors: Hy Dang, Bo Mei

Abstract:

The development of machine learning methods and techniques has opened doors for investigation in many areas such as medicine, economics, and finance. One active research area involving machine learning is stock market prediction. This paper considers multiple techniques and methods for stock movement prediction using historical prices or price factors, and explores the effectiveness of several deep learning frameworks for forecasting stock movements. Moreover, an architecture (TimeStock) is proposed which takes the representation of time into account in addition to the price information itself. Our model achieves a promising result, showing a potential approach to the stock movement prediction problem.

Keywords: classification, machine learning, time representation, stock prediction

Procedia PDF Downloads 147
14577 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and the discovery of further relationships between them. Besides, identifying non-coding RNAs (those not translated into proteins) is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most perform partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention has been given in the literature to efficient RNA structure representations, and full structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm when compared to other approaches. The CompPSA algorithm gives an accurate similarity measure between components and offers the user the flexibility to align the two RNA structures based on weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
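A weighted component-feature comparison of the kind described can be sketched as follows. The feature names mirror those listed in the abstract (position, full length, stem length), but the normalization and scoring are our illustrative choices, not CompPSA's exact scheme:

```python
def component_similarity(c1, c2, weights):
    """Similarity in [0, 1] between two structure components, each given
    as a dict of numeric features; 1.0 means the components are identical."""
    gap = 0.0
    for feat, w in weights.items():
        scale = max(c1[feat], c2[feat], 1)            # per-feature normalizer
        gap += w * abs(c1[feat] - c2[feat]) / scale   # weighted relative gap
    return 1.0 - gap / sum(weights.values())

stem1 = {"position": 12, "full_length": 30, "stem_length": 8}
stem2 = {"position": 12, "full_length": 30, "stem_length": 8}
w = {"position": 1.0, "full_length": 1.0, "stem_length": 2.0}
print(component_similarity(stem1, stem2, w))   # identical components → 1.0
```

The user-adjustable `weights` dict plays the role of the abstract's "weighted features": setting a weight to zero simply removes that feature from the comparison.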

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 458
14576 Detection of Ice Formation Processes Using Multiple High-Order Ultrasonic Guided Wave Modes

Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand

Abstract:

Icing brings significant damage to aviation and renewable energy installations. Air conditioning, refrigeration, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types, and icing mechanisms, distributions, and patterns remain active research topics. The adhesion strength between ice and surfaces differs in different icing environments, which makes the task of anti-icing very challenging. Techniques for the various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, and reliable operation). Noticeably, most methods are oriented toward a particular sector, and adapting or recommending them for other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure’s design. The vast majority of these methods require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting; they can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some promising information on ultrasonic ice mitigation methods that employ UGW (Ultrasonic Guided Waves). These methods have the advantages of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation under real conditions (temperature range from +24°C to -23°C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. Wave modes suitable for the detection of the ice formation phenomenon on a copper surface were selected, and the interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes. It was demonstrated that the proposed ultrasonic technique can be successfully used for the detection of ice layer formation on a metal surface.

Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing

Procedia PDF Downloads 64
14575 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence, and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck, which is where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were first to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide higher confidence than a condition assessment completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as a part of this research, including these plan-view plots, are presented in this paper. Furthermore, in order to answer the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods.
This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.

Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks

Procedia PDF Downloads 154
14574 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration

Authors: Danny Barash

Abstract:

Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for the purpose of riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure consideration, with the sequence treated as a constraint.
However, the method cannot be used genome-wide because each folding prediction by energy minimization in the moving window is computationally expensive, so only the vicinity of genes of interest can be scanned. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding, which considers RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program RNAfbinv in particular, is capable of finding attractive candidates that are missed by Infernal and other standard methods used for riboswitch detection. We demonstrate attractive candidates found both by the moving-window approach and by the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.

Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods

Procedia PDF Downloads 234
14573 The Effectiveness of Cathodic Protection on Microbiologically Influenced Corrosion Control

Authors: S. Taghavi Kalajahi, A. Koerdt, T. Lund Skovhus

Abstract:

Cathodic protection (CP) is an electrochemical method to control and manage corrosion in different industries and environments. CP is widely used, especially in buried and submerged environments, both of which are susceptible to microbiologically influenced corrosion (MIC). Most standards recommend performing CP at -800 mV; however, if the MIC threat is high or sulfate-reducing bacteria (SRB) are present, the recommendation is to use more negative potentials for adequate protection of the metal. Owing to the lack of knowledge and research on the effectiveness of CP against MIC, to the authors’ best knowledge there is no guidance on how to assess the MIC threat or on how much more negative the potential should be to achieve adequate protection without overprotection (which carries a hydrogen embrittlement risk). Recently, the development and falling price of molecular microbial methods (MMMs) have opened the door for more effective investigations of corrosion in the presence of microorganisms, alongside other electrochemical methods and surface analysis. In this work, using MMMs, the gene expression of SRB biofilms under different CP potentials will be investigated. Specific genes, such as those involved in pH buffering, metal oxidation, etc., will be compared at different potentials, enabling determination of the precise potential that effectively protects the metal from SRB. This work is the initial step towards standardizing the recommended potential under MIC conditions, resulting in better protection for infrastructure.

Keywords: cathodic protection, microbiologically influenced corrosion, molecular microbial methods, sulfate reducing bacteria

Procedia PDF Downloads 92
14572 Assessing Significance of Correlation with Binomial Distribution

Authors: Vijay Kumar Singh, Pooja Kushwaha, Prabhat Ranjan, Krishna Kumar Ojha, Jitendra Kumar

Abstract:

Present-day high-throughput genomic technologies, NGS and microarrays, are producing large volumes of data that require improved analysis methods to make sense of them. The correlation between genes and samples has regularly been used to gain insight into many biological phenomena including, but not limited to, co-expression/co-regulation, gene regulatory networks, clustering, and pattern identification. However, the presence of outliers and violation of the assumptions underlying Pearson correlation are frequent and may distort the actual correlation between genes, leading to spurious conclusions. Here, we report a method to measure the strength of association between genes. The method assumes that the expression values of a gene are Bernoulli random variables whose outcome depends on the sample being probed. The method considers two genes uncorrelated if the number of samples with the same outcome for both genes (Ns) equals the expected number (Es). The extent of correlation depends on how far Ns deviates from Es. The method does not assume normality of the parent population, is fairly unaffected by the presence of outliers, can be applied to qualitative data, and uses the binomial distribution to assess the significance of association. At this stage, we do not claim superiority of the method over existing correlation methods; rather, it offers another way of calculating correlation in addition to them. The method uses the binomial distribution, which has not been used for this purpose until now, to assess the significance of association between two variables. We are evaluating the performance of our method on NGS/microarray data, which is noisy and pierced by outliers, to see if it can differentiate between spurious and actual correlation. While working with the method, it has not escaped our notice that it could also be generalized to measure the association of more than two variables, which has proven difficult with existing methods.
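Under the null hypothesis of no association, the number of matching outcomes Ns follows a binomial distribution, so the significance of its deviation from Es can be assessed with an exact two-sided binomial test. A stdlib sketch (the null agreement probability p is fixed at 0.5 here for illustration; in practice it would be estimated from the marginal outcome frequencies, and this is our reconstruction of the idea, not the authors' exact procedure):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass: P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def agreement_pvalue(ns, n, p=0.5):
    """Exact two-sided test of H0: Ns ~ Binomial(n, p), i.e. the two genes
    agree no more and no less often than chance predicts (Es = n * p).
    Sums the probability of every outcome at most as likely as the observed one."""
    observed = binom_pmf(ns, n, p)
    return sum(binom_pmf(k, n, p) for k in range(n + 1)
               if binom_pmf(k, n, p) <= observed + 1e-12)

print(agreement_pvalue(18, 20))  # agreement in 18 of 20 samples: p ≈ 0.0004
print(agreement_pvalue(10, 20))  # exactly the expected Es = 10: p = 1.0
```

Because the test is exact rather than based on a normal approximation, it behaves sensibly even for the small sample counts typical of pilot expression studies.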

Keywords: binomial distribution, correlation, microarray, outliers, transcriptome

Procedia PDF Downloads 415
14571 Modern Methods of Construction (MMC): The Potentials and Challenges of Using Prefabrication Technology for Building Modern Houses in Afghanistan

Authors: Latif Karimi, Yasuhide Mochida

Abstract:

The purpose of this paper is to study Modern Methods of Construction (MMC), specifically prefabrication technology, and to check the applicability, suitability, and benefits of this construction technique over conventional methods for building new houses in Afghanistan. The construction industry and house-building sector are key contributors to Afghanistan’s economy. However, this sector is challenged by a lack of innovation and by the severe environmental impacts of the huge amount of construction waste from building, demolition, and renovation activities. This paper studies prefabrication technology, a popular MMC that is becoming more common, improving in quality, and available in a variety of budgets. Several feasibility studies worldwide have revealed that this method is the way forward for improving construction industry performance, as it has been proven to reduce construction time and construction waste and to improve the environmental performance of construction processes. In addition, this study emphasizes 'sustainability' in house building, since it is a common challenge in housing construction projects on a global scale. This challenge becomes more severe in under-developed countries like Afghanistan, because most houses are built in the absence of a serious quality control mechanism and with disregard for the basic requirements of sustainable houses: well-being, cost-effectiveness, minimization and prevention of waste production during construction and use, and limitation of severe environmental impacts in view of a life cycle assessment. Methodology: A literature review and study of the conventional practices of building houses in urban areas of Afghanistan. A survey is also being completed to study the potentials and challenges of using prefabrication technology for building modern houses in cities across the country. A residential housing project is selected as a case study to determine the drawbacks of current construction methods vs. the prefabrication technique for building a new house. Originality: There is little previous research available on MMC considering its specific impacts on sustainability in house-building practices. This study will be of interest to a broad range of people, including planners, construction managers, builders, and house owners.

Keywords: modern methods of construction (MMC), prefabrication, prefab houses, sustainable construction, modern houses

Procedia PDF Downloads 243
14570 Challenges of Implementing Zero Trust Security Based on NIST SP 800-207

Authors: Mazhar Hamayun

Abstract:

Organizations need to take a holistic approach to their Zero Trust strategic and tactical security needs. This includes using a framework-agnostic model that ensures all enterprise resources are accessed securely, regardless of their location. This can be achieved by implementing a security posture, monitoring that posture, and adjusting it through the Identify, Protect, Detect, Respond, and Recover functions. The target audience of this document includes those involved in the management and operational functions of risk, information security, and information technology. This audience consists of the chief information security officer, chief information officer, chief technology officer, and those leading digital transformation initiatives where Zero Trust methods can help protect an organization’s data assets.

Keywords: ZTNA, zero trust architecture, microsegmentation, NIST SP 800-207

Procedia PDF Downloads 87
14569 Experimental Studies on the Effect of Premixing Methods in Anaerobic Digestor with Corn Stover

Authors: M. Sagarika, M. Chandra Sekhar

Abstract:

Agricultural residues are produced in large quantities in India and constitute an abundant but underutilized source of renewable biomass in agriculture. In India, the amount of crop residues available is estimated at approximately 686 million tons. Anaerobic digestion is a promising option to utilize the surplus agricultural residues, producing biogas and digestate. Biogas is mainly methane (CH4), which can be utilized as an energy source in replacement of fossil fuels such as natural gas and oil; digestate, on the other hand, contains high amounts of nutrients and can be employed as fertilizer. Solid-state anaerobic digestion (total solids ≥ 15%) is suitable for agricultural residues, as it reduces problems such as stratification and floating that occur in liquid anaerobic digestion (total solids < 15%). The major concern in solid-state anaerobic digestion is the low mass transfer between feedstock and inoculum, which results in low performance. To resolve this low mass transfer issue, effective mixing of feedstock and inoculum is required. Mechanical mixing with a stirrer during the digestion process can be done, but stirring feedstock with a high solids percentage and high viscosity is difficult. Complete premixing of feedstock and inoculum is an alternative method that is usual in lab-scale studies but may not be affordable in large-scale digesters due to its high energy demand. Developing partial premixing methods may reduce this problem. The current study aims to improve the performance of solid-state anaerobic digestion of corn stover at feedstock-to-inoculum ratios of 3 and 5 by applying partial premixing methods, and to compare the complete premixing method with two partial premixing methods: two alternating layers of feedstock and inoculum, and three alternating layers of feedstock and inoculum with a higher inoculum proportion in the top layers. The experimental studies showed that the partial premixing method with three alternating layers of feedstock and inoculum yielded good methane production.

Keywords: anaerobic digestion, premixing methods, methane yield, corn stover, volatile solids

Procedia PDF Downloads 234
14568 Creativity in Industrial Design as an Instrument for the Achievement of the Proper and Necessary Balance between Intuition and Reason, Design and Science

Authors: Juan Carlos Quiñones

Abstract:

Much time has passed since industrial design first emerged alongside mass production. Industrial design applies methods from different disciplines with a strategic approach to place humans at the center of the design process and to deliver solutions that are meaningful and desirable for users and for the market. This analysis summarizes some of the discussions that occurred at the 6th International Forum of Design as a Process, June 2016, Valencia. The aim of this conference was to find new linkages between systems and design interactions in order to define their social consequences. Through knowledge management, we are able to transform the intangible by using design as a transforming function capable of converting intangible knowledge into tangible solutions (i.e., products and services demanded by society). Industrial designers use knowledge consciously as a starting point for the ideation of the product. The handling of the intangible becomes more and more relevant over time as different methods emerge for knowledge extraction and subsequent organization. The different methodologies applied to the industrial design discipline, and the evolution of the discipline's own methods, underpin cultural and scientific background knowledge as a starting point of thought in response to needs, the whole coming through the instrument of creativity for the achievement of the proper and necessary balance between intuition and reason, design and science.

Keywords: creative process, creativity, industrial design, intangible

Procedia PDF Downloads 287
14567 Active Cyber Defense within the Concept of NATO’s Protection of Critical Infrastructures

Authors: Serkan Yağlı, Selçuk Dal

Abstract:

Cyber-attacks pose a serious threat to all states. Therefore, states constantly seek various methods to counter those threats. In addition, recent changes in the nature of cyber-attacks and their more complicated methods have created a new concept: active cyber defence (ACD). This article first tries to answer why ACD is important to NATO and to establish NATO's viewpoint on ACD. Secondly, infrastructure protection is essential to cyber defence, and protecting critical infrastructure by ACD means is even more important. It is assumed that by implementing active cyber defence, NATO may not only be able to repel attacks but also act as a deterrent. Hence, the use of ACD has a direct positive effect on the future of all international organizations, including NATO.

Keywords: active cyber defence, advanced persistent threat, critical infrastructure, NATO

Procedia PDF Downloads 244
14566 Sterilization of Potato Explants for in vitro Propagation

Authors: D. R. Masvodza, G. Coetzer, E. van der Watt

Abstract:

Microorganisms usually have a prolific growth nature and may cause major problems in in vitro cultures. For in vitro propagation to be successful, explants need to be sterile. In order to determine the best sterilization method for potato explants cv. Amerthyst, five sterilization methods were applied separately to 24 shoots. The first sterilization method was the use of 20% sodium hypochlorite with 1 ml Tween 20 for 15 minutes. The second, third and fourth sterilization methods were the immersion of explants in 70% ethanol in a beaker for 30 seconds, 1 minute or 2 minutes, respectively, followed by 1% sodium hypochlorite with 1 ml Tween 20 for 5 minutes. For the control treatment, no chemicals were used. Finally, all the explants were rinsed three times with autoclaved distilled water and trimmed to 1-2 cm. Explants were then cultured on MS medium with 0.01 mg L-1 NAA and 0.1 mg L-1 GA3, supplemented with 2 mg L-1 D-calcium pantothenate. The trial was laid out as a completely randomized design, and each treatment combination was replicated 24 times. At 7, 14 and 21 days after culture, data on explant color, survival, and presence or absence of contamination were recorded. The best results were obtained with sterilization method 1, i.e. 20% sodium hypochlorite with 1 ml Tween 20 for 15 minutes. Method 2 was comparable to method 1 when explants were cultured in glass vessels. Explants in glass vessels were significantly less contaminated than explants in polypropylene vessels. Therefore, at times, ideal sterilization methods should be coupled with ideal culture conditions, such as good-quality culture vessels, rather than the addition of more stringent sterilants.

Keywords: culture containers, explants, sodium hypochlorite, sterilization

Procedia PDF Downloads 332
14565 Comparisons between Student Learning Achievements and Their Problem Solving Skills on the Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method

Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon

Abstract:

The aim of this study is to compare two instructional design models, the Think-Pair-Share Model and the conventional (5E Inquiry Model) learning process, for enhancing students' learning achievements and problem-solving skills on the stoichiometry issue. The sample consisted of 80 students in 2 classes at the 11th grade level at Chaturaphak Phiman Ratchadaphisek School, with different learning outcomes in chemistry, selected with the cluster random sampling technique. The 40-student experimental group was taught with the Think-Pair-Share process and the 40-student control group with the conventional (5E Inquiry Model) method. Five instruments were used: the 5-lesson instructional plans for the Think-Pair-Share and STEM education methods; students' learning achievements and problem-solving skills assessed with pretest and posttest techniques; and comparisons of the outcomes of the Think-Pair-Share Model (TPSM) and STEM education methods. Statistically significant differences between the posttest and pretest scores of the whole chemistry classes were found with the paired t-test and F-test. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem-solving skills were also found. The results reveal that students in the different groups perceive their learning achievements and problem-solving skills differently, which can guide practical improvements in chemistry classrooms and assist teachers in implementing effective instructional approaches.
The mean achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 process efficiency values were 82.56/80.44 and 83.02/81.65, which are higher than the 80/80 standard criterion level. The predictive efficiency (R2) values indicate that 61% and 67% of the variance in learning achievements on the chemistry posttest, and 63% and 67% of the variance in students' problem-solving skills on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM education instructional methods.
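The pretest-posttest comparison described above relies on the paired t-test. As an illustrative sketch (with made-up scores, not the study's data), the statistic can be computed as:

```python
import math

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for pre/post scores."""
    d = [b - a for a, b in zip(pre, post)]          # per-student gains
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of gains
    return mean / math.sqrt(var / n), n - 1          # (t, df)

t, df = paired_t([10, 12, 11, 13], [14, 15, 13, 16])
```

For these four hypothetical students, t is about 7.35 with 3 degrees of freedom, which would be compared against a t table at the chosen significance level.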

Keywords: comparisons, students’ learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue

Procedia PDF Downloads 249
14564 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l) have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving the linear equation, but its convergence behavior may exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed. It may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
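As background for the methods compared above, a minimal textbook Bi-CGSTAB sketch (van der Vorst's formulation, dense NumPy arithmetic, no preconditioning or communication hiding; in practice A would be sparse) might look like:

```python
import numpy as np

def bicgstab(A, b, tol=1e-8, max_iter=1000):
    """Textbook Bi-CGSTAB for Ax = b, starting from x = 0."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    r_hat = r.copy()                  # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros(n)
    p = np.zeros(n)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r           # first of the two inner products needing
        beta = (rho_new / rho) * (alpha / omega)  # a global reduction in parallel
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v             # half-step residual
        if np.linalg.norm(s) < tol * b_norm:
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)     # stabilization parameter
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * b_norm:
            break
    return x
```

Each iteration contains several inner products; in a parallel setting every one of them is a global reduction, which is exactly the synchronization cost the parallel variants discussed in the paper try to reduce or hide.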

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 164
14563 A Framework for Auditing Multilevel Models Using Explainability Methods

Authors: Debarati Bhaumik, Diptish Dey

Abstract:

Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions & statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission’s proposed Regulation on Artificial Intelligence.
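The fidelity questions raised above are checkable in closed form for simple models: for a model with a fixed baseline, exact Shapley values can be enumerated directly, giving a ground truth against which an approximate explainer such as SHAP or LIME could be scored (e.g., for a PoCE-style KPI). A toy brute-force sketch, exponential in the number of features and therefore illustrative only:

```python
from itertools import combinations
import math

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all coalitions.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))          # Shapley coalition weight
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without))
    return phi
```

For a linear model f(x) = sum_j w_j x_j with a zero baseline, the exact attributions reduce to w_j * x_j, so any explainer's ranking and magnitudes can be verified against them.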

Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics

Procedia PDF Downloads 94
14562 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Flood and drought are the two main hydrological extremes that affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and across the world. Floods cause destruction of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in water demand due to population, industrial and agricultural growth has shown that, although water is a renewable resource, it cannot be taken for granted. Its use must be optimized according to circumstances and conditions, and it needs to be harnessed, which can be done by the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact need to be predicted. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximum use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which floods can be estimated: generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the Index Flood Method, Natural Environment Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of these methods can be considered universal for every situation and location. The Narmada basin is located in Central India. It is drained by many tributaries, most of which are ungauged. Therefore, it is very difficult to estimate floods on these tributaries and in the main river. In this study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used to determine regional flood frequency relationships.
The annual peak flood data of 20 gauging sites in the Narmada Basin are used in the present study to derive the regional flood relationships. The homogeneity of the considered sites is determined using the Index Flood Method. The flood relationships obtained by both methods are compared, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area.
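As a rough illustration of the regression step, regional flood formulas are often fitted as a power law Q = a * A^b relating peak discharge to catchment area; a least-squares fit in log space (synthetic numbers, not the Narmada data) might look like:

```python
import math

def fit_power_law(areas, peaks):
    """Fit Q = a * A**b by ordinary least squares on log-transformed data."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(q) for q in peaks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))     # slope = exponent b
    a = math.exp(my - b * mx)                  # intercept back-transformed
    return a, b

# hypothetical catchment areas (km^2) and annual peak floods (m^3/s)
areas = [100.0, 500.0, 1000.0, 5000.0]
peaks = [10.0 * a ** 0.8 for a in areas]
a, b = fit_power_law(areas, peaks)
```

A multiple-regression study would extend this to several catchment characteristics (area, slope, rainfall), and the ANN replaces the fixed power-law form with a learned nonlinear mapping.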

Keywords: artificial neural network, index flood method, multilayer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 419
14561 Stress Corrosion Cracking, Parameters Affecting It, Problems Caused by It and Suggested Methods for Treatment: State of the Art

Authors: Adnan Zaid

Abstract:

Stress corrosion cracking (SCC) may be defined as the degradation of the mechanical properties of a material under the combined action of a tensile stress and a corrosive environment to which the material is susceptible. It is a harmful phenomenon which might cause catastrophic fracture without prior warning. In this paper, the stress corrosion cracking (SCC) process, the parameters affecting it, and the different damages caused by it are presented and discussed. The utilization of shot peening as a means of enhancing the resistance of materials to SCC is given and discussed. Finally, a method for improving the resistance of materials to SCC by refining their grain structure with certain refining elements prior to usage is suggested.

Keywords: stress corrosion cracking, parameters, damages, treatment methods

Procedia PDF Downloads 330
14560 Studies on the Proximate Composition and Functional Properties of Extracted Cocoyam Starch Flour

Authors: Adebola Ajayi, Francis B. Aiyeleye, Olakunke M. Makanjuola, Olalekan J. Adebowale

Abstract:

Cocoyam, a generic term for both Xanthosoma and Colocasia, is a traditional staple root crop in many developing countries in Africa, Asia and the Pacific. It is mostly cultivated as a food crop and is very rich in vitamin B6, magnesium and dietary fiber. Cocoyam starch is easily digested and often used for baby food. Drying is a method of food preservation that removes enough moisture from the food so that bacteria, yeasts and molds cannot grow; it is one of the oldest methods of preserving food. The effects of drying methods on the proximate composition and functional properties of extracted cocoyam starch flour were studied. Freshly harvested cocoyam cultivars at the mature stage were washed with potable water, peeled, washed and grated. The starch in the grated cocoyam was extracted and dried using sun drying, oven and cabinet dryers. The extracted starch was milled into flour using an Apex mill, packed and sealed in low-density polyethylene (LDPE) film of 75 micron thickness with a Nylon sealing machine QN5-3200HI, and kept for three months at ambient temperature before analysis. The results showed that the moisture content, ash, crude fiber, fat, protein and carbohydrate ranged from 6.28% to 12.8%, 2.32% to 3.2%, 0.89% to 2.24%, 1.89% to 2.91%, 7.30% to 10.2% and 69% to 83%, respectively. The functional properties of the cocoyam starch flour ranged from 2.65 ml/g to 4.84 ml/g for water absorption capacity, 1.95 ml/g to 3.12 ml/g for oil absorption capacity, 0.66 ml/g to 7.82 ml/g for bulk density and 3.82 ml/g to 5.30 ml/g for swelling capacity. No significant difference (p ≥ 0.05) was obtained across the various drying methods used. The drying methods extended the shelf-life of the extracted cocoyam starch flour.

Keywords: cocoyam, extraction, oven dryer, cabinet dryer

Procedia PDF Downloads 295
14559 Patents as Indicators of Innovative Environment

Authors: S. Karklina, I. Erins

Abstract:

The main problem is the very low innovation performance in Latvia. Since Latvia is a Member State of the European Union, it also has to fulfill the set targets and improve its innovation results. Universities are among the main performers providing the innovative capacity of a country. University, industry and government need to cooperate to achieve the best results. Intellectual property is one of the indicators used to determine the innovation level in a country or organization, and patents are one of the characteristics of intellectual property. The objective of the article is to determine the indicators characterizing the innovative environment in Latvia and the influence of the development of universities on them. The methods used in the article to achieve these objectives are quantitative and qualitative analysis of the literature, statistical data analysis, and graphical analysis methods.

Keywords: HEI, innovations, Latvia, patents

Procedia PDF Downloads 315
14558 The Effect of the Acquisition and Reconstruction Parameters in Quality of Spect Tomographic Images with Attenuation and Scatter Correction

Authors: N. Boutaghane, F. Z. Tounsi

Abstract:

Many physical and technological factors degrade SPECT images, both qualitatively and quantitatively. Consequently, technological advances in detection and collimation alone are not enough to improve the performance of a tomographic gamma camera; reconstruction methods and correction of the tomographic images matter as well. One must first master the choice of the various acquisition and reconstruction parameters accessible in clinical cases, and use attenuation and scatter correction methods, in order to optimize image quality while minimizing the dose received by the patient. In this work, a qualitative and quantitative evaluation of tomographic images is performed based on the acquisition parameters (counts per projection) and reconstruction parameters (filter type and associated cutoff frequency). In addition, methods for correcting physical effects such as attenuation and scatter, which degrade the image quality and prevent accurate quantification of the reconstructed slices, are also presented. Two attenuation and scatter correction approaches are implemented: attenuation correction by the Chang method with a filtered back projection reconstruction algorithm, and scatter correction by the Jaszczak subtraction method. Our results serve as recommendations that help determine the origin of the different artifacts observed both in quality control tests and in clinical images.
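To illustrate the role of the reconstruction filter and its cutoff frequency, here is a sketch of the filtering step of filtered back projection, combining the ramp filter with a Butterworth low-pass window (illustrative parameters, not tied to any particular camera):

```python
import numpy as np

def filter_projection(proj, cutoff, order=5):
    """One FBP filtering step: ramp filter windowed by a Butterworth low-pass.

    proj   -- 1-D projection profile (one row of the sinogram)
    cutoff -- Butterworth cutoff in cycles/bin (Nyquist is 0.5)
    """
    freqs = np.abs(np.fft.rfftfreq(len(proj)))           # 0 .. 0.5 cycles/bin
    butter = 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))
    # ramp amplifies high frequencies; Butterworth suppresses the noisiest ones
    return np.fft.irfft(np.fft.rfft(proj) * freqs * butter, n=len(proj))
```

Lowering the cutoff smooths noise at the expense of spatial resolution, which is exactly the trade-off the evaluation above quantifies against counts per projection.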

Keywords: attenuation, scatter, reconstruction filter, image quality, acquisition and reconstruction parameters, SPECT

Procedia PDF Downloads 453
14557 Aerodynamic Design of a UAV and Stability Analysis with a Genetic Algorithm Optimization Method

Authors: Saul A. Torres Z., Eduardo Liceaga C., Alfredo Arias M.

Abstract:

We seek to develop a UAV for agricultural spraying at a maximum altitude of 5000 meters above sea level, with a payload of 100 liters of fumigant. For the aerodynamic design of the aircraft, computational tools such as the "Vortex Lattice Athena" software, "MATLAB", "ANSYS FLUENT" and the "XFoil" package, among others, are used. Structured programming and an exhaustive analysis of optimization and search methods are also employed. The results have a very low margin of error, and the multi-objective formulation can be helpful for future developments. We also developed a method for stability analysis (lateral-directional and longitudinal).
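As an illustration of the optimization machinery, here is a minimal genetic algorithm sketch (truncation selection, blend crossover, Gaussian mutation) minimizing a stand-in objective; a real run would instead evaluate an aerodynamic cost computed from the vortex-lattice model:

```python
import random

def genetic_minimize(obj, bounds, pop_size=40, gens=80, mut_rate=0.2, seed=0):
    """Minimize obj over box bounds with a simple elitist genetic algorithm."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    n_elite = max(2, pop_size // 5)
    for _ in range(gens):
        pop.sort(key=obj)
        elite = pop[:n_elite]                              # truncation selection
        children = []
        while n_elite + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # blend crossover
            for i, (lo, hi) in enumerate(bounds):          # Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children                             # elitism: best survive
    return min(pop, key=obj)
```

For a multi-objective design (e.g., drag versus structural weight), the scalar objective would be replaced by Pareto ranking, but the selection-crossover-mutation loop stays the same.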

Keywords: aerodynamic design, optimization, genetic algorithm, multi-objective problem, longitudinal stability, lateral-directional stability

Procedia PDF Downloads 594
14556 Evaluating the Performance of Color Constancy Algorithm

Authors: Damanjit Kaur, Avani Bhatia

Abstract:

Color constancy is significant for human vision since color is a pictorial cue that helps in solving different vision tasks such as tracking, object recognition, or categorization. Therefore, several computational methods have tried to simulate human color constancy abilities to stabilize machine color representations. Two different kinds of methods have been used, i.e., normalization and constancy. While color normalization creates a new representation of the image by canceling illuminant effects, color constancy directly estimates the color of the illuminant in order to map the image colors to a canonical version. Color constancy is the capability to determine the colors of objects independent of the color of the light source. This research work studies the most well-known color constancy algorithms, such as white patch and gray world.
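The two baseline algorithms named above admit very short sketches. Assuming an RGB image array, gray world scales each channel so that all channel means become equal (the "average scene is gray" assumption), while white patch maps each channel's maximum to a reference white:

```python
import numpy as np

def gray_world(img):
    """Scale each channel so all channel means equal the global mean."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0, 255)

def white_patch(img, white=255.0):
    """Scale each channel so its maximum maps to the reference white."""
    img = img.astype(float)
    return np.clip(img * (white / img.reshape(-1, 3).max(axis=0)), 0, 255)
```

On an image with a reddish cast (say channel means of 100, 50, 25), gray world equalizes the three means, removing the estimated illuminant bias without knowing the light source.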

Keywords: color constancy, gray world, white patch, modified white patch

Procedia PDF Downloads 319
14555 Variable Selection in a Data Envelopment Analysis Model by Multiple Proportions Comparison

Authors: Jirawan Jitthavech, Vichit Lorchirachoonkul

Abstract:

A statistical procedure using a multiple comparisons test for proportions is proposed for variable selection in a data envelopment analysis (DEA) model. The test statistic in the multiple comparisons is the proportion of efficient decision making units (DMUs) in a DEA model. Three methods of multiple comparisons testing for proportions are used in the proposed statistical procedure of iteratively eliminating variables in a backward manner: multiple Z tests with Bonferroni correction, multiple tests in a 2×c crosstabulation, and the Marascuilo procedure. Two simulated populations of moderately and weakly correlated variables are used to compare the results of the statistical procedure under the three multiple-comparisons methods with the hypothesis testing of the efficiency contribution measure. From the simulation results, it can be concluded that the proposed statistical procedure using multiple Z tests for proportions with Bonferroni correction clearly outperforms the procedure using the remaining two multiple-comparisons methods as well as the hypothesis testing of the efficiency contribution measure.
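A sketch of the first multiple-comparisons variant, a two-proportion Z test with Bonferroni correction, is shown below (illustrative only; in the study the proportions being compared are proportions of efficient DMUs under different variable subsets):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided Z test for equality of two proportions (pooled variance)."""
    p1, p2, pool = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def bonferroni_reject(p_values, alpha=0.05):
    """Reject H0_i iff p_i < alpha / m, controlling the family-wise error rate."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

The Bonferroni step divides the significance level by the number of simultaneous tests, which is what makes the backward-elimination loop statistically defensible when many variables are screened at once.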

Keywords: Bonferroni correction, efficient DMUs, Marascuilo procedure, Pastor et al. method, 2×c crosstabulation

Procedia PDF Downloads 310
14554 Field Scale Simulation Study of Miscible Water Alternating CO2 Injection Process in Fractured Reservoirs

Authors: Hooman Fallah

Abstract:

A vast proportion of the world's oil reserves lies in naturally fractured reservoirs. There are different methods for increasing recovery from fractured reservoirs, and miscible water-alternating-CO2 injection is a good choice among them. In this method, water and CO2 slugs are injected alternately into the reservoir as a miscible agent. This paper studies a water injection scenario and miscible injection of water and CO2 in a two-dimensional, inhomogeneous fractured reservoir. The results show that miscible water-alternating-CO2 gas injection leads to a 3.95% increase in final oil recovery and a 3.89% decrease in total water production compared to the water injection scenario.

Keywords: simulation study, CO2, water alternating gas injection, fractured reservoirs

Procedia PDF Downloads 291
14553 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, based on which a web application for structural analysis and reliability assessment of systems was created. The reliability assessment process included the following stages, which are reflected in the application: 1) Construction of a graphical scheme of the structural reliability of the system; 2) Transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) Description of the system operability condition with a logical function in the form of disjunctive normal form (DNF); 4) Transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) Replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) Calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
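Stage 5, quantifying reliability from the logical model, can be illustrated on a tiny scale: given the minimal paths of successful functioning and per-element reliabilities, exact system reliability follows by enumerating element states (a brute-force stand-in for the orthogonalization algorithm, which exists precisely to avoid this exponential enumeration):

```python
from itertools import product

def system_reliability(n, min_paths, p):
    """Exact reliability: the system works when every element of at least
    one minimal path works; sum the probabilities of all working states."""
    total = 0.0
    for state in product((0, 1), repeat=n):          # all 2**n element states
        if any(all(state[i] for i in path) for path in min_paths):
            prob = 1.0
            for i, up in enumerate(state):
                prob *= p[i] if up else 1.0 - p[i]   # independent elements
            total += prob
    return total
```

For two redundant (parallel) elements with reliability 0.9 each, the minimal paths are {1} and {2}, and the system reliability is 1 - 0.1² = 0.99; a series pair gives 0.9² = 0.81. Orthogonalizing the DNF yields the same numbers as a polynomial in the p_i without enumerating states.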

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element

Procedia PDF Downloads 73