Search results for: computational mathematics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2443

733 Chaos Fuzzy Genetic Algorithm

Authors: Mohammad Jalali Varnamkhasti

Abstract:

Genetic algorithms have been very successful in handling difficult optimization problems, but their fundamental weakness is premature convergence. This paper presents a new fuzzy genetic algorithm that uses chaotic values instead of random values in the genetic algorithm processes. The initial population is generated from chaotic sequences, and a new sexual selection scheme is proposed as the selection mechanism. The population is divided so that male and female chromosomes are selected in an alternating way, and the layout of the male and female chromosomes differs in each generation. A female chromosome is chosen from the female group by tournament selection. The male chromosome is then selected, in order of preference, by the maximum Hamming distance to the chosen female chromosome; by the highest fitness value (if more than one male chromosome attains the maximum Hamming distance); or at random. The crossover and mutation operators are selected by fuzzy logic controllers, and the crossover and mutation probabilities are varied on the basis of the phenotype and genotype characteristics of the chromosome population. Computational experiments are conducted on the proposed techniques, and the results are compared with other operators, heuristics, and local search algorithms commonly used for solving p-median problems in the literature.
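
A minimal Python sketch of the mate-selection step described above; the binary encoding, placeholder fitness function, and tournament size are illustrative assumptions, not details from the paper.

```python
import random

# Toy illustration of the sexual selection step: binary chromosomes, a made-up
# fitness function (one-max) and a tournament size of 3 are assumptions.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fitness(chrom):
    return sum(chrom)  # placeholder objective, not the p-median objective

def select_female(females, k=3):
    # tournament selection within the female group
    return max(random.sample(females, k), key=fitness)

def select_male(males, female):
    # preference order: 1) maximum Hamming distance to the chosen female,
    # 2) highest fitness among ties, 3) random choice among remaining ties
    dmax = max(hamming(m, female) for m in males)
    candidates = [m for m in males if hamming(m, female) == dmax]
    if len(candidates) == 1:
        return candidates[0]
    fmax = max(fitness(m) for m in candidates)
    return random.choice([m for m in candidates if fitness(m) == fmax])

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
females, males = pop[:10], pop[10:]
mother = select_female(females)
father = select_male(males, mother)
print("Hamming distance:", hamming(mother, father), "father fitness:", fitness(father))
```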

Keywords: genetic algorithm, fuzzy system, chaos, sexual selection

Procedia PDF Downloads 363
732 Prediction of Temperature Distribution during Drilling Process Using Artificial Neural Network

Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Afshin Karimzadeh Fard

Abstract:

Experimental and numerical study of the temperature distribution during the milling process is important for milling quality and tool life. In the present study, the milling cross-section temperature is determined using Artificial Neural Networks (ANN) from the temperatures at certain points of the workpiece, the coordinates of those points, and the rotational speed of the milling blade. First, a three-dimensional model of the workpiece is built, and Computational Heat Transfer (CHT) simulations are used to obtain the steady-state temperature at different nodes of the workpiece. The results obtained from CHT are used for training and testing the ANN. Using reverse engineering and setting the desired x, y, z coordinates and the rotational speed of the milling blade as input data, the milling surface temperature determined by the neural network is presented as the output. The temperatures at the desired points are obtained experimentally for different blade rotational speeds, the milling surface temperature is obtained by extrapolation, and a comparison is performed among the ANN predictions, the CHT results, and the experimental data. It is observed that the ANN code can be used efficiently to determine the temperature in a milling process.
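
A hedged sketch of the regression step: an ANN mapping (x, y, z, rotational speed) to temperature. Synthetic data stands in for the CHT nodal results, and the network size and the made-up temperature field are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for the CHT nodal results: inputs are (x, y, z, rpm).
X = rng.uniform([0, 0, 0, 500], [0.1, 0.1, 0.02, 3000], size=(2000, 4))
# Made-up temperature field decaying with distance from a hot corner, rising with rpm.
T = 300 + 0.05 * X[:, 3] * np.exp(-60 * np.linalg.norm(X[:, :3], axis=1))

X_tr, X_te, T_tr, T_te = train_test_split(X, T, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_tr, T_tr)
print("R^2 on held-out nodes:", round(ann.score(X_te, T_te), 3))
print("T at (x=0.02, y=0.02, z=0.01, 1500 rpm):",
      ann.predict([[0.02, 0.02, 0.01, 1500]]))
```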

Keywords: artificial neural networks, milling process, rotational speed, temperature

Procedia PDF Downloads 377
731 Risk Assessment of Radiation Hazard for a Typical WWER1000: Cancer Risk Analysis during a Hypothetical Accident

Authors: R. Gharari, N. Kojouri, R. Hosseini Aghdam, E. Alibeigi, B. Salmasian

Abstract:

In this research, the WWER1000/V446 (a Russian-type PWR) is chosen as the case study. It is assumed that the radioactive materials released into the environment exceed the allowable limit due to a complete failure of the ventilation system (reactor stack). The HOTSPOT and RASCAL computational codes are used and coupled with a program developed in MATLAB to evaluate the total effective dose equivalent (TEDE) and the cancer risk for various human organs according to the BEIR equations. In addition, the effects of the containment spray system and of climate conditions on the TEDE are investigated. According to the obtained results, there is an inverse correlation between the received dose and the wind speed: during stability class A weather, the TEDE for a wind speed of 2 m/s is larger than that for 14 m/s (2.168 and 0.444 mSv, respectively). The containment spray system can also reduce the amount of fission products released and hence the TEDE. Furthermore, the cancer risk is higher for women than for men, and higher for children than for adults. In addition, a specific emergency zoning plan is proposed. The results are promising in that the site of the WWER1000/V446 can be considered safe for the public in this situation.
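
The inverse dose-wind speed trend reported above can be illustrated with a textbook Gaussian plume dilution factor; this is only a hedged illustration of the qualitative behaviour, not the HOTSPOT/RASCAL calculation. The source term, dose conversion factor, and dispersion coefficients below are assumed values.

```python
import math

# Ground-level, centreline Gaussian plume concentration with ground reflection:
# chi = Q / (pi * u * sigma_y * sigma_z). All numbers below are assumptions.
Q = 1.0e12                 # released activity rate, Bq/s (assumed)
dose_factor = 5e-14        # Sv per (Bq*s/m^3) inhaled (assumed effective factor)
sigma_y, sigma_z = 220.0, 210.0   # assumed dispersion coefficients at the receptor
exposure = 3600.0          # exposure time, s

for u in (2.0, 14.0):
    chi = Q / (math.pi * u * sigma_y * sigma_z)   # Bq/m^3 on the centreline
    tede = chi * exposure * dose_factor * 1e3     # mSv
    print(f"u = {u:4.1f} m/s -> TEDE ~ {tede:.3f} mSv")
# The 2 m/s case gives about seven times the dose of the 14 m/s case, mirroring
# the inverse dependence of dose on wind speed reported in the abstract.
```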

Keywords: TEDE, total effective dose equivalent, RASCAL and HOTSPOT codes, BEIR equations, cancer risk

Procedia PDF Downloads 145
730 Long Waves Inundating through and around an Array of Circular Cylinders

Authors: Christian Klettner, Ian Eames, Tristan Robinson

Abstract:

Tsunamis are characterised by very long periods and can have devastating consequences when they inundate built-up coastal regions, as in the 2004 Indian Ocean and 2011 Tohoku tsunamis. This work aims to investigate the effect of these long waves on the flow through and around a group of buildings, which are abstracted as circular cylinders. The research approach combines experiments and numerical simulations. Large-scale experiments were carried out at HR Wallingford. The novelty of these experiments lies in (I) the number of bodies present (up to 64), (II) the long wavelength of the input waves (80 seconds), and (III) the width of the tank (4 m), which gives the unique opportunity to investigate three length scales, namely the diameter of a building, the diameter of the array, and the width of the tank. To complement the experiments, dam-break flow past the same arrays is investigated using three-dimensional numerical simulations in OpenFOAM. Dam-break flow was chosen because it is often used as a surrogate for a tsunami in previous research, because it provides well-defined initial conditions, and because high-quality experimental data are available for the case of a single cylinder. The focus of this work is to better understand the effect of the solid void fraction on the force and flow through and around the array. New qualitative and quantitative diagnostics are developed and tested to analyse the complex coupled interaction between the cylinders.
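
A minimal sketch of the solid fraction parameter highlighted above, for a circular array of circular cylinders; the cylinder counts and dimensions are assumptions for illustration only.

```python
import math

# Solid fraction of a cylinder array: phi = N * (pi d^2 / 4) / A_array.
# The counts and dimensions below are illustrative assumptions.
def solid_fraction(n_cylinders, d_cylinder, d_array):
    a_solid = n_cylinders * math.pi * d_cylinder**2 / 4.0
    a_array = math.pi * d_array**2 / 4.0
    return a_solid / a_array

for n in (4, 16, 64):
    print(n, "cylinders -> solid fraction",
          round(solid_fraction(n, d_cylinder=0.05, d_array=1.0), 3))
```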

Keywords: computational fluid dynamics, tsunami, forces, complex geometry

Procedia PDF Downloads 172
729 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification

Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang

Abstract:

One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal senior-level employees' system privileges in order to gain unauthorized access to confidential knowledge and valuable intellectual property. The malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we exploit a common behaviour of malware called "beaconing", in which infected hosts communicate with Command and Control (C2) servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections; we therefore focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges carry timestamps. Then, by applying four periodicity detection algorithms to each pair of internal-external communications, we check the timestamp sequences to identify beaconing activity. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities.
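
A hedged sketch of one simple periodicity check on a (host, domain) timestamp sequence: flag sequences whose inter-arrival gaps are nearly constant. This is a generic illustration, not one of the four detection algorithms used in the paper; the jitter threshold and minimum event count are assumptions.

```python
import statistics

def looks_like_beacon(timestamps, max_rel_jitter=0.1, min_events=5):
    """Flag a timestamp sequence whose inter-arrival gaps are nearly constant,
    i.e. periodic communication with only small time variations."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean_gap = statistics.mean(gaps)
    jitter = statistics.pstdev(gaps)
    return mean_gap > 0 and jitter / mean_gap < max_rel_jitter

# DNS queries from one internal host to one external domain (seconds)
beacon = [0, 300, 601, 899, 1202, 1500, 1799]   # ~5-minute check-ins
browsing = [0, 12, 250, 263, 900, 905, 2400]    # human-driven traffic
print(looks_like_beacon(beacon))    # True
print(looks_like_beacon(browsing))  # False
```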

Keywords: malware detection, network security, targeted attack, computational intelligence

Procedia PDF Downloads 236
728 A Simple Computational Method for the Gravitational and Seismic Soil-Structure-Interaction between New and Existent Buildings Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This is a numerical research work that addresses the design of new buildings in the 3D vicinity of existing buildings. With today's continuous development and congestion of urban centers, there is a big question about the influence of new buildings on an already existing neighbouring site. Thus, in this study we focus on how existing buildings may be affected by newly constructed buildings and on how far this influence really extends. Modeling the interaction between buildings is not simple anywhere in the world, and Romania is no exception. Unfortunately, designers most often perform neither the simplified nor the more advanced calculations that could determine how close to reality these 3D influences are. In much of the literature, building a "shield" (piles or diaphragm walls) is considered absolutely sufficient to stop the influence between buildings, and so the soil under the structure is often ignored in the calculation models. The main reason the soil is neglected in the analysis is the complexity of modeling the interaction between soil and structure. In this paper, based on a new simple but efficient methodology, we determine for a number of study cases the influence of soil-structure interaction on the behaviour of the structures, that is, the influence of a new building on an existing one. The study covers the additional settlement that may occur during the execution of the new works and after their completion. It also presents the internal force diagrams and the deflections in the soil for both the original case and the final stage. This is necessary to assess to what extent the new building is expected to impact the existing areas.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 268
727 Computational Studies of the Reactivity Descriptors and the Optoelectronic Properties on the Efficiency Free-Base- and Zn-Porphyrin-Sensitized Solar Cells

Authors: Soraya Abtouche, Zeyneb Ghoualem, Syrine Daoudi, Lina Ouldmohamed, Xavier Assfeld

Abstract:

This work reports density functional theory calculations of the optimized geometries, molecular reactivity, energy gaps, and thermodynamic properties of the free-base porphyrin (H2P) and its Zn(II)-metallated counterpart (ZnP), bearing one, two, or three carboxylic acid groups, using the hybrid functionals B3LYP, CAM-B3LYP, and wB97XD with the 6-31G(d,p) basis set. When donating groups are attached to the molecular dye, the bond lengths decrease slightly, which is important for the easy transfer of an electron from the donating to the accepting group. For all dyes, the analysis of the highest occupied molecular orbital and the lowest unoccupied molecular orbital indicates favourable electron injection into the semiconductor and subsequent dye regeneration by the electrolyte. The ionization potential increases with increasing conjugation; therefore, the dye bearing one carboxylic acid group has the highest ionization potential. The results show higher efficiencies for the cells sensitized with ZnP. These results are explained by taking into account, on the one hand, the electronic character of the metal ion, which acts as a mediator in the injection step, and, on the other hand, the number of anchoring groups through which the dye binds to the TiO2 surface.

Keywords: DSSC, porphyrin, TD-DFT, electronic properties, donor-acceptor groups

Procedia PDF Downloads 54
726 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face the analysis of very large data sets that are growing at a noticeable rate with today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework runs on many different hardware platforms. In this research, a Scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open-source software with large computational capacity on a high-performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN; it consists of a fast switch and Gigabit-Ethernet cards which connect four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster by using the MapReduce algorithm. The MapReduce algorithm divides the task into smaller tasks that are assigned to the network nodes; the algorithm then collects the results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of large amounts of data resulting from different applications. SLBD also provides high performance, high throughput, high availability, expandability, and cluster scalability.
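
The canonical word-count example illustrates the MapReduce split-and-collect pattern described above; the cluster's actual jobs are not described in the abstract, so this is a generic sketch of the map and reduce phases written as Hadoop Streaming-style Python scripts (the file names are assumptions).

```python
# mapper.py -- emit (word, 1) for each word read from stdin (Hadoop Streaming convention)
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

The reducer then sums the counts per key; Hadoop sorts the mapper output by key before it reaches the reducer.

```python
# reducer.py -- sum the counts for each word from the sorted mapper output
import sys

current, total = None, 0
for line in sys.stdin:
    if not line.strip():
        continue
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

The same pair can be tested locally without a cluster, e.g. `cat input.txt | python mapper.py | sort | python reducer.py`.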

Keywords: big data platforms, cloudera manager, Hadoop, MapReduce

Procedia PDF Downloads 335
725 Maximizing Bidirectional Green Waves for Major Road Axes

Authors: Christian Liebchen

Abstract:

Both from an environmental perspective and with respect to road traffic flow quality, planning so-called green waves along major road axes is a well-established goal for traffic engineers. For one-way road axes (e.g. the Avenues in Manhattan), this is a trivial downstream task. For bidirectional arterials, the well-known necessary condition for establishing a green wave in both directions is that the driving time between two subsequent crossings must be an integer multiple of half the cycle time of the signal programs at the nodes. In this paper, we propose an integer linear optimization model to establish fixed-time green waves in both directions that are as long and as wide as possible, even when the driving-time condition is not fulfilled. In particular, we consider an arterial whose nodes are equipped with separate left-turn signal groups. In our computational results, we show that scheduling left-turn phases before or after the straight phases can reduce waiting times along the arterial. Moreover, we show that there is always a solution with green waves in both directions that are as long and as wide as possible when absolute priority is put on just one direction. Compared to optimizing both directions together, establishing an ideal green wave in one direction can only provide suboptimal quality when prioritized parts of a green band (e.g., the first few seconds) are considered.
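
A minimal check of the necessary condition stated above, that the driving time between consecutive signals be an integer multiple of half the cycle time; the distances, speed, cycle time, and tolerance are assumptions, and this is not the paper's integer linear optimization model.

```python
# Check the necessary condition for a bidirectional green wave: driving time
# between consecutive signals ~ integer multiple of half the cycle time.
# Distances, speed, cycle time and tolerance below are illustrative assumptions.
cycle_time = 90.0                   # s
speed = 50 / 3.6                    # 50 km/h in m/s
distances = [625, 340, 705, 480]    # m between consecutive signalised crossings

half_cycle = cycle_time / 2.0
for i, d in enumerate(distances, start=1):
    t = d / speed
    multiple = t / half_cycle
    ok = abs(multiple - round(multiple)) < 0.05   # 5% tolerance (assumed)
    print(f"link {i}: travel time {t:5.1f} s = {multiple:.2f} x C/2 -> "
          f"{'supports' if ok else 'violates'} a two-way green wave")
```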

Keywords: traffic light coordination, synchronization, phase sequencing, green waves, integer programming

Procedia PDF Downloads 91
724 On the Design of a Secure Two-Party Authentication Scheme for Internet of Things Using Cancelable Biometrics and Physically Unclonable Functions

Authors: Behnam Zahednejad, Saeed Kosari

Abstract:

Widespread deployment of the Internet of Things (IoT) has raised security and privacy issues in this environment. Designing a secure two-factor authentication scheme between the user and the server is still a challenging task. In this paper, we focus on Cancelable Biometrics (CB) as an authentication factor in IoT. We show that a previous CB-based scheme fails to provide real two-factor security and Perfect Forward Secrecy (PFS) and suffers from database attacks and traceability of the user. We then propose an improved scheme based on CB and Physically Unclonable Functions (PUFs), which provides real two-factor security, PFS, user unlinkability, and resistance to database attacks. In addition, Key Compromise Impersonation (KCI) resilience is achieved in our scheme. We also prove the security of the proposed scheme formally, using both the Real-Or-Random (RoR) model and the ProVerif analysis tool. Regarding usability, we conducted a performance analysis and showed that our scheme has the lowest communication cost compared to the previous CB-based scheme, while its computational cost remains acceptable for the IoT environment.
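
A heavily hedged toy sketch of the two factors only (a salted, revocable transform of a biometric template and a simulated PUF challenge-response); it is not the authentication protocol proposed in the paper, and all names and the final key derivation are illustrative assumptions.

```python
import hashlib, hmac, os, secrets

# Toy illustration of the two factors only -- not the protocol from the paper.
# Cancelable biometric: a keyed, revocable transform of the raw template; if the
# stored value leaks, the user re-enrols with a fresh transform key.
def cancelable_template(biometric_bits: bytes, transform_key: bytes) -> bytes:
    return hmac.new(transform_key, biometric_bits, hashlib.sha256).digest()

# Simulated PUF: a keyed function of the challenge standing in for the
# device-unique physical response (a real PUF response is noisy and unclonable).
def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

biometric = b"iris-template-bits"
transform_key = os.urandom(16)       # revocable: discard and re-issue to "cancel"
device_secret = os.urandom(16)       # stands in for the physical structure

stored = cancelable_template(biometric, transform_key)   # value enrolled at the server
challenge = secrets.token_bytes(16)
resp = puf_response(device_secret, challenge)

# A session key could then bind both factors (sketch only, not the paper's key exchange):
session_key = hashlib.sha256(stored + resp + challenge).hexdigest()
print("session key (truncated):", session_key[:16], "...")
```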

Keywords: IoT, two-factor security, cancelable biometric, key compromise impersonation resilience, perfect forward secrecy, database attack, real-or-random model, ProVerif

Procedia PDF Downloads 76
723 Achieving High Renewable Energy Penetration in Western Australia Using Data Digitisation and Machine Learning

Authors: A. D. Tayal

Abstract:

The energy industry is undergoing significant disruption. This research outlines that, whilst challenging, this disruption is also an emerging opportunity for electricity utilities. One such opportunity is leveraging the developments in data analytics and machine learning. As the uptake of renewable energy technologies and complementary control systems increases, electricity grids will likely transform towards dense microgrids with high penetration of renewable generation sources, rich in network and customer data, and linked through intelligent, wireless communications. Data digitisation and analytics have already impacted numerous industries, and their influence on the energy sector is growing as computational capabilities increase to manage big data and as machines develop algorithms to solve the energy challenges of the future. The objective of this paper is to address how far the uptake of renewable technologies can go given the constraints of existing grid infrastructure, and to provide a qualitative assessment of how higher levels of renewable energy penetration can be facilitated by incorporating even broader technological advances in the fields of data analytics and machine learning. Western Australia is used as a contextualised case study, given its abundant and diverse renewable resources (solar, wind, biomass, and wave) and its isolated networks, which make a high penetration of renewables a feasible target for policy makers over the coming decades.

Keywords: data, innovation, renewable, solar

Procedia PDF Downloads 340
722 Advanced Driver Assistance System: Veibra

Authors: C. Fernanda da S. Sampaio, M. Gabriela Sadith Perez Paredes, V. Antonio de O. Martins

Abstract:

Today the transport sector is undergoing a revolution: with the rise of Advanced Driver Assistance Systems (ADAS), industry and society itself will undergo a major transformation. However, the technological development of these applications is a challenge that requires new techniques from machine learning and artificial intelligence. This study proposes the development of a vehicular perception system called Veibra, which consists of two front cameras for day/night viewing and an embedded device capable of running the YOLOv2 image processing algorithm at low computational cost. The strategic version for the market assists the driver on the road by detecting day/night objects, such as road signs, pedestrians, and animals, which are displayed on the screen of a phone or tablet through an application. The system is able to perform real-time driver detection and recognition, identifying muscle movements and the pupils to determine whether the driver is tired or inattentive, analysing the driver's characteristic changes and following the subtle movements of the whole face, and issuing alerts through beta waves to ensure the concentration and attention of the driver. The system will also be able to perform tracking and monitoring through GSM (Global System for Mobile Communications) technology and the cameras installed in the vehicle.

Keywords: advanced driver assistance systems, tracking, traffic signal detection, vehicle perception system

Procedia PDF Downloads 129
721 Development of a French to Yorùbá Machine Translation System

Authors: Benjamen Nathaniel, Eludiora Safiriyu Ijiyemi, Egume Oneme Lucky

Abstract:

A review of machine translation systems shows that a lot of computational work has been carried out to translate written or spoken texts from a source language into Yorùbá. However, there is no work on a French-to-Yorùbá machine translation system; hence, this study investigated the process involved in translating French into its Yorùbá equivalent, with the view to adopting a rule-based MT approach to build a machine translation framework from simple sentences administered through a questionnaire. Articles and relevant textbooks were reviewed, and key speakers of both languages were interviewed, to establish the processes involved in translating French simple sentences into their Yorùbá equivalents using home-domain terminologies. To achieve this, a model was formulated using phrase grammar structure, rewrite rules, parse trees, and automata-theory-based techniques, and it was designed and implemented with the Unified Modeling Language (UML) and the Python programming language. Analysing the results, it was observed that the machine translation system performed 18.45% above the experimental subject respondents and 2.7% below the linguistics expert when evaluated on word orthography, sentence syntax, and semantic correctness of the sentences. When compared with the Google machine translation system, the developed system performed better on lexicons of the target language.
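
A toy Python sketch of a rule-based transfer step for a single French simple-sentence pattern (SVO). The lexicon glosses are rough illustrative placeholders rather than validated Yorùbá translations, and the single rewrite rule is an assumption, not the grammar developed in the paper.

```python
# Toy rule-based transfer for one French simple-sentence pattern (SVO).
# The lexicon glosses are rough illustrative placeholders, not validated Yorùbá.
LEXICON = {
    "je": "mo", "tu": "o", "il": "ó",
    "aime": "fẹ́ràn", "vois": "rí", "mange": "jẹ",
    "riz": "ìrẹsì", "maison": "ilé", "eau": "omi",
}
ARTICLES = {"le", "la", "les", "un", "une"}  # dropped: no direct Yorùbá article

def translate_simple_sentence(french: str) -> str:
    # Rewrite rule S -> SUBJ V OBJ; both languages are treated as SVO here,
    # so transfer is word-for-word after deleting articles and lexicon lookup.
    tokens = french.lower().replace("j'", "je ").rstrip(".").split()
    content = [t for t in tokens if t not in ARTICLES]
    return " ".join(LEXICON.get(t, f"<{t}>") for t in content) + "."

print(translate_simple_sentence("J'aime le riz."))    # -> mo fẹ́ràn ìrẹsì.
print(translate_simple_sentence("Il mange le riz."))  # -> ó jẹ ìrẹsì.
```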

Keywords: machine translation (MT), rule-based, French language, Yorùbá language

Procedia PDF Downloads 40
720 A Mechanical Diagnosis Method Based on Vibration Fault Signal down-Sampling and the Improved One-Dimensional Convolutional Neural Network

Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui

Abstract:

Convolutional neural networks (CNN) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use CNNs for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network needs to perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent computation. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multiple connected layers generalize classification results better without cumbersome parameter adjustment. The effectiveness of the method is verified on signals from a centrifugal pump test bench, and the average test accuracy is above 98%. Compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method shows better performance.
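
A hedged sketch of a small-kernel 1D CNN with regularization placed before the fully connected layers, along the lines described above; the layer sizes, signal length, and number of fault classes are assumptions, not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Sketch of an improved 1D-CNN for vibration-signal classification.
# The signal length (1024), filter counts and four fault classes are assumptions.
def build_model(signal_len=1024, n_classes=4):
    return models.Sequential([
        layers.Input(shape=(signal_len, 1)),
        layers.Conv1D(16, kernel_size=3, activation="relu", padding="same"),  # small kernel
        layers.MaxPooling1D(2),
        layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dropout(0.5),                      # regularization before the FC layers
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```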

Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN

Procedia PDF Downloads 107
719 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming

Authors: William Chung

Abstract:

This paper studies the use of multiple subproblems in the Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends heavily on the number of decomposition steps: if the number of steps can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals sent from the subproblem to the master problem. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the original LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP is a trade-off between the number of approximate-LP subproblems formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.

Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems

Procedia PDF Downloads 138
718 Land Cover Remote Sensing Classification Advanced Neural Networks Supervised Learning

Authors: Eiman Kattan

Abstract:

This study aims to evaluate the impact of classifying labelled remote sensing images with a convolutional neural network (CNN) architecture, namely AlexNet, on different land cover scenarios based on two remotely sensed datasets, from the points of view of computational time and performance. A set of experiments was conducted to assess the effectiveness of the selected convolutional neural network using two implementation approaches, named fully trained and fine-tuned. For validation purposes, two publicly available remote sensing datasets with different land cover features, AID and RSSCN7, were used in the experiments. These datasets have a wide diversity of input data, number of classes, amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (Nvidia DIGITS) was employed in the experiments; it has shown efficiency in training, validation, and testing. As a result, the fully trained approach achieved only modest results on the two datasets, AID and RSSCN7, with 73.346% and 71.857% within 24 min 1 sec and 8 min 3 sec, respectively. However, a dramatic improvement of the classification performance was recorded with the fine-tuning approach, at 92.5% and 91% within 24 min 44 sec and 8 min 41 sec, respectively. This conclusion opens the opportunity for better classification performance in various applications, such as agriculture and crop remote sensing.
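
A hedged sketch of the fine-tuning approach: an ImageNet-pretrained AlexNet with frozen convolutional features and a replaced classifier head. The class count, dummy batch, and optimizer settings are assumptions for illustration; the study itself used the Nvidia DIGITS platform rather than this script.

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tuning sketch: keep the pretrained filters, retrain a new classifier head.
# The number of scene classes (30, roughly matching AID) is an assumption.
n_classes = 30
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

for p in model.features.parameters():
    p.requires_grad = False                       # freeze the convolutional features

model.classifier[6] = nn.Linear(4096, n_classes)  # replace the final FC layer

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One dummy step to show the training-loop shape (real data would come from a DataLoader).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, n_classes, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```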

Keywords: convolutional neural network, remote sensing, land cover, land use

Procedia PDF Downloads 344
717 A Flexible Bayesian State-Space Modelling for Population Dynamics of Wildlife and Livestock Populations

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Hans-Peter Piepho

Abstract:

We aim to model the dynamics of wildlife and pastoral livestock populations in order to understand their population changes, and hence to support wildlife conservation and human welfare. The study is motivated by age- and sex-structured population counts in different regions of the Serengeti-Mara during the period 1989-2003. Developing reliable and realistic models for the population dynamics of large herbivores can be a very complex and challenging exercise. However, the Bayesian statistical domain offers flexible computational methods that enable the development and efficient implementation of complex population dynamics models. In this work, we have used a novel Bayesian state-space model to analyse the dynamics of topi and hartebeest populations in the Serengeti-Mara ecosystem of East Africa. The state-space model involves survival probabilities of the animals, which in turn depend on factors such as monthly rainfall and habitat size that cause recent declines in the numbers of these herbivore populations and potentially threaten their future viability in the ecosystem. Our study shows that seasonal rainfall is the most important factor shaping the population size of the animals, and it indicates the age class most severely affected by changes in weather conditions.
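
A heavily hedged toy sketch of the Bayesian machinery only: a random-walk Metropolis sampler for a single survival probability in a simulated count series. It does not reproduce the authors' age-sex structured state-space model; the simulated data, known recruitment, and flat prior are all assumptions.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(42)

# Simulate a toy count series: N_{t+1} ~ Binomial(N_t, s) + Poisson recruits.
true_s, recruits = 0.85, 60
N = [1000]
for _ in range(14):
    N.append(rng.binomial(N[-1], true_s) + rng.poisson(recruits))
N = np.array(N)

def log_post(s):
    # Binomial survival likelihood with a flat prior on (0, 1); recruitment
    # is treated as known, so the survivor counts below are a crude approximation.
    if not 0 < s < 1:
        return -np.inf
    surv = np.clip(N[1:] - recruits, 0, N[:-1])
    return binom.logpmf(surv, N[:-1], s).sum()

# Random-walk Metropolis over the survival probability s.
samples, s = [], 0.5
lp = log_post(s)
for _ in range(5000):
    prop = s + rng.normal(0, 0.02)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        s, lp = prop, lp_prop
    samples.append(s)

post = np.array(samples[1000:])   # discard burn-in
print(f"posterior mean survival ~ {post.mean():.3f} (true value {true_s})")
```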

Keywords: bayesian state-space model, Markov Chain Monte Carlo, population dynamics, conservation

Procedia PDF Downloads 181
716 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation

Authors: Sai Hung Cheung, Zhe Shao

Abstract:

Recent earthquake-induced tsunamis in Padang (2004) and Tohoku (2011) brought huge losses of lives and property. Maintaining vertical evacuation systems is the most crucial strategy to effectively reduce casualties during a tsunami event. Thus, it is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advances in the computational simulation of tsunamis and in wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds while the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach based on a novel integration of the Subset Simulation algorithm and a recently proposed moving least squares response surface approach for stochastic sampling is proposed. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.

Keywords: response surface model, subset simulation, structural reliability, Tsunami risk

Procedia PDF Downloads 351
715 Towards Developing Social Assessment Tool for Siwan Ecolodge Case Study: Babenshal Ecolodge

Authors: Amr Ali Bayoumi, Ola Ali Bayoumi

Abstract:

The aim of this research is to enhance one of the main aspects, the social aspect, in the development of an eco-lodge in the Siwa oasis in the Egyptian Western Desert. According to credible weightings built in this research through formal and informal questionnaires, the researchers identified the social aspect as the most highly weighted: it carries the maximum priority among the environmental and economic categories. From here, the researchers suggest the use of an ethnographic design approach and Space Syntax as observational and computational methods for developing future eco-lodges in the Siwa oasis. These methods are used to study the social spaces of the Babenshal eco-lodge as a case study. This hybrid method is considered a beginning for building a Social Assessment Tool (SAT) for ecological tourism buildings located in Siwa, as a case of the Egyptian Western Desert community. Towards livable social spaces, the proposed SAT is planned to provide optimum measurable weightings for the social-aspect priorities of future Siwan eco-lodges. Finally, recommendations are proposed for enhancing the SAT so that it is more closely tied to the sensitive desert biome of the Siwa oasis and can adapt to the continuous social and environmental changes of the oasis.

Keywords: ecolodge, social aspect, space syntax, Siwa Oasis

Procedia PDF Downloads 106
714 Analysis of One-Way and Two-Way FSI Approaches to Characterise the Flow Regime and the Mechanical Behaviour during Closing Manoeuvring Operation of a Butterfly Valve

Authors: M. Ezkurra, J. A. Esnaola, M. Martinez-Agirre, U. Etxeberria, U. Lertxundi, L. Colomo, M. Begiristain, I. Zurutuza

Abstract:

Butterfly valves are widely used industrial piping components, acting as on-off and flow-controlling devices. The main challenge in the design process of this type of valve is correct dimensioning to ensure proper mechanical performance as well as to minimise the flow losses that affect the efficiency of the system. Butterfly valves are typically dimensioned in the closed position based on mechanical approaches considering uniform hydrostatic pressure, whereas the flow losses are analysed by means of CFD simulations. The main limitation of these approaches is that they consider neither the influence of the dynamics of the manoeuvring stage nor coupled phenomena. Recent works have included the influence of the flow on the mechanical behaviour for different opening angles by means of a one-way FSI approach. However, these works consider steady-state flow for the selected angles and do not capture the effect of the transient flow evolution during the manoeuvring stage. A two-way FSI modelling approach could overcome such limitations and provide more accurate results; nevertheless, the use of this technique is limited by the increase in computational cost. In the present work, the applicability of one-way and two-way FSI approaches is evaluated for the analysis of butterfly valves, showing that not considering fluid-structure coupling means not capturing the most critical situation for the valve disc.

Keywords: butterfly valves, fluid-structure interaction, one-way approach, two-way approach

Procedia PDF Downloads 145
713 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression, and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyse whether there is an optimal number of trees in random forests and how the performance of random forests changes as the number of trees increases, using sample health data sets in the R programming environment. Method: We analysed the performance of random forests as the number of trees grows, doubling the number of trees at every iteration, using the "randomForest" package in R. For determining the minimum and optimal number of trees, we performed McNemar's test and used the area under the ROC curve, respectively. Results: At the end of the analysis, it was found that as the number of trees grows, the performance of the forest is not always better than that of forests with fewer trees. In other words, a larger number of trees only increases computational cost without necessarily improving performance. Conclusion: Although the general practice in using random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether random forest performance changes as the number of trees increases.
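
A hedged sketch of the tree-count sweep in scikit-learn rather than R (the study itself used R's randomForest package); the bundled breast-cancer data set stands in for the health data sets, and the specific tree counts are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Double the number of trees at every iteration and track the test AUC.
# The bundled breast-cancer data stands in for the health data sets of the study.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in (8, 16, 32, 64, 128, 256, 512):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0, n_jobs=-1)
    rf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{n_trees:4d} trees -> AUC = {auc:.4f}")
# Typically the AUC plateaus well before the largest forests, so additional trees
# mainly add computational cost.
```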

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 376
712 Airflow Characteristics and Thermal Comfort of Air Diffusers: A Case Study

Authors: Tolga Arda Eraslan

Abstract:

The quality of the indoor environment is significant for occupants' health, comfort, and productivity, and as Covid-19 spread throughout the world, people started spending most of their time indoors. Since buildings are getting bigger, mechanical ventilation systems are widely used where natural ventilation is insufficient. Four primary tasks of a ventilation system have been identified: indoor air quality, comfort, contamination control, and energy performance. To fulfil such requirements, air diffusers, which are part of the ventilation system, have entered our lives in different airflow distribution systems. Detailed observations are needed to ensure that such devices provide high levels of comfort effectiveness and energy efficiency, and this study addresses these needs. The objective of this article is to observe the air characteristics of different air diffusers at different angles and their effect on people, using a thermal comfort model in CFD simulation, and to validate the outputs with measured data from a simulated office room. The office room created for validation is equipped with many thermal sensors, including at head height, tabletop, and foot level. In addition, the CFD simulations were carried out using the measured temperature and velocity of the air leaving the supply diffuser. The results, considering the flow interaction between the diffusers and their surroundings, showed good visual agreement.

Keywords: computational fluid dynamics, fanger’s model, predicted mean vote, thermal comfort

Procedia PDF Downloads 91
711 Investigating the Role of Dystrophin in Neuronal Homeostasis

Authors: Samantha Shallop, Hakinya Karra, Tytus Bernas, Gladys Shaw, Gretchen Neigh, Jeffrey Dupree, Mathula Thangarajh

Abstract:

Abnormal neuronal homeostasis is considered a structural correlate of the cognitive deficits in Duchenne Muscular Dystrophy (DMD). Neurons are highly polarized cells with multiple dendrites but a single axon. Trafficking of cellular organelles is highly regulated, with cargo in the somatodendritic region of the neuron not permitted to enter the axonal compartment. We investigated the molecular mechanisms that regulate organelle trafficking in neurons using a multimodal approach, including high-resolution structured illumination microscopy, proteomics, immunohistochemistry, and computational modeling. We examined the expression of ankyrin-G, the master regulator controlling neuronal polarity. The expression of ankyrin-G and the morphology of the axon initial segment were profoundly abnormal in CA1 hippocampal neurons in the mdx52 animal model of DMD. Ankyrin-G colocalized with the kinesin KIF5A, the anterograde protein transporter, with higher levels in older mdx52 mice than in younger mdx52 mice. These results suggest that functional trafficking from the somatodendritic compartment is abnormal. Our data suggest that dystrophin deficiency compromises neuronal homeostasis via ankyrin-G-based mechanisms.

Keywords: neurons, axonal transport, duchenne muscular dystrophy, organelle transport

Procedia PDF Downloads 70
710 Enhancing Aerodynamic Performance of Savonius Vertical Axis Turbine Used with Triboelectric Generator

Authors: Bhavesh Dadhich, Fenil Bamnoliya, Akshita Swaminathan

Abstract:

This project aims to design a system that generates energy from the wind flow induced by vehicles moving on the road, or from wind flow in compact areas, turning otherwise wasted energy into useful energy. This is envisaged through the design and aerodynamic performance improvement of a Savonius vertical axis wind turbine rotor used in an integrated system with a Triboelectric Nanogenerator (TENG) that can generate a good amount of electrical energy. Aerodynamic calculations are performed numerically using computational fluid dynamics software, and the TENG's performance is evaluated analytically. The turbine's coefficient of power is validated against published results for an inlet velocity of 7 m/s at a tip speed ratio of 0.75 and is found to agree reasonably well with the experimental results. The baseline design is modified with a new blade arc angle and rotor position angle based on the parameter ranges recommended by previous researchers. Simulations have been performed for tip speed ratio values ranging from 0.25 to 1.5 in intervals of 0.25, at two applicable free-stream velocities of 5 m/s and 7 m/s. Finally, the CFD performance results for the newly designed VAWT are used as input for the analytical performance prediction of the triboelectric nanogenerator. The results show that this approach could be feasible and useful for small power source applications.
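
A minimal sketch of the two performance quantities quoted above, the power coefficient and the tip speed ratio; the rotor dimensions and the aerodynamic torque are assumed values, not results from the study's CFD runs.

```python
# Power coefficient Cp = T*omega / (0.5 * rho * A * V^3) and tip speed ratio
# TSR = omega*R / V for a Savonius rotor. Rotor size and torque are assumptions.
rho = 1.225            # air density, kg/m^3
V = 7.0                # free-stream velocity, m/s (as in the validation case)
R, H = 0.25, 0.5       # rotor radius and height, m (assumed)
A = 2 * R * H          # swept area of a Savonius rotor (diameter x height)

tsr = 0.75
omega = tsr * V / R    # rotational speed, rad/s, corresponding to the quoted TSR
torque = 0.5           # N*m, assumed aerodynamic torque from CFD
cp = torque * omega / (0.5 * rho * A * V**3)
print(f"omega = {omega:.1f} rad/s, Cp = {cp:.3f}")
```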

Keywords: savonius turbine, power, overlap ratio, tip speed ratio, TENG

Procedia PDF Downloads 97
709 Numerical Investigation of AL₂O₃ Nanoparticle Effect on a Boiling Forced Swirl Flow Field

Authors: Ataollah Rabiee, Amir Hossein Kamalinia, Alireza Atf

Abstract:

One of the most important issues in the design of nuclear fusion power plants is heat removal from the hottest region at the divertor. Various methods can be employed to improve the heat transfer efficiency, such as generating turbulent flow and injecting nanoparticles into the host fluid. In the current study, water/Al₂O₃ nanofluid forced swirl flow boiling has been investigated using a homogeneous thermophysical model within the Eulerian-Eulerian framework through a twisted-tape tube, and the boiling phenomenon was modeled using the Rensselaer Polytechnic Institute (RPI) approach. In addition to comparing the results with the experimental data, with which they show reasonable agreement, it was evidenced that stronger flow mixing results in a more uniform bulk temperature and a lower wall temperature along the twisted-tape tube. Increasing the nanoparticle concentration of Al₂O₃ in the boiling flow field leads to a reduced vapor volume fraction and wall temperature. The computational fluid dynamics (CFD) results show that the average heat transfer coefficient in the tube increases both with increasing nanoparticle concentration and with the insertion of the twisted tape, which significantly affects the thermal field of the boiling flow.
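
Homogeneous nanofluid models of the kind mentioned above require effective mixture properties; the sketch below uses common textbook mixture rules (volume-weighted density and heat capacity, Maxwell's model for conductivity), which are illustrative and not necessarily the closures used in this study. The base-fluid and Al₂O₃ property values are typical handbook numbers.

```python
# Common mixture rules for homogeneous nanofluid properties (illustrative only):
# volume-weighted density and heat capacity, plus Maxwell's model for the
# effective thermal conductivity. Property values are typical handbook numbers.
def nanofluid_properties(phi, rho_f=998.0, rho_p=3970.0,
                         cp_f=4182.0, cp_p=765.0, k_f=0.613, k_p=40.0):
    rho_nf = (1 - phi) * rho_f + phi * rho_p
    cp_nf = ((1 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho_nf
    k_nf = k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) / (
        k_p + 2 * k_f - phi * (k_p - k_f))
    return rho_nf, cp_nf, k_nf

for phi in (0.0, 0.01, 0.03):   # nanoparticle volume fractions (assumed)
    rho, cp, k = nanofluid_properties(phi)
    print(f"phi = {phi:4.2f}: rho = {rho:7.1f} kg/m^3, "
          f"cp = {cp:6.1f} J/kgK, k = {k:.3f} W/mK")
```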

Keywords: nanoparticle, boiling, CFD, two phase flow, alumina, ITER

Procedia PDF Downloads 103
708 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of outbreak control efforts. This study uses environmental data from two U.S. federal government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step is data preparation, which includes handling outliers and missing values to ensure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared in combination with optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using the Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal of optimization is to select the strategy with the fewest errors, lowest cost, highest productivity, or best attainable results, and optimization is widely employed in many fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search integrated with a Genetic Algorithm is introduced for input feature selection, and it yields an important improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
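
A hedged sketch of the model-comparison step only, evaluated with MAE and RMSE on synthetic weekly data; it does not use the NOAA/CDC data and omits the Harmony Search/Genetic Algorithm feature-selection stage. Feature scaling is also omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(7)
# Synthetic stand-in for weekly climate features (temperature, precipitation, NDVI, humidity).
X = rng.normal(size=(500, 4))
cases = 50 + 12 * X[:, 0] + 8 * X[:, 1] - 5 * X[:, 2] + rng.normal(0, 6, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, cases, random_state=0)
models = {
    "Huber": HuberRegressor(),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(C=10.0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name:5s}  MAE = {mae:5.2f}  RMSE = {rmse:5.2f}")
```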

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 33
707 De Novo Design of a Minimal Catalytic Di-Nickel Peptide Capable of Sustained Hydrogen Evolution

Authors: Saroj Poudel, Joshua Mancini, Douglas Pike, Jennifer Timm, Alexei Tyryshkin, Vikas Nanda, Paul Falkowski

Abstract:

On the early Earth, protein-metal complexes likely harvested energy from a reduced environment. These complexes would have been precursors to the metabolic enzymes of ancient organisms. Hydrogenase, an essential enzyme in most anaerobic organisms for the reduction and oxidation of hydrogen in the environment, is likely one of the earliest evolved enzymes. To attempt to reinvent a precursor of modern hydrogenase, we computationally designed a short thirteen-amino-acid peptide that binds nickel, the catalytic transition metal often required by hydrogenase. This simple complex can achieve hundreds of hydrogen evolution cycles using light energy over a broad range of temperature and pH. Biophysical and structural investigations strongly indicate that the peptide forms a di-nickel active site analogous to that of acetyl-CoA synthase, an ancient protein central to carbon reduction in the Wood-Ljungdahl pathway and capable of hydrogen evolution. This work demonstrates that, prior to the complex evolution of multidomain enzymes, early peptide-metal complexes could have catalyzed energy transfer from the environment on the early Earth and enabled the evolution of modern metabolism.

Keywords: hydrogenase, prebiotic enzyme, metalloenzyme, computational design

Procedia PDF Downloads 196
706 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing

Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah

Abstract:

The aim of this paper is to present a distributed implementation of the type-2 fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents, following the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for performing big data image segmentation. In this work we focus on the application of this algorithm to the processing of a big MRI (Magnetic Resonance Imaging) image of size (n x m). The image is encapsulated in the mobile agent team leader in order to be split into (m x n) pixels, one per AVPE. Each AVPE performs and exchanges the segmentation results and maintains asynchronous communication with its team leader until the convergence of the algorithm. Some interesting experimental results are obtained in terms of accuracy and efficiency of the proposed implementation, thanks to the several interesting capabilities of mobile agents introduced in this distributed computational model.

Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing

Procedia PDF Downloads 399
705 Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels

Authors: Anders Schou Simonsen, Thomas Condra, Kim Sørensen

Abstract:

Various processes are modelled using a discrete phase in which particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species, etc. Discrete phases are typically modelled using parcels, where each parcel represents a collection of particles sharing properties such as temperature and velocity. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located, which can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete phase and a continuous plug-flow phase. This is done using triangular parcels, which span between nodes that follow the dynamics of single droplets; the triangular parcels are thus propagated using their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth, continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster, and yields smoother exchange rates than the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly with the new approach.

Keywords: CFD, coupling, discrete phase, parcel

Procedia PDF Downloads 241
704 Detection of Important Biological Elements in Drug-Drug Interaction Occurrence

Authors: Reza Ferdousi, Reza Safdari, Yadollah Omidi

Abstract:

Drug-drug interactions (DDIs) are a main cause of adverse drug reactions, and the functional and molecular complexity of drug behaviour in the human body makes them hard to prevent and treat. With the aid of new technologies derived from mathematical and computational science, the DDI problem can be addressed with minimum cost and effort. Market basket analysis is known as a powerful method for identifying co-occurrences and discovering patterns and frequencies of elements. In this research, we used market basket analysis to identify important bio-elements in the occurrence of DDIs. For this, we collected all known DDIs from DrugBank, and the obtained data were analyzed by market basket analysis. We investigated all drug-enzyme, drug-carrier, drug-transporter, and drug-target associations. To determine the importance of the extracted bio-elements, the extracted rules were evaluated in terms of confidence and support. Market basket analysis of over 45,000 known DDIs reveals more than 300 important rules that can be used to identify DDIs, with the CYP450 family being the most frequently shared bio-element. We applied the extracted rules to over 2,000,000 unknown drug pairs, which led to the discovery of more than 200,000 potential DDIs. Analysis of the underlying reason behind the DDI phenomenon can help to predict and prevent DDI occurrence, and ranking the extracted rules based on their strength can be a supportive tool for predicting the outcome of an unknown DDI.
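
A hedged sketch of the market-basket step: each known interacting drug pair is treated as a "basket" of the bio-elements it shares, and frequent itemsets and rules are mined with support and confidence thresholds. The transactions below are made-up placeholders, not DrugBank records, and the thresholds are assumptions.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each "basket" lists the bio-elements shared by one interacting drug pair.
# The transactions below are made-up placeholders, not DrugBank records.
baskets = [
    ["CYP3A4", "P-gp", "target:COX-1"],
    ["CYP3A4", "CYP2D6"],
    ["CYP3A4", "P-gp"],
    ["CYP2C9", "target:VKORC1"],
    ["CYP3A4", "P-gp", "CYP2D6"],
    ["CYP2C9", "target:VKORC1", "P-gp"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(baskets), columns=te.columns_)

frequent = apriori(onehot, min_support=0.3, use_colnames=True)   # support threshold (assumed)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```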

Keywords: drug-drug interaction, market basket analysis, rule discovery, important bio-elements

Procedia PDF Downloads 291