Search results for: elasto-plastic model
13574 Common Sense Leadership in the Example of Turkish Political Leader Devlet Bahçeli
Authors: B. Gültekin, T. Gültekin
Abstract:
Peace diplomacy is the most important international tool for maintaining peace all over the world. This study consists of three parts. In the first part, the leadership of Devlet Bahçeli, leader of the Nationalist Movement Party, will be introduced as a tool of peace communication and peace management. In this part, peace communication will also be explained through the peace leadership traits of Devlet Bahçeli, one of the political leaders who effectively represents the concepts of compromise and agreement across different sides of politics. The second part of the study analyzes Devlet Bahçeli's leadership within the frame of peace communication, and the final part is about creating an original public communication model for public diplomacy based on Devlet Bahçeli as an example. The main purpose of this study is thus to develop an original peace communication model comprising peace modules, peace management projects, and the original dialogue procedures and protocols exhibited in the policies of Devlet Bahçeli. The political leadership represented by Devlet Bahçeli inspires political leaders to provide peace communication. In this study, the principles and policies of Devlet Bahçeli's peace leadership will be explained as an original model on a peace communication platform.
Keywords: public diplomacy, dialogue management, peace leadership, peace diplomacy
Procedia PDF Downloads 171
13573 Bioeconomic Modeling for the Sustainable Exploitation of Three Key Marine Species in Morocco
Authors: I. Ait El Harch, K. Outaaoui, Y. El Foutayeni
Abstract:
This study aims to deepen the understanding of, and to optimize, fishing activity in Morocco by holistically integrating biological and economic aspects. We develop a biological equilibrium model in which three competing species follow natural growth described by logistic equations, taking into account density and the competition between them. The integration of human intervention adds a realistic dimension to our model: a company specifically targets the three species, thus influencing population dynamics through its fishing activities. The aim of this work is to determine the fishing effort that maximizes the company's profit, subject to the constraints associated with conserving ecosystem equilibrium.
Keywords: bioeconomic modeling, optimization techniques, linear complementarity problem (LCP), biological equilibrium, maximizing profits
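The structure described above can be sketched numerically. The snippet below integrates a three-species logistic competition model with a common harvesting effort and grid-searches the effort that maximizes equilibrium profit; all parameter values are hypothetical illustrations, not the study's calibrated Moroccan fishery data, and the grid search stands in for the LCP-based optimization.

```python
import numpy as np

def steady_state(E, r, K, A, q, steps=5000, dt=0.04):
    """Integrate logistic competition dynamics with harvesting to equilibrium."""
    x = K / 2.0
    for _ in range(steps):
        growth = r * x * (1.0 - (x + A @ x) / K)   # logistic + competition
        x = np.maximum(x + dt * (growth - q * E * x), 0.0)
    return x

def profit(E, r, K, A, q, p, c):
    """Equilibrium profit of a firm harvesting all three stocks at effort E."""
    x = steady_state(E, r, K, A, q)
    return float(p @ (q * E * x) - c * E)

# Hypothetical parameters (not from the study): three competing species
r = np.array([0.8, 0.6, 0.9])        # intrinsic growth rates
K = np.array([100.0, 80.0, 120.0])   # carrying capacities
A = np.array([[0.0, 0.10, 0.05],     # interspecific competition coefficients
              [0.10, 0.0, 0.08],
              [0.05, 0.08, 0.0]])
q = np.array([0.02, 0.03, 0.025])    # catchabilities
p = np.array([10.0, 12.0, 8.0])      # unit prices
c = 5.0                              # cost per unit effort

efforts = np.linspace(0.0, 30.0, 61)
profits = [profit(E, r, K, A, q, p, c) for E in efforts]
E_star = float(efforts[int(np.argmax(profits))])
```

With these made-up numbers the profit curve is zero at zero effort, rises to an interior maximum, and falls as over-harvesting depresses the stocks.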
Procedia PDF Downloads 29
13572 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network
Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang
Abstract:
As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but a lack of data leads to imperfect model learning. By analysing the data-scale requirements of deep learning with the application of GUI generation in mind, we find that collecting a GUI dataset is a time-consuming and labour-intensive project that struggles to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on an original small-scale dataset to produce a large number of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent neural network learns the sequence relationships and characteristics of the data so that the generative adversarial network generates reasonable data, which is then used to expand the Rico dataset. With this network structure, the characteristics of the collected data can be analysed well, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.
Keywords: GUI, deep learning, GAN, data augmentation
Procedia PDF Downloads 187
13571 Salting Effect in Partially Miscible Systems of Water/Acetic Acid/1-Butanol at 298.15 K: Experimental Study and Estimation of New Solvent-Solvent and Salt-Solvent Binary Interaction Parameters for the NRTL Model
Authors: N. Bourayou, A. -H. Meniai, A. Gouaoura
Abstract:
The presence of a salt can either raise or lower the distribution coefficient of a solute (here, acetic acid) in liquid-liquid equilibria; these phenomena are known as salting-out and salting-in, respectively. The distribution coefficient of the solute is defined as the ratio of its composition in the solvent-rich phase to its composition in the diluent (water)-rich phase. The effects of a monovalent salt, sodium chloride, and a bivalent salt, sodium sulfate, on the distribution of acetic acid between 1-butanol and water at 298.15 K were experimentally shown to modify the liquid-liquid equilibrium of the water/acetic acid/1-butanol system in favour of the solvent extraction of acetic acid from an aqueous solution with 1-butanol, particularly at high concentrations of both salts. Both salts studied were found to salt out acetic acid in varying degrees. The experimentally measured data were well correlated by the Eisen-Joffe equation. The NRTL model for solvent mixtures containing salts was also able to correlate the present liquid-liquid equilibrium data well, using the regressed salt-concentration coefficients for the salt-solvent interaction parameters together with the solvent-solvent interaction parameters obtained from the same system without salt. The calculated phase equilibrium was in quite good agreement with the experimental data, showing the ability of the NRTL model to correlate the salt effect on the liquid-liquid equilibrium.
Keywords: activity coefficient, Eisen-Joffe, NRTL model, sodium chloride
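For readers unfamiliar with NRTL, the binary form of the model can be written down compactly. The sketch below evaluates the standard binary NRTL activity coefficients; the interaction parameters used in the example are arbitrary placeholders, not the salt-regressed values estimated in the study.

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients (gamma1, gamma2) for a binary mixture via NRTL.

    tau12, tau21 are the dimensionless binary interaction parameters and
    alpha is the non-randomness factor (0.3 is a common default).
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Placeholder parameters, for illustration only
g1, g2 = nrtl_binary(0.4, tau12=1.2, tau21=0.8)
```

Two standard sanity checks hold by construction: the activity coefficient of a pure component is 1, and at infinite dilution ln(gamma1) reduces to tau21 + tau12*exp(-alpha*tau12).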
Procedia PDF Downloads 285
13570 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to immigration figures, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the lists and numbers published by the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose of why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. While it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in a first step by re-using surveys conducted by different researchers in recent years within the population of U.S. citizens residing abroad: surveys questioning the personal situation in the context of tax, compliance, citizenship and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error, i.e., the degree to which observable variables describe the latent variables.
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered high correlations, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
Procedia PDF Downloads 223
13569 Analog Input Output Buffer Information Specification Modelling Techniques for Single Ended Inter-Integrated Circuit and Differential Low Voltage Differential Signaling I/O Interfaces
Authors: Monika Rawat, Rahul Kumar
Abstract:
Input/Output Buffer Information Specification (IBIS) models are used for describing the analog behavior of the input/output (I/O) buffers of a digital device, and they are widely used to perform signal integrity analysis. The advantages of using IBIS models include their simple structure, IP protection and fast simulation time with reasonable accuracy. As the design complexity of drivers and receivers increases, capturing exact behavior from the transistor-level model in the IBIS model becomes an essential task for achieving better accuracy. In this paper, an improvement in the existing methodology for generating IBIS models for complex I/O interfaces such as the Inter-Integrated Circuit (I2C) and Low Voltage Differential Signaling (LVDS) interfaces is proposed. Furthermore, the accuracy and computational performance of the standard method and the proposed approach with respect to SPICE are presented. These investigations will be useful for further improving the accuracy of IBIS models and enhancing their wider acceptance.
Keywords: IBIS, signal integrity, open-drain buffer, low voltage differential signaling, behavior modelling, transient simulation
Procedia PDF Downloads 199
13568 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model
Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh
Abstract:
A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a protection length at the downstream side of the structure sufficient to overcome an excessive exit gradient. The Geo-studio software was used to analyze 1200 different cases. For each case, the length of protection and volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover and the weight distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the required population size; the minimum value giving a stable global optimum is 30,000, while the other variables have little effect on the optimum solution.
Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, Geo-studio, uplift pressure, exit gradient, factor of safety
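The genetic algorithm loop described above (selection, crossover, mutation, with crossover and mutation probabilities as tuning parameters) can be sketched in a few lines. This is a generic real-coded GA on a toy objective, not the authors' MATLAB code or the ANN-based objective; the bounds, probabilities and target are illustrative.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=60, generations=120,
                     crossover_p=0.8, mutation_p=0.1, seed=42):
    """Minimal real-coded genetic algorithm (illustrative sketch)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():                                # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < crossover_p:         # arithmetic crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for i, (lo, hi) in enumerate(bounds):  # uniform reset mutation
                if rng.random() < mutation_p:
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = children
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# Toy objective standing in for the (negated) structure-cost objective
target = [2.0, -1.0]
f = lambda v: -((v[0] - target[0]) ** 2 + (v[1] - target[1]) ** 2)
solution = genetic_optimize(f, bounds=[(-5, 5), (-5, 5)])
```

In the paper's setup the fitness call would be replaced by the trained ANN evaluating protection length and structure volume for a candidate geometry.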
Procedia PDF Downloads 325
13567 Contextual SenSe Model: Word Sense Disambiguation using Sense and Sense Value of Context Surrounding the Target
Authors: Vishal Raj, Noorhan Abbas
Abstract:
Ambiguity in NLP (natural language processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric and referential ambiguities. This study is focused mainly on solving the issue of lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used the lemma and Part of Speech (POS) tokens of words for training and testing: the lemma adds generality, and the POS adds the properties of the word to the token. We have designed a novel method to create an affinity matrix that captures the affinity between any pair of lemma_POS tokens (a token where the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens using the affinity matrix under a hierarchy of the POS of the lemma. Furthermore, three different mechanisms to predict the sense of a target word using the affinity/similarity values are devised. Each contextual token contributes to the sense of the target word with some value, and whichever sense gets the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating the sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to explain, in contrast to contemporary deep learning models, which are intricate, time-intensive to run and challenging to interpret. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset.
The results indicate that despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
Keywords: word sense disambiguation (WSD), contextual sense model (CSM), most frequent sense (MFS), part of speech (POS), natural language processing (NLP), OOV (out of vocabulary), lemma_POS (a token where lemma and POS of word are joined by underscore), information retrieval (IR), machine translation (MT)
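The lemma_POS affinity matrix at the heart of CSM can be illustrated with a simple co-occurrence count. The abstract does not give the exact affinity score, so the windowed count below is an illustrative stand-in; the tiny two-sentence corpus is likewise made up.

```python
from collections import Counter, defaultdict

def build_affinity_matrix(tagged_sentences, window=2):
    """Co-occurrence-based affinity between lemma_POS tokens.

    `tagged_sentences` is a list of lists of (lemma, pos) pairs. The
    affinity score here is a plain co-occurrence count within a
    +/-window context; the paper's exact scoring function is not
    specified in the abstract, so this is a stand-in.
    """
    affinity = defaultdict(Counter)
    for sent in tagged_sentences:
        tokens = [f"{lemma}_{pos}" for lemma, pos in sent]
        for i, tok in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    affinity[tok][tokens[j]] += 1
    return affinity

corpus = [[("bank", "NOUN"), ("charge", "VERB"), ("interest", "NOUN")],
          [("river", "NOUN"), ("bank", "NOUN"), ("erode", "VERB")]]
aff = build_affinity_matrix(corpus)
```

The row for `bank_NOUN` then links it to both financial context (`charge_VERB`, `interest_NOUN`) and riparian context (`river_NOUN`), which is exactly the signal sense clustering exploits.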
Procedia PDF Downloads 111
13566 Incorporating Spatial Selection Criteria with Decision-Maker Preferences of a Precast Manufacturing Plant
Authors: M. N. A. Azman, M. S. S. Ahamad
Abstract:
The Construction Industry Development Board of Malaysia has been actively promoting the use of precast manufacturing in the local construction industry over the last decade. In an era of rapid technological change, precast manufacturing significantly contributes to improving construction activities and ensuring sustainable economic growth. Current studies on the location decision of precast manufacturing plants aimed at enhancing local economic development are scarce. To address this gap, the present research establishes a new set of spatial criteria, such as attribute maps and preference weights, derived from a survey of local industry decision makers. These data represent the input parameters for the MCE-GIS site selection model, for which the weighted linear combination method is used. Verification tests on the model were conducted to determine the potential precast manufacturing sites in the state of Penang, Malaysia. The tests yield a predicted area of 12.87 acres located within a designated industrial zone. Although the model is developed specifically for precast manufacturing plants, it can nevertheless be employed for other types of industries by following the methodology and guidelines proposed in the present research.
Keywords: geographical information system, multi-criteria evaluation, industrialised building system, civil engineering
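The weighted linear combination step used in the MCE-GIS model is straightforward to demonstrate on raster data: normalize each criterion layer, multiply by its preference weight, and sum. The criterion names and weights below are hypothetical, not the values elicited from the Malaysian decision makers.

```python
import numpy as np

def weighted_linear_combination(criteria, weights):
    """Suitability surface from criterion rasters via WLC.

    `criteria` maps a criterion name to a 2D array; each raster is
    min-max normalized to [0, 1] before the weighted sum, and the
    weights must sum to 1 so scores stay in [0, 1].
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    suitability = None
    for name, raster in criteria.items():
        r = np.asarray(raster, dtype=float)
        r = (r - r.min()) / (r.max() - r.min())   # min-max normalize
        term = weights[name] * r
        suitability = term if suitability is None else suitability + term
    return suitability

# Hypothetical criterion rasters for a small 4x4 study area
rng = np.random.default_rng(0)
criteria = {"road_access": rng.random((4, 4)),
            "land_cost":   rng.random((4, 4)),
            "labour":      rng.random((4, 4))}
weights = {"road_access": 0.5, "land_cost": 0.3, "labour": 0.2}
score = weighted_linear_combination(criteria, weights)
```

In the actual workflow each raster would come from GIS attribute maps, and cost-type criteria such as land cost would be inverted before normalization.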
Procedia PDF Downloads 289
13565 Approach to Quantify Groundwater Recharge Using GIS Based Water Balance Model
Authors: S. S. Rwanga, J. M. Ndambuki
Abstract:
Groundwater quantification needs a method that is not only flexible but also reliable in order to accurately quantify its spatial and temporal variability. As groundwater is dynamic and interdisciplinary in nature, an integrated approach of remote sensing (RS) and GIS techniques is very useful in various groundwater management studies. Thus, the GIS water balance model (WetSpass), together with remote sensing (RS), can be used to quantify groundwater recharge. This paper discusses the concept of WetSpass in combination with GIS for the quantification of recharge, with a view to managing water resources in an integrated framework. The paper presents the simulation procedures and the expected output after simulation. Preliminary data are presented from GIS output only.
Keywords: groundwater, recharge, GIS, WetSpass
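The core of a WetSpass-style water balance can be shown per grid cell: recharge is what remains of precipitation after interception, surface runoff and evapotranspiration. The sketch below applies that balance to small arrays of seasonal values in mm; the numbers are illustrative, not calibrated WetSpass inputs, and real WetSpass computes each balance term from land use, soil and meteorological rasters.

```python
import numpy as np

def recharge_grid(precip, interception, runoff, evapotrans):
    """Distributed water balance per cell: R = P - I - S - ET.

    All inputs are 2D arrays in mm per season; negative balances are
    clipped to zero since recharge cannot be negative.
    """
    recharge = precip - interception - runoff - evapotrans
    return np.clip(recharge, 0.0, None)

precip       = np.array([[600., 550.], [620., 580.]])
interception = np.array([[ 60.,  40.], [ 55.,  50.]])
runoff       = np.array([[150., 200.], [160., 170.]])
evapotrans   = np.array([[300., 320.], [310., 400.]])
R = recharge_grid(precip, interception, runoff, evapotrans)
```

Cells where losses exceed precipitation (e.g. the second column here) yield zero recharge rather than a physically meaningless negative value.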
Procedia PDF Downloads 451
13564 Application of Response Surface Methodology to Optimize the Factors Influencing the Wax Deposition of Malaysian Crude Oil
Authors: Basem Elarbe, Ibrahim Elganidi, Norida Ridzuan, Norhyati Abdullah
Abstract:
Wax deposition in production pipelines and transportation tubing from offshore to onshore is a critical issue in the oil and gas industry due to low-temperature conditions. It may lead to a reduction in production, shut-ins, plugging of pipelines and increased fluid viscosity. The most popular approach to solving this issue is the injection of a wax inhibitor into the pipeline. This research aims to determine the amount of wax deposition of Malaysian crude oil by estimating the effective parameters with the response surface methodology (RSM), using Design-Expert version 7.1.6. Important parameters affecting wax deposition, such as cold finger temperature, inhibitor concentration and experimental duration, were investigated. It can be concluded that the SA-co-BA copolymer had a high capability of reducing wax under different conditions; the minimum point of wax deposition was found at 300 rpm, 14 °C, 1 h and 1200 ppm, where the amount of wax collected was 0.12 g. The RSM approach was applied using a rotatable central composite design (CCD) to minimize the wax deposit amount. The analysis of variance (ANOVA) for the regression model revealed an R² value of 0.9906, indicating that the model explains 99.06% of the data variation, with just 0.94% of the total variation left unexplained. The model is therefore highly significant, confirming a close agreement between the experimental and the predicted values. In addition, the results show that the amount of wax deposit decreased significantly with the increase of temperature and of the concentration of poly(stearyl acrylate-co-behenyl acrylate) (SABA), which were set at 14 °C and 1200 ppm, respectively. The amount of wax deposit was successfully reduced to a minimum value of 0.01 g after optimization.
Keywords: wax deposition, SABA inhibitor, RSM, operation factors
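The regression behind an RSM/CCD analysis is an ordinary least-squares fit of a full second-order polynomial in the factors. The sketch below fits such a surface and reports R²; it uses synthetic two-factor data from a known quadratic, not the Design-Expert wax-deposition dataset.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a full second-order response surface.

    Model: y = b0 + sum bi*xi + sum bii*xi^2 + sum bij*xi*xj.
    Returns the coefficient vector and the R^2 of the fit.
    """
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                    # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]               # pure quadratic
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    pred = A @ beta
    ss_res = float(np.sum((y - pred) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return beta, 1.0 - ss_res / ss_tot

# Synthetic noiseless data from a known quadratic surface
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = (2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 0] ** 2
     - 0.3 * X[:, 1] ** 2 + 0.6 * X[:, 0] * X[:, 1])
beta, r2 = fit_quadratic_rsm(X, y)
```

With noiseless data the fit recovers the generating coefficients and R² is essentially 1; with real CCD runs, R² (0.9906 in the study) measures how much of the response variation the quadratic surface captures.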
Procedia PDF Downloads 287
13563 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study is research on predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of cutting tools decreases, they damage the raw material they are processing; this study aims to predict the remaining life of a cutting tool based on that damage. For this, hole photos were collected from a hole-drilling machine over 8 months and labeled in 5 classes according to hole quality, transforming the problem into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset, and a hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models are compared, the model using convolutional neural networks gives successful results, with a 74% accuracy rate. In preliminary studies, where the dataset was arranged to include only the best and worst classes, the binary classification model reached roughly 93% accuracy. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. The experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
Procedia PDF Downloads 80
13562 Design and Implementation of a Platform for Adaptive Online Learning Based on Fuzzy Logic
Authors: Budoor Al Abid
Abstract:
Educational systems are increasingly provided as open online services, providing guidance and support for individual learners. To adapt such learning systems, a proper evaluation must be made. This paper builds the evaluation model Fuzzy C-Means Adaptive System (FCMAS), based on data mining techniques, to assess the difficulty of questions. The following steps are implemented. First, a dataset from an international online learning system (slepemapy.cz) is used; it contains over 1,300,000 records with 9 features covering student, question and answer information with feedback evaluation. Next, normalization is applied as a preprocessing step. Then the FCM clustering algorithm is used to adapt the difficulty of the questions; the result is data labeled into three clusters (easy, intermediate, difficult) depending on the highest membership weight, with the FCM algorithm assigning a label to every question. Finally, a Random Forest (RF) classifier is constructed on the clustered dataset, using 70% of the dataset for training and 30% for testing; the model achieves a 99.9% accuracy rate. This approach improves adaptive e-learning systems because it depends on student behavior and gives more accurate results in the evaluation process than an evaluation system that depends on feedback only.
Keywords: machine learning, adaptive, fuzzy logic, data mining
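The clustering step above is standard fuzzy c-means: alternate between computing membership-weighted centers and updating the soft memberships. The sketch below runs plain FCM on synthetic one-dimensional "difficulty" scores; the data and parameters are illustrative, not the slepemapy.cz records.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means (the clustering step of FCMAS, as a sketch).

    Returns cluster centers and the membership matrix U, whose rows
    sum to 1; the fuzzifier m controls how soft the memberships are.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances to each center, floored to avoid divide-by-zero
        d = np.maximum(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), 1e-12)
        inv = d ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Synthetic 1D difficulty scores forming three well-separated groups
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.1, 0.02, 50),
                    rng.normal(0.5, 0.02, 50),
                    rng.normal(0.9, 0.02, 50)])[:, None]
centers, U = fuzzy_c_means(X, c=3)
```

Labeling each question by its highest membership (argmax over the row of U) yields the easy/intermediate/difficult split used to train the Random Forest.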
Procedia PDF Downloads 198
13561 Learners as Consultants: Knowledge Acquisition and Client Organisations - A Student as Producer Case Study
Authors: Barry Ardley, Abi Hunt, Nick Taylor
Abstract:
As a theoretical and practical framework, this study uses the student-as-producer approach to learning in higher education, as adopted by the Lincoln International Business School, University of Lincoln, UK. Student as producer positions learners as skilled and capable agents, able to participate as partners with tutors in live research projects. To illuminate the nature of this approach to learning and to highlight its critical issues, the authors report on two guided student consultancy projects. These were set up with the assistance of two local organisations in the city of Lincoln, UK. Using the student-as-producer model to deliver the projects enabled learners to acquire and develop a range of key skills and knowledge not easily accessible in more traditional educational settings. This paper presents a systematic case study analysis of the eight organising principles of the student-as-producer model, as adopted by university tutors. The experience of tutors implementing student as producer suggests that the model can be widely applied to benefit not only the learning and teaching experiences of higher education students and staff but also a university's research programme and its community partners.
Keywords: consultancy, learning, student as producer, research
Procedia PDF Downloads 80
13560 On the Implementation of the Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems
Authors: Hala Zaghloul, Taymoor Nazmy
Abstract:
One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model derived from a mammalian neural model, with great potential in the areas of image processing, target recognition, feature extraction, speech recognition, combinatorial optimization and compressed encoding. PCNN has unique features among types of neural networks, which make it a candidate to be an important approach for perception in cognitive systems. This work shows and emphasizes the potential of PCNN to perform different tasks related to image processing. The main obstacle preventing the direct implementation of this technique is the need to find a way to control the PCNN parameters so as to perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give significant results, and discusses approaches towards adapting the PCNN parameters to perform a specific task.
Keywords: cognitive system, image processing, segmentation, PCNN kernels
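The standard PCNN iteration referred to above couples a feeding input, a linking input and a dynamic threshold per pixel. The sketch below implements that loop on a tiny normalized image, using a wrap-around 4-neighbour linking sum via np.roll; the parameter values are illustrative defaults, not the tuned values the paper investigates.

```python
import numpy as np

def pcnn(S, steps=10, beta=0.2, aF=0.1, aL=1.0, aT=0.5,
         VF=0.5, VL=0.2, VT=20.0):
    """Standard PCNN iteration on a normalized grayscale image S.

    F: feeding input, L: linking input, U: internal activity,
    T: dynamic threshold, Y: binary pulse output. Returns how many
    times each pixel fired over `steps` iterations.
    """
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    fire_count = np.zeros_like(S)
    for _ in range(steps):
        # 4-neighbour sum of the previous pulses (wrap-around boundary)
        W = sum(np.roll(Y, s, axis) for axis in (0, 1) for s in (1, -1))
        F = np.exp(-aF) * F + VF * W + S
        L = np.exp(-aL) * L + VL * W
        U = F * (1.0 + beta * L)        # linking modulation
        Y = (U > T).astype(float)       # pulse where activity beats threshold
        T = np.exp(-aT) * T + VT * Y    # threshold jumps after a pulse, then decays
        fire_count += Y
    return fire_count

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.9, 0.1],
                [0.1, 0.1, 0.1]])
counts = pcnn(img)
```

The firing-count (or firing-time) map is what PCNN-based segmentation works from: pixels of similar intensity that are spatially linked tend to pulse in synchrony.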
Procedia PDF Downloads 281
13559 Modeling Jordan University of Science and Technology Parking Using Arena Program
Authors: T. Qasim, M. Alqawasmi, M. Hawash, M. Betar, W. Qasim
Abstract:
Over the last decade, the overpopulation of urban areas has been reflected in the services that various local institutions provide to car users in the form of car parks, which have become a daily necessity in our lives. This study focuses on the car parks at Jordan University of Science and Technology, in Irbid, Jordan, to understand the university's parking needs. Data regarding the arrival and departure times of cars and parking utilization were collected in order to find various options that the university can implement to solve its parking problem and develop an efficient car parking system. Arena software was used to simulate a parking model. This model allows measuring the different solutions to the parking problem at Jordan University of Science and Technology.
Keywords: car park, simulation, modeling, service time
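The kind of model Arena builds from arrival and departure data can be sketched as a small discrete-event simulation: Poisson arrivals, exponential parking durations, and a fixed number of spaces, with full-lot arrivals turned away. The rates and capacity below are hypothetical, not the measured campus data.

```python
import heapq
import random

def simulate_car_park(capacity, arrival_rate, mean_stay, horizon, seed=0):
    """Minimal discrete-event stand-in for the Arena parking model.

    arrival_rate is cars per minute, mean_stay and horizon are in
    minutes. Returns (cars served, cars rejected).
    """
    rng = random.Random(seed)
    departures = []                      # min-heap of scheduled departure times
    t = rng.expovariate(arrival_rate)    # first arrival
    served = rejected = 0
    while t < horizon:
        while departures and departures[0] <= t:   # free finished spaces
            heapq.heappop(departures)
        if len(departures) < capacity:
            heapq.heappush(departures, t + rng.expovariate(1.0 / mean_stay))
            served += 1
        else:
            rejected += 1
        t += rng.expovariate(arrival_rate)         # next arrival
    return served, rejected

# Hypothetical morning shift: 50 spaces, 1 car/min, 60 min average stay
served, rejected = simulate_car_park(capacity=50, arrival_rate=1.0,
                                     mean_stay=60.0, horizon=480.0)
```

Because the offered load (60 car-hours per hour) exceeds the 50 spaces, this scenario produces rejections, which is exactly the kind of what-if comparison (more spaces, shuttle lots, staggered schedules) the simulation model supports.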
Procedia PDF Downloads 189
13558 Analysis of Creative City Indicators in Isfahan City, Iran
Authors: Reza Mokhtari Malek Abadi, Mohsen Saghaei, Fatemeh Iman
Abstract:
This paper investigates the indices of a creative city in Isfahan. Its main aim is to evaluate the quantitative status of the creative city indices in Isfahan city and to analyze their dispersion and distribution across the city. To this end, the study analyzes the creative city indices in fifteen areas of Isfahan using secondary data, a questionnaire, the TOPSIS model, Shannon entropy and SPSS. On this basis, the fifteen areas of Isfahan city have been ranked on 12 factors of the creative city indices. The results show that the fifteen areas of Isfahan city do not benefit equally from the creative indices and that there is much difference between the areas of the city.
Keywords: grading, creative city, creative city evaluation indicators, regional planning model
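The TOPSIS ranking step used above is compact enough to show in full: normalize the decision matrix, weight it, and score each area by its closeness to the ideal solution. The three-area, three-criterion matrix and weights below are made up for illustration, not the Isfahan survey data (in the paper the weights would come from Shannon entropy).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """TOPSIS closeness scores for alternatives (rows) vs criteria (cols).

    `benefit[j]` is True if higher values of criterion j are better,
    False for cost-type criteria. Scores lie in [0, 1]; higher is better.
    """
    M = np.asarray(matrix, float)
    M = M / np.sqrt((M ** 2).sum(axis=0))          # vector normalization
    V = M * np.asarray(weights, float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # relative closeness

# Three areas scored on three criteria (the last criterion is a cost)
matrix = [[7, 9, 9],
          [8, 7, 8],
          [9, 6, 3]]
weights = [0.4, 0.3, 0.3]
benefit = [True, True, False]
scores = topsis(matrix, weights, benefit)
ranking = np.argsort(scores)[::-1]                 # best area first
```

In the study, the rows would be the fifteen urban areas and the columns the 12 creative-city factors.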
Procedia PDF Downloads 474
13557 Students' Ability Analysis Methods, Devices, Electronic Equipment and Storage Media Design
Authors: Dequn Teng, Tianshuo Yang, Mingrui Wang, Qiuyu Chen, Xiao Wang, Katie Atkinson
Abstract:
Currently, many students are at a loss in the university due to the complex environment within the campus, where information is isolated, with few interactions between its sources. However, if the on-campus resources are gathered and combined with artificial intelligence modelling techniques, there will be a bridge not only for students to understand themselves but also for teachers to understand students, providing a much more efficient approach to education. The objective of this paper is to provide a competency-level analysis method, apparatus, electronic equipment, and storage medium. It selects a user's target competency-level analysis model from a plurality of predefined candidate models by obtaining the user's promotion target parameters, which include at least one of the following: target profession, target industry, and target company. According to these parameters, the model analyzes and determines the user's ability level, realizing a quantitative and personalized analysis that helps users objectively position their ability level.
Keywords: artificial intelligence, model, university, education, recommendation system, evaluation, job hunting
Procedia PDF Downloads 146
13556 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
A cryosorption pump is the best solution for achieving a clean, vibration-free ultra-high vacuum, and its operation is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel, which is cooled with the help of a cryogen or cryocooler; (b) an adsorbent, which adsorbs the gas molecules; (c) an epoxy, which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. We have therefore attempted to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive was measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in this temperature range. We also propose an analytical heat conduction model for the thermal conductivity of the composite, in which filler particles such as graphene are randomly distributed in a base matrix of epoxy. The developed model considers the completely random spatial distribution of filler particles, described by a binomial distribution. The results obtained with the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and the anisotropic regions over the required temperature range from 4.5 K to 7 K.
Due to its non-empirical nature, the proposed model will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
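Among the "other established models" such a binomial-distribution model would be compared against, a classical baseline is the Maxwell-Eucken estimate for dilute spherical fillers. The sketch below implements that reference formula only; it is not the paper's proposed model, and the conductivity values are rough illustrations rather than the measured cryogenic data.

```python
def maxwell_eucken(k_matrix, k_filler, phi):
    """Effective conductivity of a matrix with dilute spherical fillers.

    Classical Maxwell-Eucken estimate; `phi` is the filler volume
    fraction, and k_matrix, k_filler are the phase conductivities
    (any consistent units, e.g. W/m-K).
    """
    num = 2 * k_matrix + k_filler + 2 * phi * (k_filler - k_matrix)
    den = 2 * k_matrix + k_filler - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# Illustrative values: epoxy of order 0.05 W/m-K at cryogenic temperature,
# with a filler conductivity taken orders of magnitude higher
k_eff = maxwell_eucken(k_matrix=0.05, k_filler=100.0, phi=0.05)
```

At zero filler fraction the formula collapses to the matrix conductivity, and adding a more conductive filler raises the effective value, the qualitative behaviour the graphene loading is meant to exploit.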
Procedia PDF Downloads 92
13555 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights into this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. MOLA is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy surface model of Mars. The MOLA data were interpolated using the kriging technique. Corresponding tie points were digitized from both datasets and employed to co-register them using GIS analysis tools. Three different 3D-to-2D transformation models were employed: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters through least squares adjustment, and check points (ChkPts), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. For the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten improved the accuracy of the results by about two and a half meters; further increases did not improve the results significantly. The 3D-to-2D transformation models provided an accuracy of two to three meters, with the best results obtained using the DLT model; here, too, increasing the number of GCPs had no substantial effect. The results support the use of the DLT model, as it provides the accuracy required by the ASPRS large-scale mapping standards. However, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
Keywords: Mars, photogrammetry, MOLA, HiRISE
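The DLT maps 3D ground coordinates to 2D image coordinates with 11 parameters; each GCP contributes two linear equations, so at least six GCPs are needed. A minimal least-squares sketch on synthetic points (the parameter values and point coordinates are invented for illustration, not taken from the HiRISE/MOLA data):

```python
import numpy as np

def fit_dlt(ground, image):
    """Estimate the 11 DLT parameters from >= 6 ground control points.
    ground: (n, 3) object-space coordinates; image: (n, 2) image coordinates."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(ground, image):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.extend([x, y])
    L, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return L

def apply_dlt(L, point):
    """Project a 3D point to image coordinates with DLT parameters L."""
    X, Y, Z = point
    w = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / w
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / w
    return x, y

# Synthetic check: project invented GCPs with known parameters, then recover them.
rng = np.random.default_rng(0)
L_true = np.array([2.0, 0.1, 0.3, 5.0, 0.2, 1.8, 0.4, -3.0, 1e-4, 2e-4, 1e-4])
gcps = rng.uniform(0, 100, size=(8, 3))
obs = np.array([apply_dlt(L_true, p) for p in gcps])
L_est = fit_dlt(gcps, obs)
```

In practice the recovered parameters would then be assessed on held-out check points, exactly as the RMSE evaluation above describes.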
Procedia PDF Downloads 60
13554 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions
Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald
Abstract:
Head injury in childhood is a common cause of death and permanent disability. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations and comparing head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.
Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects
Procedia PDF Downloads 327
13553 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis
Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson
Abstract:
Temperature is one of the most important climatic factors for crop production. However, extreme temperatures cause droughts, heat waves and cold spells, which have various consequences for human life, agriculture and the environment in general. It is therefore necessary to provide reliable information on such incidents and on the probability of extreme events occurring. In the 21st century, the world faces a large number of threats, especially from climate change driven by global warming and environmental degradation. Rising temperatures have a direct effect on decreasing rainfall, which affects crop growth and development and in turn reduces crop yield and quality. Countries heavily dependent on agriculture tend to suffer the most and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To this end, daily temperature data spanning the period from January 2000 to December 2017, recorded at nine weather stations and collected from the Rwanda Meteorological Agency, were used. Two methods, namely the block maxima (BM) method and the peaks-over-threshold (POT) method, were applied to model and analyse extreme temperatures. Model parameters were estimated, and extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions to be the most appropriate models for the annual maxima of daily temperature. The results show that temperatures will continue to increase, as indicated by the estimated return levels.
Keywords: climate change, global warming, extreme value theory, Rwanda, temperature, generalised extreme value distribution, generalised Pareto distribution
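The block-maxima workflow described above — fit a GEV family to annual maxima, then read off return levels — can be sketched with scipy. The data below are synthetic stand-ins for annual maxima of daily temperature, not the Rwandan station records:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for annual maxima of daily temperature (deg C); the
# real study uses station records from the Rwanda Meteorological Agency.
rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(c=0.0, loc=30.0, scale=1.5, size=500,
                               random_state=rng)

# Block-maxima step: fit the GEV family (shape c = 0 recovers the Gumbel
# case reported as the best fit in the study).
c, loc, scale = genextreme.fit(annual_maxima)

def return_level(T):
    """T-year return level: the quantile exceeded once every T blocks on average."""
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

print(return_level(10), return_level(50))
```

The POT counterpart would fit a generalised Pareto distribution to exceedances over a high threshold instead of to block maxima.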
Procedia PDF Downloads 187
13552 Effect of Parameters for Exponential Loads on Voltage Transmission Line with Compensation
Authors: Benalia Nadia, Bensiali Nadia, Zerzouri Noura
Abstract:
This paper presents an analysis of the effects of the exponents np and nq of the exponential load model on the transmission line voltage profile, transferred power and transmission losses for different shunt compensation sizes. For different values of np and nq, for which the active and reactive powers vary with the terminal voltage in exponential form, variations of the load voltage for different sizes of shunt capacitors are simulated on a simple two-bus power system using the Matlab SimPowerSystems toolbox. It is observed that the required compensation level is significantly affected by the voltage sensitivities of the loads.
Keywords: static load model, shunt compensation, transmission system, exponential load model
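In the exponential load model, the powers scale with terminal voltage as P = P0·(V/V0)^np and Q = Q0·(V/V0)^nq, so np = nq = 0 gives constant power, 1 constant current, and 2 constant impedance. A minimal numeric sketch (the base power values are illustrative, not from the paper's test system):

```python
def exponential_load(v_pu, p0, q0, n_p, n_q, v0_pu=1.0):
    """Active/reactive power drawn by an exponential load at voltage v_pu.
    n_p = n_q = 0: constant power; 1: constant current; 2: constant impedance."""
    ratio = v_pu / v0_pu
    return p0 * ratio ** n_p, q0 * ratio ** n_q

# Illustrative: the same 0.95 pu voltage sag relieves a constant-impedance
# load far more than a constant-power one, which is why the required
# compensation depends on the load's voltage sensitivity.
for n in (0, 1, 2):
    p, q = exponential_load(0.95, 100.0, 50.0, n, n)
    print(n, round(p, 2), round(q, 2))
```

This voltage dependence is what couples the choice of np and nq to the shunt capacitor size in the simulations.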
Procedia PDF Downloads 369
13551 Mathematical Modeling of Thin Layer Drying Behavior of Bhimkol (Musa balbisiana) Pulp
Authors: Ritesh Watharkar, Sourabh Chakraborty, Brijesh Srivastava
Abstract:
Reduction of the water content of fruits and vegetables using different drying techniques is widely employed to prolong the shelf life of these food commodities. During drying, heat transfer occurs inside the sample by conduction, while mass transfer takes place by diffusion, in accordance with the temperature and moisture concentration gradients, respectively. This study was undertaken to model the thin-layer drying behavior of Bhimkol pulp. Drying was conducted in a tray drier at 50 °C with 5, 10 and 15% concentrations of added maltodextrin. The drying experiments were performed with a 5 mm layer thickness and a constant air velocity of 0.5 m/s. The drying data were fitted to different thin-layer drying models found in the literature. Comparison of the fitted models, based on the highest R² (0.9917), lowest RMSE (0.03201) and lowest SSE (0.01537), revealed the Midilli equation as the best-fitted model for thin-layer drying with a 10% concentration of maltodextrin. The effective diffusivity was estimated from the solution of Fick’s law of diffusion and found to be in the range of 3.0396 × 10⁻⁹ to 5.0661 × 10⁻⁹. The addition of maltodextrin reduced the drying time compared with the raw pulp.
Keywords: Bhimkol, diffusivity, maltodextrin, Midilli model
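The Midilli model expresses the moisture ratio as MR = a·exp(−k·tⁿ) + b·t, and fitting it amounts to nonlinear least squares. A curve-fit sketch on synthetic drying data follows; the parameter values are invented for illustration, not the estimates reported for Bhimkol pulp:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: MR = a * exp(-k * t**n) + b * t."""
    return a * np.exp(-k * t ** n) + b * t

# Synthetic moisture-ratio data generated with invented parameters
# (drying time t in minutes).
t = np.linspace(0.0, 60.0, 61)
mr = midilli(t, 1.0, 0.05, 1.2, 0.0005)

# Fit from a deliberately offset initial guess.
popt, _ = curve_fit(midilli, t, mr, p0=[0.9, 0.03, 1.0, 0.001])
print(popt)  # recovered (a, k, n, b)
```

On real data, the fitted curve would then be scored with the same R², RMSE and SSE criteria used in the study to rank candidate models.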
Procedia PDF Downloads 214
13550 Experimental Investigation and Numerical Simulations of the Cylindrical Machining of a Ti-6Al-4V Tree
Authors: Mohamed Sahli, David Bassir, Thierry Barriere, Xavier Roizard
Abstract:
Predicting the behaviour of the Ti-6Al-4V alloy during turning operations is very important for the choice of suitable cutting tools and machining strategies. In this study, a 3D model with thermo-mechanical coupling is proposed to study the influence of cutting parameters and lubrication on the performance of cutting tools. The constants of the Johnson-Cook constitutive model for the Ti-6Al-4V alloy were identified by inverse analysis based on the parameters of the orthogonal cutting process. Numerical simulations of the finishing machining operation were then developed and experimentally validated for the cylindrical stock removal stage with the finishing cutting tool.
Keywords: titanium turning, cutting tools, FE simulation, chip
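The Johnson-Cook flow stress combines strain hardening, strain-rate sensitivity and thermal softening: σ = (A + B·εⁿ)(1 + C·ln(ε̇/ε̇₀))(1 − T*ᵐ), with T* = (T − T_room)/(T_melt − T_room). A sketch follows; the constants are illustrative literature-style values for Ti-6Al-4V, not the constants identified by the inverse analysis in this study:

```python
import math

def johnson_cook(strain, strain_rate, temp_k,
                 A=997.9, B=653.1, n=0.45, C=0.0198, m=0.7,
                 ref_rate=1.0, t_room=293.0, t_melt=1878.0):
    """Johnson-Cook flow stress (MPa). Constants are illustrative
    literature-style values for Ti-6Al-4V, not those from this study."""
    hardening = A + B * strain ** n
    rate_term = 1.0 + C * math.log(strain_rate / ref_rate)
    t_star = (temp_k - t_room) / (t_melt - t_room)
    softening = 1.0 - t_star ** m
    return hardening * rate_term * softening

# Flow stress rises with strain and strain rate, and collapses as the
# cutting-zone temperature approaches the melting point.
print(johnson_cook(0.1, 1.0, 293.0))
print(johnson_cook(0.1, 1000.0, 293.0))
print(johnson_cook(0.1, 1000.0, 1200.0))
```

An inverse identification, as used in the study, would adjust (A, B, n, C, m) until simulated orthogonal-cutting forces match the measured ones.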
Procedia PDF Downloads 176
13549 Support Vector Regression Combined with Different Optimization Algorithms to Predict Global Solar Radiation on Horizontal Surfaces in Algeria
Authors: Laidi Maamar, Achwak Madani, Abdellah El Ahdj Abdellah
Abstract:
The aim of this work is to use support vector regression (SVR) combined with the dragonfly, firefly, bee colony and particle swarm optimization algorithms to predict global solar radiation on horizontal surfaces in several cities in Algeria. Combining these optimization algorithms with SVR principally aims to enhance accuracy by fine-tuning the parameters, speeding up the convergence of the SVR model, and exploring a larger search space efficiently; the tuned parameters are the regularization parameter (C), the kernel parameters, and the epsilon parameter. The goal is thereby to improve the generalization and predictive accuracy of the SVR model. Overall, the aim is to leverage the strengths of both SVR and the optimization algorithms to create a more powerful and effective regression model for various cities and different climate conditions. The results demonstrate close agreement between predicted and measured data across several metrics. In summary, SVR has proven to be a valuable tool for modelling global solar radiation, offering accurate predictions and demonstrating versatility when combined with other algorithms or used in hybrid forecasting models.
Keywords: support vector regression (SVR), optimization algorithms, global solar radiation prediction, hybrid forecasting models
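The core idea — searching over C, epsilon and the RBF kernel width for the best cross-validated SVR fit — can be sketched with a plain random search in place of the metaheuristics used in the paper, which need dedicated libraries. The data here are a synthetic stand-in, not Algerian radiation records:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a solar-radiation series: a smooth cycle plus noise.
rng = np.random.default_rng(0)
X = np.linspace(0, 4 * np.pi, 300).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(300)
perm = rng.permutation(len(y))            # shuffle so CV folds are random
X, y = X[perm], y[perm]

# Random search over the three SVR hyperparameters tuned in the paper
# (a metaheuristic such as PSO would guide this sampling instead).
best_score, best_params = -np.inf, None
for _ in range(30):
    params = {"C": 10 ** rng.uniform(-1, 3),
              "epsilon": 10 ** rng.uniform(-3, 0),
              "gamma": 10 ** rng.uniform(-2, 1)}
    score = cross_val_score(SVR(kernel="rbf", **params), X, y, cv=5).mean()
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```

Swapping the random sampler for a population-based update rule (velocity updates for PSO, attraction moves for firefly, and so on) yields the hybrid schemes studied in the paper.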
Procedia PDF Downloads 39
13548 Green Supply Chain Network Optimization with Internet of Things
Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen
Abstract:
Green supply chain management is gaining growing interest among researchers and supply chain practitioners. The concept is to integrate environmental thinking into supply chain management: it is a systematic concept that emphasizes environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrating Internet of Things applications. The Internet of Things provides precise and accurate information about end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres, and sales and collection centres, while the reverse flow includes the sales and collection centres, a disassembly centre, and recycling and disposal centres. The sales and collection centres sell new products transhipped from the factories via the distribution centres and also receive end-of-life products according to their value level. We describe green logistics activities through specific examples, including the recycling of returned products and the reduction of CO2 emissions, and illustrate the different transportation choices between echelons according to their CO2 emissions. The problem is formulated as a mixed integer linear programming model to address the green supply chain problems arising from environmental awareness and responsibilities. The model is solved using the GAMS package, and numerical examples are presented to illustrate the efficiency of the proposed model.
Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling
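The emission-aware shipment choice at the heart of such a network model can be illustrated with a tiny transportation problem. The full study formulates a mixed-integer program solved in GAMS, so this scipy sketch, with invented costs and emission factors, is only the linear core:

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 2 plants -> 2 distribution centres. Each route carries a
# monetary cost plus a CO2 cost (emission factor times a carbon price).
transport_cost = np.array([[3.0, 5.0], [4.5, 2.0]])
co2_cost = np.array([[1.0, 1.0], [0.5, 1.0]])
c = (transport_cost + co2_cost).ravel()      # x11, x12, x21, x22

supply = [30.0, 40.0]                        # plant capacities
demand = [25.0, 45.0]                        # DC requirements

A_ub = [[1, 1, 0, 0], [0, 0, 1, 1]]          # shipments out of each plant
A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]          # shipments into each DC
res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2), res.fun)
```

Adding binary facility-opening variables and echelon-specific transport modes turns this linear core into the mixed-integer model described above.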
Procedia PDF Downloads 331
13547 Investigation of Damage in Glass Subjected to Static Indentation Using Continuum Damage Mechanics
Authors: J. Ismail, F. Zaïri, M. Naït-Abdelaziz, Z. Azari
Abstract:
In this work, a combined approach of continuum damage mechanics (CDM) and fracture mechanics is applied to model the behaviour of a glass plate under static indentation. A spherical indenter is used, and a CDM-based constitutive model with an anisotropic damage tensor was selected and implemented in a finite element code to study the damage of glass. Various regions with critical damage values were predicted, in good agreement with experimental observations in the literature. In these regions, the directions of crack propagation, for cracks initiating both on the surface and in the bulk, were predicted using the strain energy density factor.
Keywords: finite element modeling, continuum damage mechanics, indentation, cracks
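The basic CDM mechanism — stiffness degraded by an irreversible damage variable driven by the strain history — can be caricatured in one dimension with a scalar damage law and exponential softening; the study itself uses an anisotropic damage tensor, and every numerical value below is an illustrative assumption:

```python
import numpy as np

# Illustrative constants (not calibrated to glass experiments).
E = 70e3          # Young's modulus, MPa (typical order for glass)
EPS_0 = 5e-4      # damage-initiation strain (assumed)
EPS_F = 2e-3      # softening-rate parameter (assumed)

def damage(kappa):
    """Scalar damage D in [0, 1) driven by the history variable kappa,
    the largest equivalent strain reached so far (damage never heals)."""
    if kappa <= EPS_0:
        return 0.0
    return 1.0 - (EPS_0 / kappa) * np.exp(-(kappa - EPS_0) / EPS_F)

def stress(strain_path):
    """Stress response along a monotonically stored strain history:
    sigma = (1 - D) * E * eps."""
    kappa, out = 0.0, []
    for eps in strain_path:
        kappa = max(kappa, eps)
        out.append((1.0 - damage(kappa)) * E * eps)
    return out

path = np.linspace(0.0, 4e-3, 9)
print([round(s, 1) for s in stress(path)])
```

In the full model, the scalar D becomes a tensor so that stiffness degrades differently along the predicted crack directions.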
Procedia PDF Downloads 424
13546 Modeling of the Biodegradation Performance of a Membrane Bioreactor to Enhance Water Reuse in Agri-food Industry - Poultry Slaughterhouse as an Example
Authors: Masmoudi Jabri Khaoula, Zitouni Hana, Bousselmi Latifa, Akrout Hanen
Abstract:
Mathematical modeling has become an essential tool for sustainable wastewater management, particularly for the simulation and optimization of the complex processes involved in activated sludge systems. In this context, the activated sludge model ASM3h was used for the simulation of a membrane bioreactor (MBR), as this process integrates biological wastewater treatment with physical separation by membrane filtration. In this study, an MBR with a working volume of 12.5 L was fed continuously with poultry slaughterhouse wastewater (PSWW) for 50 days at a feed rate of 2 L/h, corresponding to a hydraulic retention time (HRT) of 6.25 h. Throughout its operation, high removal efficiency was observed for organic pollutants, with 84% removal in terms of COD. Moreover, the MBR generated a treated effluent that meets the limits for discharge into the public sewer according to the Tunisian standards set in March 2018. For the nitrogenous compounds, average concentrations of nitrate and nitrite in the permeate reached 0.26 ± 0.30 mg L⁻¹ and 2.2 ± 2.53 mg L⁻¹, respectively. The simulation of the MBR process was performed using the SIMBA software v5.0. The state variables employed in the steady-state calibration of ASM3h were determined using physical and respirometric methods. The model was calibrated using experimental data obtained during the first 20 days of MBR operation. Afterwards, the kinetic parameters of the model were adjusted, and the simulated values of COD, N-NH4+ and N-NOx were compared with those reported from the experiment. Good predictions were observed for the COD, N-NH4+ and N-NOx concentrations, with 467 g COD/m³, 110.2 g N/m³ and 3.2 g N/m³ compared to the experimental values of 436.4 g COD/m³, 114.7 g N/m³ and 3 g N/m³, respectively. For the validation of the model under dynamic simulation, the results obtained during the second treatment phase of 30 days were used.
It was demonstrated that the model simulated the conditions accurately, yielding a similar pattern for the variation of the COD concentration. On the other hand, the N-NH4+ concentration was underestimated in the simulation compared to the experimental results, and the measured N-NO3 concentrations were lower than the predicted ones. This difference could be explained by the fact that the ASM models were mainly designed for the simulation of biological processes in activated sludge systems; in addition, the autotrophic bacteria may require more treatment time to achieve complete and stable nitrification. Overall, this study demonstrated the effectiveness of mathematical modeling in predicting the performance of MBR systems with respect to organic pollution; the model can be further improved to simulate nutrient removal over a longer treatment period.
Keywords: activated sludge model (ASM3h), membrane bioreactor (MBR), poultry slaughterhouse wastewater (PSWW), reuse
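A full ASM3h run needs a dedicated simulator such as SIMBA, but the COD-removal behaviour of an MBR can be caricatured with a single-substrate Monod model in which the membrane retains all biomass; every parameter value below is invented for illustration, not a calibrated ASM3h value:

```python
from scipy.integrate import solve_ivp

# Illustrative kinetic parameters only (not calibrated ASM3h values).
MU_MAX, K_S, Y, B_DECAY = 4.0, 10.0, 0.6, 0.3   # 1/d, gCOD/m3, -, 1/d
HRT_D = 6.25 / 24.0                             # hydraulic retention time, d
S_IN = 436.0                                    # influent COD, gCOD/m3

def mbr_ode(t, z):
    """Substrate S and biomass X in a completely mixed MBR. The membrane
    retains biomass, so only substrate is washed out at dilution rate 1/HRT."""
    S, X = z
    mu = MU_MAX * S / (K_S + S)
    dS = (S_IN - S) / HRT_D - (mu / Y) * X
    dX = (mu - B_DECAY) * X
    return [dS, dX]

# Stiff start-up transient, so an implicit solver is used.
sol = solve_ivp(mbr_ode, (0.0, 30.0), [S_IN, 3000.0], method="Radau")
S_final = sol.y[0, -1]
print("COD removal:", 1.0 - S_final / S_IN)
```

The real ASM3h replaces this single Monod term with separate heterotrophic and autotrophic processes, which is precisely where the nitrification discrepancies discussed above arise.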
Procedia PDF Downloads 61
13545 The Impact of Religiosity and Ethical Senstivity on Accounting Students’ Ethical Judgement Decision
Authors: Ahmed Mohamed Alteer
Abstract:
The purpose of this paper is to develop a theoretical model for understanding the causes and motives behind auditors’ sensitivity to ethical dilemmas, using auditing students. This study considers the possibility that auditing students’ ethical judgement is affected by two individual factors, namely ethical sensitivity and religiosity. The study finds that several ethical theories and models provide a significant understanding of ethical issues, and supports the view that ethical sensitivity and religiosity may affect ethical judgement decisions among accounting students. The proposed model suggests that students’ ethical judgement is influenced by their ethical sensitivity and their religiosity; nonetheless, the influence of religiosity on ethical judgement is expected to operate via ethical sensitivity.
Keywords: accounting students, ethical sensitivity, religiosity, ethical judgement
Procedia PDF Downloads 622