Search results for: multi-factorial error modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5560

4540 The Methodology of System Modeling of Mechatronic Systems

Authors: Lakhoua Najeh

Abstract:

Aims of the work: After presenting the functionality of an example mechatronic system, a paint mixer system, we introduce the concepts of modeling and safe operation. This paper briefly discusses how to model a mechatronic system and protect its functioning, relying mainly on functional analysis and safe-operation techniques. Methods: For the study of the example mechatronic system, we use external functional analysis methods that illustrate the relationships between a mechatronic system and its external environment. We then present the Safe-Structured Analysis Design Technique (Safe-SADT) method, which allows the representation of a mechatronic system, and propose a model of operating safety and automation. This model combines a functional analysis of the mechatronic system based on the GRAFCET method (Graphe Fonctionnel de Commande des Etapes et Transitions: Step Transition Function Chart); a study of its safe operation based on the Safe-SADT method; and automation of the mechatronic system based on a software tool. Results: The expected result is a model of a mechatronic system and its safe operation. The methodology enables us to analyze the relevance of the different Safe-SADT and GRAFCET models with respect to the control and monitoring functions, and to study the means of exploiting their synergy. Conclusion: Toward a general model of a mechatronic system, a model of analysis, safe operation, and automation has been developed; we propose to validate this methodology through a case study of a paint mixer system.

Keywords: mechatronic systems, system modeling, safe operation, Safe-SADT

Procedia PDF Downloads 220
4539 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that predicts the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the Saudi population significantly well. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model; the ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
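
Since the abstract specifies only the 6-25-15 topology and the MAPE/RMSE metrics, the following is a minimal sketch of such a single-hidden-layer, multi-output regression network using scikit-learn; the synthetic data and training settings are assumptions for illustration, not the study's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1427, 6))                       # 6 inputs, as in the 6-25-15 structure
W = 5.0 * rng.normal(size=(6, 15))
Y = 100.0 + X @ W + rng.normal(0, 1.0, (1427, 15))   # 15 positive targets (body dimensions, cm)

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(25,),         # one hidden layer of 25 neurons
                   max_iter=2000, random_state=0)
ann.fit(scaler.transform(X), Y)

pred = ann.predict(scaler.transform(X))
rmse = np.sqrt(np.mean((pred - Y) ** 2))
mape = np.mean(np.abs((pred - Y) / Y))               # MAPE as a fraction, like 0.0348
print(f"RMSE = {rmse:.3f}, MAPE = {mape:.4f}")
```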

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 555
4538 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work is aimed at enhancing signal reception in a mobile radio network and minimizing outage probability using adaptive beamforming antenna arrays. An empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement covered Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of this research. The Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van, with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular-location measurements from the reference network were made with the help of the Global Positioning System (GPS). The other equipment used included transmitting-equipment measurement software (Temsoftware), laptops, and log files, which recorded received signal strength versus distance from the base station. Results obtained from the real-time experiment were about 11%, showing that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, in terms of a significant reduction in Bit Error Rate, which implies improved performance of the mobile radio network. In addition, this work was not limited to empirical measurement: enhanced mathematical models were developed and implemented as reference models for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These enhanced models were validated using a MATLAB (version 7.6.3.35) program and compared with the conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
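
The abstract does not spell out the beamforming algorithm, so as a hypothetical illustration the sketch below computes delay-and-sum weights for a uniform linear array steered toward a desired user and evaluates the resulting array gain, which is the basic mechanism an adaptive array exploits to suppress interference and reduce Bit Error Rate. The array size, spacing, and angles are invented.

```python
import numpy as np

def steering_vector(theta_deg, n_elements=8, spacing_wl=0.5):
    """Array response of a uniform linear array (element spacing in wavelengths)."""
    n = np.arange(n_elements)
    return np.exp(-2j * np.pi * spacing_wl * n * np.sin(np.radians(theta_deg)))

desired, interferer = 20.0, -40.0               # hypothetical arrival angles (degrees)
w = steering_vector(desired) / 8                # delay-and-sum weights steered at the user

for theta in (desired, interferer):
    gain = np.abs(np.vdot(w, steering_vector(theta))) ** 2
    print(f"angle {theta:+6.1f} deg -> array gain {10 * np.log10(gain):6.1f} dB")
```

In a truly adaptive implementation, the weights would be updated from the received data (e.g., by an LMS-type rule) so that nulls track the interferers.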

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 16
4537 The Use of Creativity to Nudge Students Into Heutagogy: An Implementation in Graduate Business Education

Authors: Ricardo Bragança, Tom Vinaimont

Abstract:

This paper discusses the introduction of processes of self-determined learning (heutagogy) into a graduate course on financial modeling, using elements of entangled pedagogy and Biggs' constructive alignment. To encourage learners to take control of their own learning journey and develop critical-thinking and problem-solving skills, each session in the course receives tailor-made, media-enhanced pedagogical assets. The design of these assets specifically supports entangled pedagogy, which opposes technological or pedagogical determinism in favor of the collaborative integration of pedagogy and technology. The media assets for each of the ten sessions in this course consist of three components. The first is a game-cutscene-style cinematographic representation that introduces the context of the session. The second is a character from an open-source-styled community who encourages self-determined learning. The third is a character representing the in-person instructor, who aligns learning outcomes and assessment tasks, using Biggs' constructive alignment, with the cinematographic and open-source-styled components. In essence, the course's metamorphosis helps students apply the concepts they have studied to actual financial modeling problems. The audio-visual media assets create a storyline throughout the course based on gamified and real-world applications, encouraging student engagement and interaction. The structured entanglement of pedagogy and technology also guides the instructor in designing the in-class interactions and directs the focus to outcomes and assessments. The transformation of this graduate course in financial modeling led to an institutional teaching award in 2021, and it may serve as a model for courses and programs in many disciplines, helping with the integration of intended learning outcomes, constructive alignment, and Assurance of Learning.

Keywords: innovative education, active learning, entangled pedagogy, heutagogy, constructive alignment, project based learning, financial modeling, graduate business education

Procedia PDF Downloads 61
4536 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

At the level of national statistical institutes, there is a large volume of data whose format generally conditions how the information it contains is published. Each household or business data-collection project includes its own dissemination platform. Thus, the dissemination methods used so far do not promote rapid access to information and, above all, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and to offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and the general census of the population of Senegal.
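
The abstract names no specific vocabulary; a common choice for publishing statistical observations as Linked Open Data is the W3C RDF Data Cube vocabulary. Below is a minimal, hypothetical sketch using rdflib, with the namespace, indicator, and value invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")   # W3C RDF Data Cube vocabulary
EX = Namespace("http://stats.example.sn/")            # hypothetical national namespace

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs/employment/2019"]                       # one statistical observation
g.add((obs, RDF.type, QB.Observation))
g.add((obs, EX.indicator, EX.employmentRate))
g.add((obs, EX.refArea, EX.Senegal))
g.add((obs, EX.value, Literal("42.3", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))                   # Turtle output ready for a LOD platform
```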

Keywords: Semantic Web, linked open data, database, statistics

Procedia PDF Downloads 164
4535 Application of Watershed Modeling System for Urbanization Management in Tabuk Area, Saudi Arabia

Authors: Abd-Alrahman Embaby, Ayman Abu Halawa, Medhat Ramadan

Abstract:

Water infiltrating into the subsurface activates expansive soil in a localized manner, leading to differential heave and destruction of constructions. The Watershed Modeling System (WMS) and the Hydrologic Engineering Center (HEC-1) model are used to delineate and identify the drainage system and basin morphometry in the Tabuk area, where flash floods and accumulation of water may take place. Eight drainage basins affect Tabuk city; the hazard in three of them is expected to be high. The flash-flood and surface-runoff behavior in these basins is important for any protection project. It was found that the risky areas containing Tabuk shale could expand when exposed to flash floods and/or surface runoff. The resident neighborhoods in the middle of Tabuk city affected by surface runoff from the tributaries of the Wadi Abu Nishayfah, Na'am, and Atanah basins represent high-risk zones. These high-risk neighborhoods are Al Qadsiyah, Al Maseif, Arrwdah, Al Nakhil, and Al Rajhi; new construction in these districts should be avoided. The low- or very-low-risk zones include the western and eastern districts. The western side of the city lies upstream of the small basin and is suitable for future urban extension. The direction of surface-runoff flow or storm-water drain discharge should be away from Tabuk city: the quicker the water can flow out, the better.
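
HEC-1 offers several rainfall-loss options; one widely used choice is the SCS curve-number method. The sketch below implements that standard formula as a rough illustration of how runoff depth from a design storm over basins like these might be screened; the curve number and rainfall depth are assumptions, not values from the study.

```python
def scs_runoff_mm(rain_mm: float, curve_number: float) -> float:
    """SCS curve-number runoff depth Q = (P - 0.2S)^2 / (P + 0.8S), metric units."""
    s = 25400.0 / curve_number - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                            # initial abstraction
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s)

# Hypothetical 50 mm storm over sparsely vegetated desert soil (CN ~ 90)
print(f"runoff depth: {scs_runoff_mm(50.0, 90.0):.1f} mm")
```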

Keywords: digital elevation model (DEM), flash floods, Saudi Arabia, Tabuk City, watershed modeling system (WMS)

Procedia PDF Downloads 245
4534 Design and Preliminary Evaluation of Benzoxazolone-Based Agents for Targeting Mitochondrial-Located Translocator Protein

Authors: Nidhi Chadha, A. K. Tiwari, Marilyn D. Milton, Anil K. Mishra

Abstract:

Translocator protein (18 kDa, TSPO) is highly expressed during microglial activation in neuroinflammation. Although a number of PET ligands have been developed for visualizing activated microglia, one advantageous approach is to develop a potential optical imaging (OI) probe. Our study involves the computational screening, synthesis, and evaluation of TSPO ligands for various imaging modalities, namely PET/SPECT/optical. The initial computational screening involves pharmacophore modeling from a designed library containing oxo-benzooxazol-3-yl-N-phenyl-acetamide groups, and synthesis to assess the efficacy of these compounds as multimodal imaging probes. Structural modeling of the monomer, the Ala147Thr mutant, and parallel and anti-parallel TSPO dimers was performed, and docking analysis was carried out for distinct binding sites. The computational analysis showed a pattern of variable binding profiles of known diagnostic ligands and NBMP via interactions with conserved residues, along with TSPO's natural Ala147→Thr polymorphism, which altered the binding affinity owing to considerable changes in the tertiary structure. Preliminary in vitro binding studies show binding affinities in the range of 1-5 nM, and selectivity was also confirmed by blocking studies. In summary, this scaffold was found to be a potential probe for TSPO imaging owing to its ease of synthesis, appropriate lipophilicity, and ability to reach specific regions of the brain.

Keywords: TSPO, molecular modeling, imaging, docking

Procedia PDF Downloads 443
4533 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model

Authors: Dojoon Jung, Kiwoong Moon, Joong Lee

Abstract:

The height of a criminal is one of the important clues that can determine the scope of a suspect search or exclude a suspect from the search target. Although measuring a criminal's height from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height can be measured. However, it is still difficult to measure height from CCTV footage with this non-contact measurement method because of variables such as the position, posture, and head shape of the person. In this paper, we propose a method of matching the CCTV footage with the 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured using the 3D human model in various scenes of the person in the CCTV footage, and the measurement value of the target person is corrected by the measurement error obtained from replayed CCTV footage of a reference person. We tested the method on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurement values with 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and that this method is effective for measuring a person's height in CCTV footage.
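
The correction step described above (using replayed footage of reference persons with known heights) can be expressed in a few lines; this is a simplified sketch with invented numbers, not the authors' implementation.

```python
import statistics

def corrected_height(target_measured_cm, reference_true_cm, reference_measured_cm):
    """Shift the target's measured height by the mean error observed on the references."""
    errors = [t - m for t, m in zip(reference_true_cm, reference_measured_cm)]
    return target_measured_cm + statistics.mean(errors)

# Hypothetical example: five reference persons re-measured in replayed CCTV footage
ref_true = [170.0, 165.5, 180.2, 158.0, 175.4]
ref_measured = [169.3, 165.1, 179.5, 157.6, 174.8]
print(f"corrected height: {corrected_height(172.0, ref_true, ref_measured):.1f} cm")
```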

Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model

Procedia PDF Downloads 238
4532 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to support decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a Model-Driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second through transformation rules written in the Query/View/Transformation (QVT) language. This proposal is validated by applying our approach to generate the multidimensional schema of a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are highly linked to the vision and the strategies of an organization.
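
QVT itself is beyond a short listing, but the core mapping idea (numeric UML attributes become measures of a fact, associated classes become dimensions) can be sketched with plain data structures. The toy metamodel below is an assumption for illustration, not the paper's rule set.

```python
# Toy UML class: name, attributes (name -> type), associated class names
uml_class = {
    "name": "Sale",
    "attributes": {"amount": "Integer", "discount": "Real", "date": "String"},
    "associations": ["Customer", "Product", "Store"],
}

def to_multidimensional(cls):
    """Map a UML class to a star-schema fragment: numeric attributes -> measures,
    associated classes -> dimensions (a simplified stand-in for QVT relations)."""
    numeric = {"Integer", "Real"}
    return {
        "fact": cls["name"],
        "measures": [a for a, t in cls["attributes"].items() if t in numeric],
        "dimensions": cls["associations"],
    }

print(to_multidimensional(uml_class))
# {'fact': 'Sale', 'measures': ['amount', 'discount'], 'dimensions': ['Customer', 'Product', 'Store']}
```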

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 142
4531 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints, and objectives are involved, a new method is presented in this study based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted-sum approach, and a novel constraint-handling strategy. Since conventional constraint-handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced that is both extremely user-friendly and effective. A case study from industry is investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find the near-Pareto-optimal front with higher accuracy than the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is eliminated in this algorithm.
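
The paper's exact ranking rules are not given in the abstract, so the sketch below combines a weighted-sum scalarization with Deb-style feasibility rules (feasible beats infeasible; among infeasible points, lower violation wins) as a plausible stand-in, applied to a toy constrained two-objective problem rather than a heat-exchanger model.

```python
import numpy as np

rng = np.random.default_rng(1)
W = np.array([0.5, 0.5])                        # weighted-sum objective weights

def objectives(x):                              # toy two-objective problem
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2])

def violation(x):                               # constraint: x0 + x1 >= 0.5
    return max(0.0, 0.5 - (x[0] + x[1]))

def better(x, y):
    """Feasibility-based ranking: feasibility first, then the weighted objective sum."""
    vx, vy = violation(x), violation(y)
    if vx == 0 and vy == 0:
        return W @ objectives(x) < W @ objectives(y)
    return vx < vy

n, dim = 30, 2
pos = rng.uniform(-1, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
gbest = min(pbest, key=lambda p: (violation(p), W @ objectives(p))).copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(n):
        if better(pos[i], pbest[i]):
            pbest[i] = pos[i]
            if better(pbest[i], gbest):
                gbest = pbest[i].copy()

print("best point:", gbest, "objectives:", objectives(gbest))
```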

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 543
4530 Acoustic Modeling of a Data Center with a Hot Aisle Containment System

Authors: Arshad Alfoqaha, Seth Bard, Dustin Demetriou

Abstract:

A new multi-physics acoustic modeling approach using ANSYS Mechanical FEA and FLUENT CFD methods is developed for modeling servers mounted in racks, such as IBM Z and IBM Power Systems, in data centers. This new approach allows users to determine the thermal and acoustic conditions that people are exposed to within the data center. The sound pressure level (SPL) exposure of a human working inside a hot aisle containment system in the data center is studied. The SPL is analyzed at the noise source, at the human body, on the rack walls, on the containment walls, and on the ceiling and flooring plenum walls. In the acoustic CFD simulation, it is assumed that a four-inch-diameter sphere with monopole acoustic radiation, placed in the middle of each rack, provides a single-source representation of all noise sources within the rack. The Ffowcs Williams-Hawkings (FWH) acoustic model is employed. The target frequency is 1000 Hz, and the total simulation time for the transient analysis is 1.4 seconds, with a very small time step of 3e-5 seconds and 10 iterations to ensure convergence and accuracy. A User Defined Function (UDF) is developed to accurately simulate the acoustic noise source, and a dynamic mesh is applied to ensure acoustic wave propagation. An initial validation of the acoustic CFD simulation against the closed-form solution for spherical propagation of an acoustic point source is performed.
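
The closed-form solution used for the initial validation is the standard free-space monopole point source. In the notation below (the symbols are an assumption, since the abstract gives none), A is the source amplitude, r the distance, c the speed of sound, and p_ref = 20 µPa:

```latex
p(r, t) = \frac{A}{r}\,\sin\!\big(\omega\,(t - r/c)\big),
\qquad
\mathrm{SPL}(r) = 20 \log_{10}\!\left(\frac{p_{\mathrm{rms}}(r)}{p_{\mathrm{ref}}}\right),
\quad
p_{\mathrm{rms}}(r) = \frac{|A|}{\sqrt{2}\,r}
```

The SPL therefore falls by 6 dB per doubling of distance, which is the behavior the CFD monopole should reproduce.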

Keywords: data centers, FLUENT, acoustics, sound pressure level, SPL, hot aisle containment, IBM

Procedia PDF Downloads 159
4529 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six Sigma is a metric that quantifies the performance of a process as a rate of defects per million opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance, with evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirement of the specification. Methodology: Twenty-one analytes were chosen for this study: alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid, and urea. Total error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). Bias was calculated from the end-cycle report of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC). Sigma was calculated using the formula: Sigma = (Total Error - Bias) / CV. Analytical performance was graded on the sigma value: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma > 3 is satisfactory, and sigma < 3 is poor performance. Results: Based on the calculation, 76% of the analytes are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatinine kinase, glucose, LDH, magnesium, potassium, triglyceride, and uric acid), 14% are excellent (calcium, protein, and urea), and 10% (chloride and sodium) require IQC to be performed more frequently, per day. Conclusion: Based on this study, IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
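
The sigma computation stated above is a one-liner; the sketch below applies it to one hypothetical analyte (the CLIA total error, RCPA bias, and IQC CV values are invented for illustration).

```python
def sigma_metric(total_error_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (Total Error - Bias) / CV, all expressed in percent."""
    return (total_error_pct - abs(bias_pct)) / cv_pct

def grade(sigma: float) -> str:
    if sigma >= 6: return "world class"
    if sigma >= 5: return "excellent"
    if sigma >= 4: return "good"
    if sigma >= 3: return "satisfactory"
    return "poor"

# Hypothetical glucose figures: allowable total error 10%, bias 1.2%, CV 1.4%
s = sigma_metric(10.0, 1.2, 1.4)
print(f"sigma = {s:.1f} -> {grade(s)}")   # sigma = 6.3 -> world class
```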

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 154
4528 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a big experimental effort. As models become more and more complex, e.g., the so-called 'Homogeneous Anisotropic Hardening' (HAH) model for describing yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g., the plane-stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse-shear experiments are performed to determine the parameters of the macroscopic models. Determining the macroscopic model parameters from virtual experiments is therefore a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. In general, however, the computation time is very high, and computations are therefore restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model, including an implementation of the backstress, is used in a spectral-solver framework to generate virtual experiments for three deep-drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can replace most of the expensive real experiments. Further, the model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can describe anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed for the optimization of metal forming processes. Further, thanks to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
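
As a hypothetical illustration of calibrating a macroscopic model from virtual experiments, the sketch below fits a Voce-type hardening law to a synthetic stress-strain curve standing in for RVE output; the HAH model itself and the crystal-plasticity spectral solver are far beyond a short listing, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def voce(eps, s0, s_sat, eps_c):
    """Voce hardening: flow stress saturates from s0 toward s_sat (MPa)."""
    return s_sat - (s_sat - s0) * np.exp(-eps / eps_c)

# Synthetic "virtual experiment": an RVE-like flow curve with noise (values assumed)
eps = np.linspace(0.0, 0.3, 40)
sigma = voce(eps, 150.0, 420.0, 0.08) + np.random.default_rng(0).normal(0, 2, eps.size)

params, _ = curve_fit(voce, eps, sigma, p0=(100.0, 400.0, 0.1))
print("fitted (s0, s_sat, eps_c):", np.round(params, 3))
```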

Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver

Procedia PDF Downloads 303
4527 Investigation of Topic Modeling-Based Semi-Supervised Interpretable Document Classifier

Authors: Dasom Kim, William Xiu Shun Wong, Yoonjin Hyun, Donghoon Lee, Minji Paek, Sungho Byun, Namgyu Kim

Abstract:

There has been much research on document classification for classifying voluminous documents automatically. Through document classification, we can assign a specific category to each unlabeled document on the basis of various machine learning algorithms. However, providing labeled documents manually requires considerable time and effort. To overcome this limitation, semi-supervised learning, which uses unlabeled as well as labeled documents, has been devised. However, traditional document classifiers, whether supervised or semi-supervised, cannot sufficiently explain the reason for, or the process of, the classification. Thus, in this paper, we propose a methodology to visualize the major topics and class components of each document. We believe that our methodology for visualizing the topics and classes of each document can enhance the reliability and explanatory power of document classifiers.
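
A minimal sketch of the underlying idea, topic modeling that yields an interpretable per-document topic mixture, using scikit-learn's LDA; the corpus and topic count are toy assumptions, and the authors' full visualization pipeline is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market prices rise on bank earnings",
    "team wins the championship after extra time",
    "central bank cuts interest rates amid inflation",
    "injured striker misses final match of season",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Per-document topic mixture: the "explanation" a topic-based classifier can expose
for doc, mix in zip(docs, lda.transform(X)):
    print(f"{mix.round(2)}  <- {doc}")
```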

Keywords: data mining, document classifier, text mining, topic modeling

Procedia PDF Downloads 385
4526 Sustainability of Ecotourism Related Activities in the Town of Yercaud: A Modeling Study

Authors: Manoj Gupta Charan Pushparaj

Abstract:

Tourism-related activities are becoming more popular by the day, and tourism has become an integral part of everyone's life. Ecotourism initiatives have grown enormously in the past decade, and the concept of ecotourism has been shown to bring great benefits in terms of environmental conservation and improvement of local livelihoods. However, the potential of ecotourism to keep improving the livelihood of the local population into the remote future is a topic of active debate. A primary challenge in this regard is the enormous cost of limiting the impacts of tourism-related activities on the environment. Here we employed a systems-modeling approach using computer simulations to determine whether ecotourism activities in the small hill town of Yercaud (Tamil Nadu, India) can be sustained over the years in improving the livelihood of the local population. Increasing damage to the natural environment as a result of tourism-related activities has plagued this pristine hill station. Although ecotourism efforts can help conserve the environment and enrich the local population, questions remain as to whether this can be sustained in the distant future. The vital state variables in the model are the existing tourism foundation (labor, services available to tourists, etc.) in the town of Yercaud and its natural environment (water, flora, and fauna). Another state variable is the textile industry that drives the local economy. Our results help to determine whether environmental conservation efforts in Yercaud are sustainable and also offer suggestions for making them sustainable over the course of several years.

Keywords: ecotourism, simulations, modeling, Yercaud

Procedia PDF Downloads 259
4525 Future Projection of Glacial Lake Outburst Floods Hazard: A Hydrodynamic Study of the Highest Lake in the Dhauliganga Basin, Uttarakhand

Authors: Ashim Sattar, Ajanta Goswami, Anil V. Kulkarni

Abstract:

Glacial lake outburst floods (GLOFs) contribute substantially to mountain hazards in the Himalaya. Over the past decade, high-altitude lakes in the Himalaya have shown notable growth in size and number, the key reason being the rapid retreat of their glacier fronts. Hydrodynamic modeling of a GLOF using the shallow water equations (SWE) helps in understanding its impact on the downstream region. The present study incorporates remote-sensing-based ice-thickness modeling to determine the future extent of the Dhauliganga Lake and to map the overdeepening around the highest lake in the Dhauliganga basin. The maximum future volume of the lake, calculated using area-volume scaling, is used to model a GLOF event. The GLOF hydrograph is routed along the channel using one-dimensional and two-dimensional models to understand the flood-wave propagation until it reaches the first hydropower station, located 72 km downstream of the lake. The present extent of the lake, calculated using SENTINEL-2 images, is 0.13 km². The maximum future extent of the lake, mapped by investigating the glacier bed, has a calculated scaled volume of 3.48 x 10⁶ m³. The GLOF modeling, releasing the future volume of the lake, resulted in a breach hydrograph with a peak flood of 4995 m³/s just downstream of the lake. Hydraulic routing
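
Area-volume scaling relations for glacial lakes take the general power-law form below; the coefficient k and exponent γ are fitted empirically in the literature, and the specific values used for the Dhauliganga Lake are not stated in the abstract:

```latex
V = k\,A^{\gamma},
\qquad
V_{\text{future}} = k\,A_{\text{future}}^{\gamma} \approx 3.48 \times 10^{6}\ \text{m}^{3}
```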

Keywords: GLOF, glacial lake outburst floods, mountain hazard, Central Himalaya, future projection

Procedia PDF Downloads 148
4524 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. With economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise predictions of rail maintenance time and location. The expectation of such a method is to develop models that minimize the human error strongly associated with manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics collected over the years will be used in developing rail-track-degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible, and sometimes these errors make the data unusable in prediction-model development; this is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail-track-degradation prediction models is given before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.

Keywords: ANFIS, MGT, prediction modeling, rail track degradation

Procedia PDF Downloads 308
4523 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid

Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil

Abstract:

Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are crucial phenomena in fluid-flow applications and can be studied and analyzed experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising in a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-tracking numerical algorithm called the MULES algorithm, built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles initially have a spherical shape and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and then validated against available experimental data. The comparison revealed that this algorithm provides results in very good agreement with the 2D numerical data of other CFD codes. The bubble shape and terminal velocity obtained from the 3D numerical simulation also showed very good qualitative and quantitative agreement with the experimental data; the simulated rising bubbles yield a very small percentage error in terminal velocity compared with the experimental data. The obtained results demonstrate the capability of OpenFOAM as a powerful tool for predicting the rising behavior of spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the rise of bubbles in liquids.

Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF

Procedia PDF Downloads 107
4522 Genetics of Atopic Dermatitis: Role of Cytokine Genes Polymorphisms

Authors: Ghaleb Bin Huraib

Abstract:

Atopic dermatitis (AD), also known as atopic eczema, is a chronic inflammatory skin disease characterized by severe itching and recurrent, relapsing eczema-like skin lesions, affecting up to 20% of children and 10% of adults in industrialized countries. AD is a complex multifactorial disease, and its exact etiology and pathogenesis have not been fully elucidated. The aim of this study was to investigate the impact of gene polymorphisms of the T helper cell subtype Th1 and Th2 cytokines interferon-gamma (IFN-γ), interleukin-6 (IL-6), and transforming growth factor (TGF)-β1 on AD susceptibility in a Saudi cohort. One hundred four unrelated patients with AD and 195 healthy controls were genotyped for the IFN-γ (874A/T), IL-6 (174G/C), and TGF-β1 (509C/T) polymorphisms using ARMS-PCR and PCR-RFLP techniques. The frequencies of genotypes AA and AT of IFN-γ (874A/T) differed significantly between patients and controls (P < 0.001): genotype AT was increased while genotype AA was decreased in AD patients compared with controls. AD patients also had a higher frequency of T-containing genotypes (AT+TT) than controls (P = 0.001). The frequencies of alleles T and A were statistically different between patients and controls (P = 0.04). The frequencies of genotype GG and allele G of IL-6 (174G/C) were significantly higher, while genotype GC and allele C were lower, in AD patients than in controls. There was no significant difference in the frequencies of alleles and genotypes of the TGF-β1 (509C/T) polymorphism between the patient and control groups. These results show that susceptibility to AD is influenced by the genotypes of the IFN-γ (874A/T) and IL-6 (174G/C) polymorphisms. It is concluded that the T allele and T-containing genotypes (AT+TT) of IFN-γ (874A/T), and the G allele and GG genotype of IL-6 (174G/C), confer susceptibility to AD in Saudis. On the other hand, the TGF-β1 (509C/T) polymorphism may not be associated with AD risk in our population; however, further studies with large sample sizes are required to confirm these results.
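
Association tests of this kind are typically run on genotype or allele contingency tables; the sketch below shows a Fisher's exact test and odds ratio on an invented 2x2 allele table (the paper's actual counts are not given in the abstract).

```python
from scipy.stats import fisher_exact

# Hypothetical allele counts: rows = T / A alleles, columns = AD patients / controls
table = [[120, 160],   # T allele
         [ 88, 230]]   # A allele
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4g}")
```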

Keywords: atopic dermatitis, polymorphism, interferon, IL-6

Procedia PDF Downloads 58
4521 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes in precipitation and temperature over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were the basis for spatializing the variables (temperature and precipitation) through geostatistical techniques. Cross-validation results were used to select the best geostatistical interpolation technique. Among the methods analyzed, the co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and three further sets of maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas were highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with stronger growth, for 1961/1990-1981/2010. By contrast, precipitation shows a very similar distribution shape, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology allows the assessment of small-scale climate change for each month of the year and could be investigated further in relation to regional atmospheric dynamics.
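
The map comparison described (subtraction between rasters, then descriptive statistics of the differences) is straightforward with NumPy and SciPy; the arrays below are random stand-ins for the interpolated temperature grids, not the study's data.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
t_1961_1990 = 12.0 + rng.normal(0, 1.0, (200, 200))            # stand-in interpolated grid (deg C)
t_1981_2010 = t_1961_1990 + 0.6 + rng.normal(0, 0.2, (200, 200))

diff = t_1981_2010 - t_1961_1990                               # raster subtraction
print(f"mean change : {diff.mean():.2f} deg C")
print(f"skewness    : {skew(diff, axis=None):.2f}")
print(f"kurtosis    : {kurtosis(diff, axis=None):.2f}")
```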

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 110
4520 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular topic in the forecasting field. Many different power-forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy and reliable grid systems. Effective forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants, so forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information are the most important input variables in power-forecasting models. The dataset contains power-consumption measurements gathered between January 2016 and December 2017 at one-hour resolution; the models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep-neural-network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the layer cell count and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared with the literature.
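
A minimal sketch of one configuration of the kind described (an LSTM layer, dropout, the ADAM optimizer, and MAE loss); the window length, unit count, and synthetic series are assumptions, not the study's settings.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, width=24):
    """Turn an hourly series into (samples, timesteps, 1) windows and next-hour targets."""
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X[..., None], y

series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.default_rng(0).normal(size=2000)
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(24, 1)),
    tf.keras.layers.Dropout(0.2),      # dropout was one of the varied hyperparameters
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")       # ADAM + MAE, as in the study
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("train MAE:", float(model.evaluate(X, y, verbose=0)))
```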

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 248
4519 Field Saturation Flow Measurement Using Dynamic Passenger Car Unit under Mixed Traffic Condition

Authors: Ramesh Chandra Majhi

Abstract:

Saturation flow is a very important input variable for the design of signalized intersections. Saturation flow measurement is well established for homogeneous traffic. However, saturation flow measurement and modeling are challenging tasks in heterogeneous traffic, characterized by multiple vehicle types and non-lane-based movement. The present study proposes a field procedure for saturation flow measurement and examines the effect of typical mixed-traffic behavior at the signal as far as non-lane-based traffic movement is concerned. Data collected during peak and off-peak hours from five intersections with varying approach widths are used to validate the saturation flow model. The insights from the study can be used for modeling saturation flow and delay at signalized intersections under heterogeneous traffic conditions.
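
In headway-based field measurement, saturation flow is usually obtained from the mean saturation headway of discharging vehicles and converted to passenger car units. In the notation below (symbols assumed, as the abstract gives none), h̄ is the mean saturation headway in seconds per vehicle and p̄ the average PCU factor of the mixed stream:

```latex
S_{\mathrm{veh}} = \frac{3600}{\bar{h}} \ \text{(vehicles per hour of green)},
\qquad
S_{\mathrm{PCU}} = \frac{3600}{\bar{h}}\,\bar{p}
```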

Keywords: optimization, passenger car unit, saturation flow, signalized intersection

Procedia PDF Downloads 313
4518 General Mathematical Framework for Analysis of Cattle Farm System

Authors: Krzysztof Pomorski

Abstract:

In this work we present a universal mathematical framework for modeling a cattle farm system, within which various hypotheses can be set up and validated against experimental data. The presented work is preliminary, but it is expected to be a valid tool for future, deeper analysis that can result in a new class of prediction methods allowing early detection of cow diseases as well as assessment of cow performance. The presented work should therefore be meaningful both for agricultural models and for machine learning. It also opens the possibility of incorporating a certain class of biological models necessary for modeling cow behavior and farm performance, which might include the impact of the environment on the farm system. Particular attention is paid to the model of coupled oscillators, the basic building hypothesis from which models showing periodic or quasiperiodic behavior can be constructed.
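
The coupled-oscillator hypothesis mentioned above can be prototyped with a standard Kuramoto-type system; the sketch below integrates one with SciPy, with the oscillator count, natural frequencies, and coupling strength chosen arbitrarily (the paper does not specify its equations).

```python
import numpy as np
from scipy.integrate import solve_ivp

N, K = 5, 1.2                                    # oscillators (e.g., per-cow rhythms), coupling
omega = np.array([1.0, 1.1, 0.9, 1.05, 0.95])    # natural frequencies (arbitrary)

def kuramoto(t, theta):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    return omega + (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)

theta0 = np.random.default_rng(0).uniform(0, 2 * np.pi, N)
sol = solve_ivp(kuramoto, (0, 50), theta0, dense_output=True)

r = np.abs(np.exp(1j * sol.y).mean(axis=0))      # order parameter: 1 = full synchrony
print(f"final synchrony r = {r[-1]:.2f}")        # periodic/quasiperiodic regimes vary with K
```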

Keywords: coupled ordinary differential equations, cattle farm system, numerical methods, stochastic differential equations

Procedia PDF Downloads 132
4517 Hazardous Vegetation Detection in Right-Of-Way Power Transmission Lines in Brazil Using Unmanned Aerial Vehicle and Light Detection and Ranging

Authors: Mauricio George Miguel Jardini, Jose Antonio Jardini

Abstract:

Transmission utilities operate kilometers of circuits, many with particular vegetation-growth characteristics. To control these rights-of-way, maintenance teams perform ground and aerial inspections, and the identification method is subjective (indirect). During a ground inspection, when an irregularity is identified, for example high vegetation threatening contact with the conductor cable, pruning or suppression is performed immediately. In an aerial inspection, the suppression team is mobilized to the identified point. This work investigates the use of 3D modeling of a transmission-line segment using RGB (red, green, and blue) images and Light Detection and Ranging (LiDAR) sensor data, with both sensors coupled to an unmanned aerial vehicle. The goal is the accurate and timely detection of vegetation along the right-of-way that can cause shutdowns.

Keywords: 3D modeling, LiDAR, right-of-way, transmission lines, vegetation

Procedia PDF Downloads 119
4516 Modeling and Monitoring of Agricultural Influences on Harmful Algal Blooms in Western Lake Erie

Authors: Xiaofang Wei

Abstract:

Harmful algal blooms (HABs) are a recurrent, disturbing occurrence in Lake Erie that has caused significant negative impacts on water quality and the aquatic ecosystem around the Great Lakes area of the United States. Targeting the recent HAB events in western Lake Erie, this paper utilizes satellite imagery and hydrological modeling to monitor HAB cyanobacteria blooms and analyze the impacts of agricultural activities in the Maumee watershed, the biggest watershed of Lake Erie and agriculture-dominated. A SWAT (Soil & Water Assessment Tool) model for the Maumee watershed was established with DEM, land-use data, the crop data layer, soil data, and weather data, and calibrated with Maumee River gauge-station data for streamflow and nutrients. Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) was applied to remove atmospheric attenuation, and cyanobacteria indices were calculated from Landsat OLI imagery to study the intensity of HAB events in 2015, 2017, and 2019. Agricultural practice and nutrient management within the Maumee watershed were studied and correlated with the HAB cyanobacteria indices to study the relationship between HAB intensity and nutrient loadings. This study demonstrates that hydrological models and satellite imagery are effective tools for HAB monitoring and modeling in rivers and lakes.

Keywords: harmful algal bloom, Landsat OLI imagery, SWAT, HAB cyanobacteria

Procedia PDF Downloads 162
4515 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze and study the reliability and application range of the linear model that is often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a subpixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segments in the template are performed. Moreover, the experiment is repeated 11 times while the measuring distance is varied. Finally, the linearity of the camera is obtained by fitting the 11 groups of data. The camera-model measurement results show that the relative error does not exceed 1% and the repeated measurement error is no more than 0.1 mm in magnitude. Meanwhile, it is found that the model exhibits some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for establishing the linear model of a camera and have potential value for practical engineering measurement.
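
The linear (pinhole) camera model under analysis relates pixel coordinates (u, v) to world coordinates (X, Y, Z) through the intrinsic matrix K and the extrinsic pose [R | t]; aberration terms are omitted, exactly as in the study (the symbols are standard notation, assumed here since the abstract gives none):

```latex
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\underbrace{\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}}_{K}
\,[\,R \mid t\,]
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
```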

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 266
4514 The Mirage of Progress? A Longitudinal Study of Japanese Students’ L2 Oral Grammar

Authors: Robert Long, Hiroaki Watanabe

Abstract:

This longitudinal study examines the grammatical errors in Japanese university students’ dialogues with a native speaker over an academic year. The L2 interactions of 15 Japanese speakers were taken from the JUSFC2018 corpus (April/May 2018) and the JUSFC2019 corpus (January/February). The corpora were based on a self-introduction monologue and a three-question dialogue; however, this study examines the grammatical accuracy found in the dialogues. The research questions focused on whether there was a significant difference in grammatical accuracy between the first interview session in 2018 and the second one the following year, specifically regarding errors in clauses per 100 words, global and local errors, and specific errors related to parts of speech. The investigation also examined which forms showed the least improvement or had worsened. Descriptive statistics showed that error-free clauses per 100 words decreased slightly, while clauses with errors per 100 words increased by one clause. Global errors showed a significant decline, while local errors increased from 97 to 158. For errors related to parts of speech, a t-test confirmed a significant difference between the two speech corpora, with a higher error frequency in the 2019 corpus. These data highlight the difficulty students have in editing their own output.

Keywords: clause analysis, global vs. local errors, grammatical accuracy, L2 output, longitudinal study

Procedia PDF Downloads 115
4513 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy on which all sectors of the economy rely for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity grow alongside it, owing to this heavy dependence. However, since the supply of electricity has not increased to match the demand, there have been frequent power outages and load shedding, affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed, considering the three classes of consumers: residential, industrial, and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and the Vector Error Correction Model (VECM). Findings show that GDP growth, urban population growth, electricity exports, and industry value added to GDP were cointegrated. The results also showed unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. In the short run, however, causality was found among all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply.
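
A minimal statsmodels sketch of the kind of VECM fit described, run on synthetic cointegrated series standing in for the 1980-2010 consumption, GDP, and export data; the lag order and cointegration rank below are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 120
trend = np.cumsum(rng.normal(0.2, 1.0, n))        # shared stochastic trend -> cointegration
data = pd.DataFrame({
    "consumption": trend + rng.normal(0, 0.5, n),
    "gdp_growth":  trend + rng.normal(0, 0.7, n),
    "exports":     0.5 * trend + rng.normal(0, 0.6, n),
})

model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()
print(res.summary())                              # error-correction and short-run terms
```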

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 421
4512 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks

Authors: Waleed Basuliman

Abstract:

Anthropometric dimensions are among the important factors to consider when designing human-machine systems. In this study, the estimation of anthropometric dimensions has been improved by developing an Artificial Neural Network (ANN) model able to predict the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated in the measurement of 20 anthropometric dimensions. These anthropometric measurements are considered important for designing work and life applications in Saudi Arabia. The data were collected over eight months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were set to be the measured variables (the model's outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. These errors are smaller, and thus better, than those reported in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a regression model; the ANN model showed a higher coefficient of determination (R²) between the predicted and actual dimensions than the regression model.

Keywords: artificial neural network, anthropometric measurements, back-propagation

Procedia PDF Downloads 472
4511 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing

Authors: H. Samadiyeh, R. Khajavi

Abstract:

A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic-wave-propagation modeling, in which the simulation is carried out in the modal space. The method employs the eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, while considerable simplicity of the parallel algorithm is also achieved: there is no need for any domain-decomposition procedure or inter-core data exchange. More importantly, it is possible to skip the processing of less significant modes, which enables one to adjust the procedure to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The method's efficiency is demonstrated by several numerical examples.
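
The core idea (decompose the semi-discrete wave operator into modes that evolve independently, so they can be distributed across cores, and low-significance modes can be skipped) can be shown on a 1D toy problem; the grid size, wave speed, and mode cutoff below are assumptions, and this is a sketch of the principle rather than the authors' scheme.

```python
import numpy as np

n, c, dx = 200, 1.0, 1.0
# Second-difference Laplacian for u_tt = c^2 u_xx with fixed ends
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (c / dx) ** 2
lam, V = np.linalg.eigh(A)                         # modal frequencies: omega_k = sqrt(-lam_k)

u0 = np.exp(-0.01 * (np.arange(n) - n / 2) ** 2)   # initial displacement, zero velocity
q0 = V.T @ u0                                      # project onto modes

keep = np.abs(q0) > 1e-6 * np.abs(q0).max()        # skip less-significant modes
omega = np.sqrt(-lam[keep])

t = 40.0
q_t = q0[keep] * np.cos(omega * t)                 # each mode evolves independently -> parallel
u_t = V[:, keep] @ q_t                             # reconstruct the field at time t
print(f"modes kept: {keep.sum()}/{n}, field L2 norm: {np.sum(u_t ** 2):.3f}")
```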

Keywords: finite difference method, graphics processing unit (GPU), message passing interface (MPI), modal analysis, wave propagation

Procedia PDF Downloads 278