Search results for: error minimization
1541 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and the grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
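To make the dispersion-validation metrics named above concrete, here is a minimal sketch in Python/NumPy (an illustrative assumption by the editor, not the authors' code), using the commonly used definitions of fractional bias, geometric mean bias, and normalized mean square error:

```python
import numpy as np

def dispersion_metrics(observed, predicted):
    """Common air-quality model validation metrics (FB, MG, NMSE)."""
    co, cp = np.asarray(observed, float), np.asarray(predicted, float)
    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())   # fractional bias
    mg = np.exp(np.log(co).mean() - np.log(cp).mean())             # geometric mean bias
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())       # normalized mean square error
    return {"FB": fb, "MG": mg, "NMSE": nmse}

# Made-up concentrations; perfect agreement would give FB = 0, MG = 1, NMSE = 0
print(dispersion_metrics([1.2, 0.8, 2.5, 1.9], [1.0, 0.9, 2.2, 2.1]))
```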
Procedia PDF Downloads 137
1540 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as lack of adequate upper air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating member models are selected at each grid point and for each forecast step in the training period. Multi-model superensemble training using similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been combined with the above-mentioned approaches. The comparison of these schemes with the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
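As an illustration of the training step described above (least-squares weights from a multiple regression of observations on member-model anomalies), a hedged Python/NumPy sketch with made-up data, not the study's TIGGE pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_models = 120, 5                          # training days, member models
obs = rng.gamma(2.0, 4.0, n_train)                  # synthetic observed rainfall
forecasts = obs[:, None] + rng.normal(0, 3, (n_train, n_models))  # biased member forecasts

# Training phase: least-squares weights on model anomalies
f_mean, o_mean = forecasts.mean(axis=0), obs.mean()
weights, *_ = np.linalg.lstsq(forecasts - f_mean, obs - o_mean, rcond=None)

# Forecast phase: superensemble = observed climatology + weighted model anomalies
new_forecasts = obs[-1] + rng.normal(0, 3, n_models)    # one new day from each model
superensemble = o_mean + (new_forecasts - f_mean) @ weights
print("weights:", np.round(weights, 3), "superensemble forecast:", round(superensemble, 2))
```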
Procedia PDF Downloads 139
1539 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1 day, t+2 days, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate forecast horizon of water level in the Nokoué Lake basin, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
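The evaluation metrics used above are standard; a minimal Python/NumPy sketch (an editorial illustration with made-up water levels, not the study's data) of how they can be computed for one forecast horizon:

```python
import numpy as np

def forecast_scores(obs, sim):
    """RMSE, MAE, Nash-Sutcliffe Efficiency and R^2 for a single forecast horizon."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe Efficiency
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"RMSE": rmse, "MAE": mae, "NSE": nse, "R2": r2}

# Made-up daily water levels (m) vs. a t+3 day forecast
print(forecast_scores([2.10, 2.15, 2.22, 2.30, 2.41], [2.12, 2.17, 2.20, 2.33, 2.38]))
```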
Procedia PDF Downloads 66
1538 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice the constant-velocity motion model is often difficult to satisfy, which may lead to a large deviation between the obtained initial pose and the real value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in the rotational angular velocity and the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that our proposed method yields a more accurate initial pose estimate, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46% on the two test sequences, respectively.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
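A hedged sketch (Python with SciPy; the extrapolation rule below is an editorial illustration of the constant-acceleration idea, not the authors' implementation) of predicting the next camera pose by holding the change of the per-frame rotation-vector and translation increments constant:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(poses):
    """poses: [(R, t) for frames k-2, k-1, k]. Returns a predicted pose for frame k+1."""
    (R0, t0), (R1, t1), (R2, t2) = poses
    # Per-frame increments, with rotation and translation handled separately
    dR1, dt1 = R1 * R0.inv(), t1 - t0
    dR2, dt2 = R2 * R1.inv(), t2 - t1
    # Constant-acceleration assumption: the change of the increment stays the same
    dR_pred = R.from_rotvec(2 * dR2.as_rotvec() - dR1.as_rotvec())
    dt_pred = 2 * dt2 - dt1
    return dR_pred * R2, t2 + dt_pred

# Made-up example: camera rotating and translating with a mild acceleration
poses = [(R.from_rotvec([0, 0, 0.00]), np.array([0.00, 0.0, 0.0])),
         (R.from_rotvec([0, 0, 0.02]), np.array([0.10, 0.0, 0.0])),
         (R.from_rotvec([0, 0, 0.05]), np.array([0.22, 0.0, 0.0]))]
R_pred, t_pred = predict_next_pose(poses)
print(R_pred.as_rotvec(), t_pred)
```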
Procedia PDF Downloads 96
1537 The Impact of Introspective Models on Software Engineering
Authors: Rajneekant Bachan, Dhanush Vijay
Abstract:
The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.
Keywords: software engineering, architectures, introspective models, operating systems
Procedia PDF Downloads 539
1536 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier
Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi
Abstract:
The Downside Risk (DSR) model for portfolio optimisation allows one to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimization in this model deals with a positive definite matrix that is endogenous with respect to the portfolio weights. This aspect makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier presents an appearance which is not very smooth. In order to overcome that, Athayde (2003) proposed a mean kernel estimation of the returns, so as to create a smoother portfolio frontier. This technique provides an effect similar to the case in which we had continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns. Then, we give a new version of the former algorithm (of Athayde (2001, 2003)). We eventually analyse the properties of this improved portfolio frontier and apply the new method to real examples.
Keywords: Downside Risk, Kernel Method, Median, Nonparametric Estimation, Semivariance
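A small hedged illustration (Python/NumPy with synthetic returns, not the paper's estimator) of the downside-risk quantity underlying the model, i.e. the semivariance of returns below a benchmark, contrasting a mean-based and a median-based location estimate of the returns:

```python
import numpy as np

def downside_semivariance(returns, benchmark=0.0):
    """Mean of squared shortfalls below the benchmark (downside risk / semivariance)."""
    r = np.asarray(returns, float)
    shortfall = np.minimum(r - benchmark, 0.0)
    return np.mean(shortfall ** 2)

rng = np.random.default_rng(1)
rets = rng.standard_t(df=3, size=500) * 0.01          # fat-tailed daily returns (made-up)

print("mean estimate of return  :", rets.mean())
print("median estimate of return:", np.median(rets))  # robust location used in place of the mean
print("downside semivariance    :", downside_semivariance(rets, benchmark=0.0))
```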
Procedia PDF Downloads 493
1535 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh
Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin
Abstract:
In the first quarter of the twenty-first century, the most substantial global concern is environmental contamination, and it has become a priority for both the national and international community. Keeping in mind this crucial fact, this study applied different statistical and econometric methods to identify whether the gross national income of the country has a significant impact on electricity production from non-renewable sources and on different air pollutants like carbon dioxide, nitrous oxide, and methane emissions. Moreover, the primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing different statistical properties of the variables, this study came to the conclusion that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emission in Bangladesh in the short run as well as the long run. This conclusion is based on the findings of ordinary least squares estimations, ARDL bound tests, short-run causality analysis, the Error Correction Model, and other pre-diagnostic and post-diagnostic tests that have been employed in the structural model. Moreover, this study argues that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and that emissions will increase up to an optimal peak. The compositional effect will then force the emissions to decrease, and environmental quality will be restored in the long run.
Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model
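To make the EKC test concrete, a hedged sketch (illustrative Python/NumPy with synthetic data, not the study's ARDL estimation): the hypothesis holds when the fitted quadratic in log income has a positive linear and a negative squared coefficient, which gives an interior turning point.

```python
import numpy as np

rng = np.random.default_rng(2)
log_gni = np.linspace(6.0, 9.0, 40)                       # log gross national income (synthetic)
log_co2 = -4 + 1.8 * log_gni - 0.10 * log_gni**2 + rng.normal(0, 0.05, 40)  # inverted U + noise

# Fit log(CO2) = b0 + b1*log(GNI) + b2*log(GNI)^2
b2, b1, b0 = np.polyfit(log_gni, log_co2, 2)
ekc_holds = b1 > 0 and b2 < 0

print(f"b1 = {b1:.3f}, b2 = {b2:.3f}, EKC holds: {ekc_holds}")
if ekc_holds:
    # Emissions peak where the derivative of the quadratic in log income is zero
    print(f"turning point at GNI ~ {np.exp(-b1 / (2 * b2)):,.0f}")
```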
Procedia PDF Downloads 151
1534 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate
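A baseband sketch (illustrative Python/NumPy, not the experimental ultrasonic setup) of coherent BPSK detection and bit-error-rate estimation over an additive-noise channel, the quantity the modulation techniques above are compared on:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bits, ebn0_db = 100_000, 6
bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                                # BPSK mapping: 0 -> -1, 1 -> +1

noise_std = np.sqrt(1 / (2 * 10 ** (ebn0_db / 10)))   # AWGN level for the chosen Eb/N0
received = symbols + rng.normal(0, noise_std, n_bits)

# Coherent demodulation with a known carrier phase reduces, at baseband, to a sign decision
detected = (received > 0).astype(int)
ber = np.mean(detected != bits)
print(f"BER at {ebn0_db} dB Eb/N0: {ber:.4f}")
```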
Procedia PDF Downloads 265
1533 A Deep Learning Based Method for Faster 3D Structural Topology Optimization
Authors: Arya Prakash Padhi, Anupam Chakrabarti, Rajib Chowdhury
Abstract:
Topology or layout optimization often gives better-performing, economic structures and is very helpful in the conceptual design phase. Traditionally it is done with finite element-based optimization schemes which, although they give good results, are very time-consuming, especially for 3D structures. Among other alternatives, machine learning, especially deep learning-based methods, has very good potential for resolving this computational issue. Here a 3D convolutional neural network (3D-CNN) based variational auto encoder (VAE) is trained using a dataset generated from the commercially available topology optimization code ABAQUS Tosca, using the solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in latent space is then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome at a 64x64x64 resolution. The network consists of 3D volumetric CNN layers with rectified linear unit (ReLU) activations in between and a sigmoid activation at the end. The proposed network is seen to provide almost optimal results with significantly reduced computational time, as there is no iteration involved.
Keywords: 3D generative adversarial network, deep learning, structural topology optimization, variational auto encoder
Procedia PDF Downloads 177
1532 Optimum Design of Alkali Activated Slag Concretes for Low Chloride Ion Permeability and Water Absorption Capacity
Authors: Müzeyyen Balçikanli, Erdoğan Özbay, Hakan Tacettin Türker, Okan Karahan, Cengiz Duran Atiş
Abstract:
In this research, the effects of curing time (TC), curing temperature (CT), sodium concentration (SC) and silicate modulus (SM) on the compressive strength, chloride ion permeability, and water absorption capacity of alkali activated slag (AAS) concretes were investigated. To maximize the compressive strength while minimizing the chloride ion permeability and water absorption capacity of AAS concretes, the best possible combination of TC, CT, SC and SM was determined. An experimental program was conducted using the central composite design method. The alkali solution-slag ratio was kept constant at 0.53 in all mixtures. The effects of the independent parameters were characterized and analyzed by using statistically significant quadratic regression models on the measured properties (dependent parameters). The proposed regression models are valid for AAS concretes with SC from 0.1% to 7.5%, SM from 0.4 to 3.2, CT from 20 °C to 94 °C and TC from 1.2 hours to 25 hours. The results of the tests and analysis indicate that the most effective parameter for the compressive strength, chloride ion permeability and water absorption capacity is the sodium concentration.
Keywords: alkali activation, slag, rapid chloride permeability, water absorption capacity
Procedia PDF Downloads 313
1531 Minimization of Seepage in Sandy Soil Using Different Grouting Types
Authors: Eng. M. Ahmed, A. Ibrahim, M. Ashour
Abstract:
One of the major concerns facing dams is the repair of their structures to prevent seepage under them. In previous years, many existing dams have been treated by grouting, but with varying degrees of success. One of the major reasons for this erratic performance is the unsuitable selection of the grouting materials used to reduce the seepage. Grouting is an effective way to improve the engineering properties of the soil and to reduce its permeability, thereby reducing the seepage. The purpose of this paper is to focus on the efficiency of currently available grouting materials and techniques from a construction, environmental and economic point of view. Seepage reduction is usually accomplished by either chemical grouting or cementitious grouting using ultrafine cement. In addition, the study provides a comparison between grouting materials according to their degree of permeability reduction and cost. The application of seepage reduction is based on permeation grouting using grout curtain installation. The computer program SEEP/W is employed to model a dam resting on sandy soil, using a grout curtain to reduce the seepage quantity and hydraulic gradient with different grouting materials. This study presents a relationship that takes into account the permeability of the soil, the grout curtain spacing and a new performance parameter that can be used to predict the best selection of grouting materials for seepage reduction.
Keywords: seepage, sandy soil, grouting, permeability
Procedia PDF Downloads 369
1530 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning
Authors: Hong Zhang
Abstract:
The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help our Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We have developed PE training materials based on the data collected in a previous study. The training material included the special error types of the MT output and the error types that our Chinese students studying Spanish could not detect in last year's experiment. This year we performed a pilot study in order to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students who study the Spanish language. We used screen recording to capture the sessions and made a note of every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups, and participants first translated it by themselves (human translation), then used Google Translate to translate the text and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected the errors faster than Group B; Group A did especially better on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From the results of this study, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we could focus on the students' struggles during their Spanish studies and complete the PE training materials for teaching Chinese students of Spanish with machine translation.
Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning
Procedia PDF Downloads 145
1529 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
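A compact, hedged illustration (Python; classical Bayesian search logic, simplified relative to the paper's history-dependent model) of a greedy rule that repeatedly inspects the cell with the highest search effectiveness and updates the target-location probabilities after each unsuccessful look:

```python
import numpy as np

def greedy_search(prior, p_detect, cost, budget):
    """prior: target-location probabilities; p_detect = 1 - false-negative rate per cell."""
    p = np.array(prior, float)
    plan, spent = [], 0.0
    while spent + cost.min() <= budget:
        effectiveness = p * p_detect / cost          # expected detection per unit cost
        k = int(np.argmax(effectiveness))
        if spent + cost[k] > budget:
            break
        plan.append(k)
        spent += cost[k]
        # Bayesian update after an unsuccessful look in cell k (overlook probability 1 - p_detect)
        p[k] *= (1 - p_detect[k])
        p /= p.sum()
    return plan

prior = np.array([0.1, 0.4, 0.3, 0.2])       # made-up example data
p_detect = np.array([0.9, 0.6, 0.8, 0.7])
cost = np.array([1.0, 2.0, 1.5, 1.0])
print(greedy_search(prior, p_detect, cost, budget=6.0))
```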
Procedia PDF Downloads 174
1528 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee
Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado
Abstract:
Compliant mechanisms are designed to deform in a controlled manner in response to external forces, utilizing the flexibility of their components to store potential elastic energy during deformation and gradually releasing it upon returning to the original form. This article explores the design of a knee orthosis intended to assist users during stand-up motion. The orthosis makes use of a compliant mechanism to balance the user's weight, thereby minimizing the strain on leg muscles during stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism seen as a spring. Geometric properties are obtained through the numerical modeling of the spring once the desired stiffness and safety factor values have been attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and the simulation (239.8 MPa), with a 4.16% error. Both analyses yield consistent safety factors: 1.02 for the mathematical approach and 1.1 for the simulation, a 7.84% difference. The spring's stiffness, calculated at 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts in advancing the understanding and treatment of knee osteoarthritis.
Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses
Procedia PDF Downloads 39
1527 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718 for the first, second and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
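A hedged, minimal sketch (Python/NumPy with synthetic features; illustrative of the sigmoid, likelihood-based training, and dynamic-thresholding ingredients described above, not the PNRI implementation) of logistic regression trained by gradient ascent on the log-likelihood, followed by selecting the classification threshold that maximizes the F1 score:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=400):
    """Gradient ascent on the log-likelihood of a logistic regression model."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (y - sigmoid(X @ w)) / len(y)
        w += lr * grad
    return w

def best_threshold(scores, y):
    """Dynamic thresholding: pick the cut-off with the highest F1 score."""
    y = y.astype(bool)
    best_t, best_f1 = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):
        pred = scores >= t
        tp, fp, fn = np.sum(pred & y), np.sum(pred & ~y), np.sum(~pred & y)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

rng = np.random.default_rng(4)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 6))])   # six features plus a bias column
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 200) > 0).astype(int)
w = train_logistic(X, y)
t = best_threshold(sigmoid(X @ w), y)
print("weights:", np.round(w, 3), "threshold:", t)
```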
Procedia PDF Downloads 70
1526 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer
Authors: Aprajeeta Jha, Punyadarshini P. Tripathy
Abstract:
Drying technologies for various food processing operations share an inevitable linkage with energy, cost and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and of addressing the climate change scenario. However, the performance and reliability of solar dryers depend hugely on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology, supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector, can potentially overcome the disadvantages of solar dryers. For the development of such robust hybrid dryers, and to ensure the quality and shelf-life of paddy grains, the optimization of process parameters becomes extremely critical. Investigation of the moisture distribution profile within the grains becomes necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool in providing a better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model for predicting the moisture profile. Furthermore, optimization of the process parameters (power level, air velocity and moisture content) was done using response surface methodology in the Design-Expert software. A 3D finite element model (FEM) for predicting moisture migration in a single kernel for every time step has been developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with the experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield and drying time of 42 °C, 62% and 86 min, respectively, having a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of product. PV-integrated hybrid solar dryers can be employed as a potential and cutting-edge drying technology alternative for sustainable energy and food security.
Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer
Procedia PDF Downloads 152
1525 Open Forging of Cylindrical Blanks Subjected to Lateral Instability
Authors: A. H. Elkholy, D. M. Almutairi
Abstract:
The successful and efficient execution of a forging process is dependent upon the correct analysis of the loading and metal flow of blanks. This paper investigates the Upper Bound Technique (UBT) and its application to the analysis of the open forging process when a possibility of blank bulging exists. The UBT is one of the energy rate minimization methods for the solution of metal forming processes based on the upper bound theorem. In this regard, the kinematically admissible velocity field is obtained by minimizing the total forging energy rate. A computer program was developed in this research to implement the UBT. The significant advantages of this method are its speed of execution, while maintaining a fairly high degree of accuracy, and its wide prediction capability. The information from this analysis is useful for the design of forging processes and dies. Results for the prediction of forging loads and stresses, metal flow and surface profiles, with the assured benefits in terms of press selection and blank preform design, are outlined in some detail. The obtained predictions are ready for comparison with both laboratory and industrial results.
Keywords: forging, upper bound technique, metal forming, forging energy, forging die/platen
Procedia PDF Downloads 295
1524 Influence of Bio-Based Admixture on Compressive Strength of Concrete for Columns
Authors: K. Raza, S. Gul, M. Ali
Abstract:
Concrete is a fundamental building material, extensively utilized by the construction industry. Problems related to the strength of concrete are an immense issue for the sustainability of concrete structures. Concrete mostly loses its strength due to the cracks produced in it by shrinkage or the hydration process. This study aims to enhance the strength and service life of concrete structures by incorporating a bio-based admixture in the concrete. When a bio-based admixture (BBA) is injected into concrete, it self-heals the cracks by producing calcium carbonate. Minimization of cracks will compact the microstructure of the concrete, due to which the strength will increase. For this study, Bacillus subtilis will be used as the bio-based admixture (BBA) in concrete. Calcium lactate up to 1.5% will be used as the food source for the Bacillus subtilis in concrete. Two formulations, containing 0 and 5% of Bacillus subtilis by weight of cement, will be used for the casting of concrete specimens. The direct mixing method will be adopted for the incorporation of the bio-based admixture in concrete. A compressive strength test will be carried out after 28 days of curing. Scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD) will be performed for the examination of the micro-structure of the concrete. Results will be drawn by comparing the test results of the 0 and 5% formulations. It will be recommended to use the bio-based admixture (BBA) in concrete for columns because of the satisfactory increase in the compressive strength of concrete.
Keywords: bio-based admixture, Bacillus subtilis, calcium lactate, compressive strength
Procedia PDF Downloads 228
1523 The Power House of Mind: Determination of Action
Authors: Sheetla Prasad
Abstract:
The focus of this article is to determine the mechanism of the mind through a geometrical analysis of the human face. A research paradigm was designed for the study of the spatial dynamics of the face, and it was found that different shapes of the face have their own function in determining the action of the mind. The functional ratio (FR) of the face determines the behavioural operation of human beings. It is not based on a formulistic approach to prediction; rather, scientific dogmatism and mathematical analysis are the root of the prediction of behaviour. For the analysis, formulae were developed and standardized. It was found that the human psyche is designed in three forms: the manipulated, the manifested and the real psyche. The functional output of the psyche is determined by the degree of energy flow in the psyche and the energy reserved for the future. The face is the recipient and transmitter of energy, but distribution and control are made possible by the mind. The mind directs behaviour. The FR indicates that the face is a power house of energy, and, according to its geometrical domain, the force of behaviours is designed and actions become possible in the nature of the individual. The impact of this study is the promotion of human capital for the objective of job fitness and the minimization of criminalization in society.
Keywords: functional ratio, manipulated psyche, manifested psyche, real psyche
Procedia PDF Downloads 455
1522 A Genetic Algorithm Approach to Solve a Weaving Job Scheduling Problem, Aiming Tardiness Minimization
Authors: Carolina Silva, João Nuno Oliveira, Rui Sousa, João Paulo Silva
Abstract:
This study uses genetic algorithms to solve a job scheduling problem in a weaving factory. The underlying problem is an NP-hard problem concerning unrelated parallel machines with sequence-dependent setup times. This research uses real data regarding a weaving industry located in the North of Portugal, with a capacity of 96 looms and a production, on average, of 440,000 meters of fabric per month. Moreover, this study includes a high level of complexity, since most of the real production constraints are applied and several real data instances are tested. Topics such as data analysis and algorithm performance are addressed and tested, in order to offer a solution that can generate reliable results regarding due dates. All the approaches will be tested in the operational environment and the KPIs monitored, to understand the solution's impact on the production, with a particular focus on the total number of weeks of late deliveries to clients. Thus, the main goal of this research is to develop a solution that allows the production of automatically optimized production plans, aiming at tardiness minimization.
Keywords: genetic algorithms, textile industry, job scheduling, optimization
Procedia PDF Downloads 159
1521 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid
Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm which combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO) methods. The proposed hybrid algorithm solves a multi-objective optimization problem that seeks both the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. The data of 345 kV transmission lines and the hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulated results indicate that the number of sensors could be reduced using the optimal placement method proposed by the study and that an acceptable error in conductor temperature could be achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid
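A hedged, generic sketch (Python/NumPy; not the paper's POD-coupled formulation) of the particle swarm update rule applied to a toy placement objective: position a fixed number of sensors along a line so that the worst distance from any monitored span to its nearest sensor, a crude proxy for temperature-estimation error, is minimized.

```python
import numpy as np

rng = np.random.default_rng(5)
spans = np.linspace(0, 100, 21)                  # span locations along the line (toy data)

def objective(x):
    """Worst distance from any span to its nearest sensor (proxy for estimation error)."""
    return np.max(np.min(np.abs(spans[:, None] - x[None, :]), axis=1))

# Standard PSO: velocity = inertia + cognitive pull + social pull
n_particles, n_sensors, iters = 30, 3, 200
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0, 100, (n_particles, n_sensors))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("sensor positions:", np.sort(np.round(gbest, 1)), "worst-case distance:", round(objective(gbest), 1))
```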
Procedia PDF Downloads 434
1520 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing the majority class examples is generally not recommended. Existing methods available for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class imbalance and within-class imbalance simultaneously eliminates the biases of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using a Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one such classifier where the total error is minimized, and removing the between-class imbalance and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
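A hedged sketch (Python with scikit-learn; a simplified stand-in for the proposed method) of the within-class balancing idea: find sub-clusters in the minority class with model-based clustering, then oversample the smaller sub-clusters with jittered copies so that every sub-cluster carries comparable weight.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_components=3, random_state=0):
    """Equalize minority sub-clusters by adding jittered copies of under-represented ones."""
    rng = np.random.default_rng(random_state)
    labels = GaussianMixture(n_components=n_components,
                             random_state=random_state).fit_predict(X_min)
    sizes = np.bincount(labels, minlength=n_components)
    target = sizes.max()
    synthetic = []
    for c in range(n_components):
        members = X_min[labels == c]
        deficit = target - len(members)
        if deficit > 0 and len(members):
            picks = members[rng.integers(0, len(members), deficit)]
            jitter = rng.normal(0, 0.05 * members.std(axis=0) + 1e-9, picks.shape)
            synthetic.append(picks + jitter)
    return np.vstack([X_min] + synthetic) if synthetic else X_min

rng = np.random.default_rng(6)
X_min = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),   # large minority sub-cluster
                   rng.normal([3, 3], 0.3, (8, 2))])   # small minority sub-cluster
print(X_min.shape, "->", oversample_minority(X_min).shape)   # original -> oversampled
```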
Procedia PDF Downloads 418
1519 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to verify that there were numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and previous misunderstandings: one of the most important examples of this tendency is related to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, which is only due to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in the negative meaning, is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. Therefore, there is a considerable revaluation of ideology's role in the maintenance of the status quo and the consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISA). In addition to this, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.
Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 202
1518 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and used to generate alerts for further action.
Keywords: condition based maintenance, equipment data, metrics, alerts
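A minimal, hedged sketch (Python; the payload fields, thresholds, and asset names are invented for illustration) of turning a telemetry stream into a simple health metric and alerts for condition-based maintenance:

```python
from statistics import mean

# Hypothetical telemetry payloads as they might arrive from equipment sensors
stream = [
    {"asset": "pump-07", "temp_c": 71.2, "vibration_mm_s": 2.1, "error_code": None},
    {"asset": "pump-07", "temp_c": 74.8, "vibration_mm_s": 2.4, "error_code": None},
    {"asset": "pump-07", "temp_c": 82.3, "vibration_mm_s": 4.9, "error_code": "E14"},
]

VIBRATION_LIMIT = 4.5   # illustrative alert threshold

avg_temp = mean(p["temp_c"] for p in stream)                        # simple health metric
alerts = [p for p in stream if p["vibration_mm_s"] > VIBRATION_LIMIT or p["error_code"]]

print(f"avg temperature: {avg_temp:.1f} C")
for p in alerts:
    print(f"ALERT {p['asset']}: vibration={p['vibration_mm_s']} error={p['error_code']}")
```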
Procedia PDF Downloads 190
1517 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict homeless populations in the future. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned based on the dataset from New York City, with its accuracy measured by the Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing the MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted count. Finally, the modeling results were collected to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This work is the first to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
Procedia PDF Downloads 122
1516 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streaming. For this purpose, there are several objective algorithms available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error which can result in the loss of video frames on the reception side of a transmission system. In our subjective tests, we have performed tests on videos that contain a single freezing event and on videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to give a comparison of the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
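A hedged sketch (Python/NumPy; an illustration of a no-reference temporal check, not one of the evaluated NR algorithms) that flags frame-freezing events by looking for consecutive frames whose pixel-wise difference falls below a threshold:

```python
import numpy as np

def detect_freezes(frames, diff_threshold=0.5):
    """Return indices of frames that are (nearly) identical to the previous frame."""
    frozen = []
    for i in range(1, len(frames)):
        mad = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
        if mad < diff_threshold:
            frozen.append(i)
    return frozen

rng = np.random.default_rng(7)
video = [rng.integers(0, 256, (72, 128), dtype=np.uint8) for _ in range(10)]  # synthetic frames
video[5] = video[4].copy()   # simulate a freezing event spanning two frames
video[6] = video[4].copy()
print("frozen frame indices:", detect_freezes(video))
```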
Procedia PDF Downloads 603
1515 A Real Time Monitoring System of the Supply Chain Conditions, Products and Means of Transport
Authors: Dimitris E. Kontaxis, George Litainas, Dimitris P. Ptochos
Abstract:
Real-time monitoring of the supply chain conditions and procedures is a critical element for the optimal coordination and safety of the deliveries, as well as for the minimization of the delivery time and cost. Real-time monitoring requires IoT data streams, which are related to the conditions of the products and the means of transport (e.g., location, temperature/humidity conditions, kinematic state, ambient light conditions, etc.). These streams are generated by battery-based IoT tracking devices, equipped with appropriate sensors, and are transmitted to a cloud-based back-end system. Proper handling and processing of the IoT data streams, using predictive and artificial intelligence algorithms, can provide significant and useful results, which can be exploited by the supply chain stakeholders in order to enhance their financial benefits, as well as the efficiency, security, transparency, coordination, and sustainability of the supply chain procedures. The technology, the features, and the characteristics of a complete, proprietary system, including hardware, firmware, and software tools (developed in the context of a co-funded R&D programme), are addressed and presented in this paper.
Keywords: IoT embedded electronics, real-time monitoring, tracking device, sensor platform
Procedia PDF Downloads 180
1514 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, task, resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, like handling of resource incompatibility, updating of task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer device. This time-effective simulation corresponds with the nature of the problem and the requirement of running several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the rate expectation. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case addresses a future maintenance operation in an NPP. The project contains complex and hard constraints, like the Finish-Start precedence relationship (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements for specific states of 'cyclic' resources (they can have multiple possible states, with only one active at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It offers a solution for 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems that was validated by domain experts and is compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
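A hedged, much-simplified sketch (Python; the task data and priority rule are invented for illustration) of a serial greedy scheduler in the spirit described above: at each time step it reassesses which precedence- and resource-feasible task has the highest priority and starts it.

```python
# Each task: duration, resource demand, predecessors, base priority (illustrative data)
tasks = {
    "A": {"dur": 3, "res": 2, "pred": [],         "prio": 5},
    "B": {"dur": 2, "res": 1, "pred": ["A"],      "prio": 3},
    "C": {"dur": 4, "res": 2, "pred": ["A"],      "prio": 4},
    "D": {"dur": 2, "res": 1, "pred": ["B", "C"], "prio": 6},
}
CAPACITY = 3
t, running, done, start_times = 0, {}, set(), {}

while len(done) < len(tasks):
    # Retire tasks that have finished by time t
    for name in [n for n, end in running.items() if end <= t]:
        done.add(name)
        running.pop(name)
    free = CAPACITY - sum(tasks[n]["res"] for n in running)
    # Eligible = all predecessors done, not started, fits in remaining capacity
    eligible = [n for n in tasks
                if n not in done and n not in running
                and all(p in done for p in tasks[n]["pred"])
                and tasks[n]["res"] <= free]
    while eligible:
        # A real dynamic cost function would be re-evaluated here (urgency, incompatibilities, ...)
        best = max(eligible, key=lambda n: tasks[n]["prio"])
        start_times[best] = t
        running[best] = t + tasks[best]["dur"]
        free -= tasks[best]["res"]
        eligible = [n for n in eligible if n != best and tasks[n]["res"] <= free]
    t += 1

print(start_times)   # start time of each task under the greedy schedule
```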
Procedia PDF Downloads 203
1513 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model
Authors: Fu Jia
Abstract:
The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of the model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for the reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.
Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping
Procedia PDF Downloads 269
1512 One vs. Rest and Error Correcting Output Codes Principled Rebalancing Schemes for Solving Imbalanced Multiclass Problems
Authors: Alvaro Callejas-Ramos, Lorena Alvarez-Perez, Alexander Benitez-Buenache, Anibal R. Figueiras-Vidal
Abstract:
This contribution presents a promising formulation which allows one to extend the principled binary rebalancing procedures, also known as neutral re-balancing mechanisms in the sense that they do not alter the likelihood ratio.
Keywords: Bregman divergences, imbalanced multiclass classification, informed re-balancing, invariant likelihood ratio
Procedia PDF Downloads 219