Search results for: root computation
549 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Authors: B. Elshafei, X. Mao
Abstract:
The demand for renewable energy is increasing significantly, and major investments are being directed into the wind power generation industry as a leading source of clean energy. The wind energy sector is heavily dependent on the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity GPR. The outcomes were compared using the root mean square error (RMSE). The results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing signals in wind speed forecasting models.
Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation
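The comparison metric named above is straightforward to compute. Below is a minimal sketch of the RMSE comparison, with made-up observed and predicted wind speeds standing in for the RUNE data:

```python
import numpy as np

# Illustrative observations and predictions (m/s); not the study's data.
observed = np.array([7.2, 8.1, 6.5, 9.3, 7.8, 8.8])
pred_scalar = np.array([6.8, 8.6, 6.0, 9.9, 7.1, 9.4])  # speed-only predictors
pred_vector = np.array([7.0, 8.3, 6.3, 9.5, 7.6, 9.0])  # with u, v components

def rmse(y_true, y_pred):
    """Root mean square error between observations and predictions."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(f"RMSE, speed-only predictors:       {rmse(observed, pred_scalar):.3f} m/s")
print(f"RMSE, vector-component predictors: {rmse(observed, pred_vector):.3f} m/s")
```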
548 Evaluation of Chitin Filled Epoxy Coating for Corrosion Protection of Q235 Steel in Saline Environment
Authors: Innocent O. Arukalam, Emeka E. Oguzie
Abstract:
Interest in the development of eco-friendly anti-corrosion coatings using bio-based renewable materials has been gaining momentum recently. To this effect, chitin biopolymer, which is non-toxic, biodegradable, and inherently possesses anti-microbial properties, was successfully synthesized from snail shells and used as a filler in the preparation of an epoxy coating. The chitin particles were characterized with a contact angle goniometer, a scanning electron microscope (SEM), a Fourier transform infrared (FTIR) spectrophotometer, and an X-ray diffractometer (XRD). The performance of the coatings was evaluated by immersion and electrochemical impedance spectroscopy (EIS) tests. The electronic structure properties of the coating ingredients and the molecular-level interaction of the corrodent with the coated Q235 steel were appraised by quantum chemical computations (QCC) and molecular dynamics (MD) simulation techniques, respectively. The water contact angle (WCA) of the chitin particles was found to be 129.3°, while that of chitin particles modified with amino trimethoxy silane (ATMS) was 149.6°, suggesting the latter is highly hydrophobic. Immersion and EIS analyses revealed that the epoxy coating containing silane-modified chitin exhibited the lowest water absorption and the highest barrier and anti-corrosion performance. The QCC showed that the quantum parameters for the coating containing silane-modified chitin are optimum and therefore correspond to high corrosion protection. The high negative value of the adsorption energies (Eads) for the coating containing silane-modified chitin indicates that the coating molecules interacted and adsorbed strongly on the steel surface. The observed results show that a silane-modified epoxy-chitin coating would perform satisfactorily for the surface protection of metal structures in a saline environment.
Keywords: chitin, EIS, epoxy coating, hydrophobic, molecular dynamics simulation, quantum chemical computation
547 Ghost Frequency Noise Reduction through Displacement Deviation Analysis
Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran
Abstract:
Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors, and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising out of generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise on the end-of-line machine as well as in vehicle-level measurements. With the application of the DDA method along with standard lead profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from a vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.
Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality
546 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and significantly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large data sets and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
545 The Connection between Heroism and Violence in War Narratives from the Aspect of Rituals
Authors: Rita Fofai
Abstract:
The aim of the study is to help peacebuilding by analyzing the symbolic level of fights in war. Despite the suffering involved, war heroism still represents such a noble value in war narratives (especially in literature and films, whether in high or popular culture) that it can make warfare attractive to every age group. The questions of the study revolve around events in which heroism is not a necessary and unselfish act for a greater good, but in which the primary aim is to express strength in order to build self-mythology. Since war is a scene where the mythological level can meet reality, and modern narratives use the elements of rituals and sacral references even in secular contexts, understanding the connection between rites and modern battles grounds this study, and the analysis follows the logic of violent rites. From this aspect, war is not merely a fight for different countries and ideas, but also the fight of mankind with superhuman and natural or supernatural phenomena. In this context, the enemy symbolizes the threat of a world that is unpredictable for mankind, and the fight becomes a ritual combat; therefore, the winner’s symbolic reward is to redefine himself or herself not only in the human environment but in the context of the whole world. The analysis reveals that this kind of violence does not represent real heroism and rarely results in recruitment; on the contrary, it conserves fear and the feeling of weakness, which is the root cause of this kind of act. The result of this study is a way to reshape the attitude toward so-called heroic war violence, which is often a part of war narratives even nowadays. Since stepping out of the war tradition is mainly a cultural question, redefining the connection between society and narratives, which affects mentality and emotions, and giving a clear guide to distinguishing between heroism and useless violence are very important in peacebuilding.
Keywords: war, ritual, heroism, violence, narratives, culture
544 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is becoming more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge, and domain models of urban research are necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, have become abilities that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the emergence of technologies and technological iterations that may affect urban research in the future, the discovery of urban problems, and the implementation of targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamics domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
543 Iterative Method for Lung Tumor Localization in 4D CT
Authors: Sarah K. Hagi, Majdi Alnowaimi
Abstract:
In the last decade, there have been immense advancements in medical imaging modalities. These advancements make it possible to scan the whole lung volume in high-resolution images within a short time. With this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore offer large opportunities to advance all available types of lung cancer treatment and to increase the survival rate. However, lung cancer is still one of the major causes of death, involving around 19% of all cancer patients. Several factors may affect the survival rate. One of the serious effects is the breathing process, which can affect the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor positions across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, the lung tumor is segmented in the first phase of the 4D computed tomography (CT) data using an active contours method. Then, the tumor’s 3D position is localized across all subsequent phases using a 12-degrees-of-freedom affine transformation. Two data sets were used in this study: a computer-simulated 4D CT data set based on the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The error calculation is presented as the root mean square error (RMSE); the average error across the data sets is 0.94 mm ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is introduced. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
Keywords: automated algorithm, computed tomography, lung tumor, tumor localization
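To illustrate the localization step, the sketch below applies a 12-degrees-of-freedom affine transformation (a 3x3 linear part plus a translation) to tumor coordinates and reports the RMSE used as the error metric. The matrix, translation, and coordinates are hypothetical stand-ins, not values from the study:

```python
import numpy as np

# A 12-DOF affine transform in 3D: 3x3 linear part (9 DOF) + translation (3 DOF).
A = np.array([[1.01, 0.02, 0.00],
              [0.00, 0.98, 0.03],
              [0.01, 0.00, 1.02]])
t = np.array([0.5, -1.2, 2.0])  # translation in mm

def transform_points(points, A, t):
    """Map Nx3 points (mm) from the reference phase into a target phase."""
    return points @ A.T + t

# Tumor voxel coordinates (mm) segmented in the first respiratory phase.
tumor_phase0 = np.array([[112.0, 85.0, 60.0],
                         [113.0, 85.5, 60.5],
                         [112.5, 84.5, 61.0]])
tumor_phase1 = transform_points(tumor_phase0, A, t)

# RMSE against (synthetic) ground-truth positions, mirroring the error report.
rng = np.random.default_rng(0)
ground_truth = tumor_phase1 + rng.normal(0.0, 0.3, tumor_phase1.shape)
rmse = np.sqrt(np.mean(np.sum((tumor_phase1 - ground_truth) ** 2, axis=1)))
print(f"RMSE: {rmse:.2f} mm")
```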
542 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple
Authors: Hasan Basaran, Emre Unal
Abstract:
Formation flying of satellites is a mission that involves relative position keeping of different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of ΔV (velocity increment) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law, which is based on mean orbital elements. The feedback linearization and sliding mode control approaches have identical mathematical models that include second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is done for 4 different initial errors and evaluated in terms of velocity increment, root mean square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error. The continuous control laws, however, consumed higher velocity increments and produced lower tracking errors. Finally, the inversely proportional relationship between tracking error and velocity increment was established.
Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode
541 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution
Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras
Abstract:
Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA’s Terra satellite, for the 10-year period 2000-2010, were used to test 7 different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, radial basis functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, radial basis functions
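As a rough illustration of the best-performing method, the sketch below interpolates point AOD values with a radial basis function and validates it with leave-one-out RMSE. The coordinates, values, and kernel choice are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical station coordinates (lon, lat) and seasonal-mean AOD values.
coords = np.array([[100.2, 5.4], [100.5, 5.1], [101.0, 5.6],
                   [100.8, 4.9], [100.3, 5.9]])
aod = np.array([0.42, 0.38, 0.51, 0.45, 0.36])

# Fit an RBF surface and evaluate it on a regular grid over the study area.
rbf = RBFInterpolator(coords, aod, kernel="thin_plate_spline")
lon, lat = np.meshgrid(np.linspace(100.2, 101.0, 50),
                       np.linspace(4.9, 5.9, 50))
aod_surface = rbf(np.column_stack([lon.ravel(), lat.ravel()])).reshape(lon.shape)

# Leave-one-out RMSE, mirroring the paper's validation metric.
errors = []
for i in range(len(aod)):
    mask = np.arange(len(aod)) != i
    model = RBFInterpolator(coords[mask], aod[mask], kernel="thin_plate_spline")
    errors.append(model(coords[i:i + 1])[0] - aod[i])
print(f"Leave-one-out RMSE: {np.sqrt(np.mean(np.square(errors))):.3f}")
```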
540 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. A number of NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model. It compares the models’ ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to largely predict no rain, which explains its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. Both models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models’ treatment of deep convection over the tropics.
Keywords: comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events
539 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface
Authors: Suman Dutta, Manish Agrawal, C. S. Jog
Abstract:
Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and in wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid, and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method. In the mortar method, the constraints are enforced in a weak sense using the Lagrange multiplier method. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method
538 Indicator-Based Approach for Assessing Socio Economic Vulnerability of Dairy Farmers to Impacts of Climate Variability and Change in India
Authors: Aparna Radhakrishnan, Jancy Gupta, R. Dileepkumar
Abstract:
This paper aims at assessing the socio-economic vulnerability (SEV) of dairy farmers to climate variability and change (CVC) in 3 states of the Western Ghats region in India. For this purpose, a composite SEV index has been developed on the basis of functional relationships amongst sensitivity, exposure, and adaptive capacity, using 30 indicators related to dairy farming, following the principles of the Intergovernmental Panel on Climate Change and the Füssel framework for the nomenclature of vulnerable situations. Household-level data were collected through Participatory Rural Appraisal and personal interviews of 540 dairy farmers in nine taluks, three each from a district selected from Kerala, Karnataka, and Maharashtra, complemented by thirty years of gridded weather data. The data were normalized and then combined into three indices for sensitivity, exposure, and adaptive capacity, which were then averaged with weights derived using principal component analysis to obtain the overall SEV index. Results indicated that the taluks of the Western Ghats are vulnerable to CVC. The dairy farmers of Pulpally taluk were the most vulnerable, with an SEV score of +1.24 and 42.66% of farmers in the high-level vulnerability category. Even though the taluks are geographically close, there is wide variation in the SEV components. Policies for incentivizing the ‘climate risk adaptation’ costs for small and marginal farmers, livelihood infrastructure for mitigating risks, and the promotion of grass-roots innovations are necessary to sustain dairy farming in the region.
Keywords: climate change, dairy, vulnerability, livelihoods, adaptation strategies
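A minimal sketch of the index-construction step described above, assuming min-max normalization of the indicators and first-principal-component loadings as weights; the indicator matrix is synthetic, not the survey data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
indicators = rng.random((540, 30))  # 540 households x 30 indicators (synthetic)

# Min-max normalization of each indicator to [0, 1].
norm = (indicators - indicators.min(axis=0)) / (
    indicators.max(axis=0) - indicators.min(axis=0))

# Use the first principal component's loadings as indicator weights.
pca = PCA(n_components=1).fit(norm)
weights = np.abs(pca.components_[0])
weights /= weights.sum()

# Composite SEV score per household: weighted average of normalized indicators.
sev = norm @ weights
print(f"Mean SEV: {sev.mean():.3f}, max SEV: {sev.max():.3f}")
```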
537 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite
Abstract:
Wind speed forecasting is an important issue for planning wind power generation facilities. Accuracy in wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks (ANNs) is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, at time intervals of ten minutes. This dataset was used to develop an iterative algorithm to create 1,110 ANNs with different configurations: from one to three hidden layers, with each hidden layer containing from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error is 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination
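The sketch below mirrors that configuration search: one to three hidden layers with 1 to 10 neurons each, giving 10 + 100 + 1,000 = 1,110 candidate networks. It is a sketch under assumptions: the data are synthetic, and scikit-learn's L-BFGS solver stands in for the Levenberg-Marquardt training used in the study (scikit-learn does not provide Levenberg-Marquardt):

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for temperature, pressure, wind direction, wind speed.
rng = np.random.default_rng(1)
X = rng.random((1000, 4))
y = X @ np.array([0.2, -0.1, 0.05, 0.8]) + rng.normal(0, 0.05, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

best = (None, -np.inf)
for depth in (1, 2, 3):                 # exhaustive search; can take a while
    for layers in itertools.product(range(1, 11), repeat=depth):
        model = MLPRegressor(hidden_layer_sizes=layers, solver="lbfgs",
                             max_iter=2000, random_state=1).fit(X_tr, y_tr)
        r2 = r2_score(y_te, model.predict(X_te))
        if r2 > best[1]:
            best = (layers, r2)

layers, r2 = best
print(f"Best architecture: {layers}, r² = {r2:.4f}")
```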
536 Numerical Simulation of Aeroelastic Influence Exerted by Kinematic and Geometrical Parameters on Oscillations' Frequencies and Phase Shift Angles in a Simulated Compressor of Gas Transmittal Unit
Authors: Liliia N. Butymova, Vladimir Y. Modorsky, Nikolai A. Shevelev
Abstract:
Prediction of vibration processes in gas transmittal units (GTUs) is an urgent problem. Despite numerous scientific publications on the problem of vibrations in general, there are not enough works concerning FSI modeling of the interaction processes between several deformable blades in a gas-dynamic flow. Since it is very difficult to solve the problem in full scope, with all factors considered, a unidirectional dynamically coupled 1FSI model is suggested for use at the first stage; from symmetry considerations, it includes two blades and may be considered the first stage of solving the more general bidirectional problem. The multi-processor ANSYS CFX code was chosen as the numerical computation tool. The problem was solved on the PNRPU high-capacity computer complex. At the first stage of the study, the blades were assumed to oscillate at the same frequency, although the oscillation phases could be either equal or different. The non-stationary distribution of gas-dynamic forces over the blade surfaces is calculated in the course of the simulation experiment. Oscillations in the 'gas-structure' dynamic system are assumed to increase if the resultant of these gas-dynamic forces is in phase with the blade oscillation, with zero phase shift (φ = 0). Provided these oscillations occur with a phase shift, the oscillations might increase or decrease, depending on the phase shift value. The most important results are as follows: the angle of phase shift in inter-blade oscillation and the gas-dynamic force depend on the flow velocity, the specific inter-blade gap, and the shaft rotation speed; a phase shift in the oscillation of adjacent blades does not always correspond to the phase shift of the gas-dynamic forces affecting the blades. Thus, it was discovered that asynchronous oscillation of blades might cause either attenuation or intensification of oscillation. It was revealed that the clocking effect might depend not only on the mutual circumferential displacement of blade rows and the gap between the blades, but also on the nature of the blades' dynamic deformation.
Keywords: aeroelasticity, ANSYS CFX, oscillation, phase shift, clocking effect, vibrations
535 Modeling of Strong Motion Generation Areas of the 2011 Tohoku, Japan Earthquake Using Modified Semi-Empirical Technique Incorporating Frequency Dependent Radiation Pattern Model
Authors: Sandeep, A. Joshi, Kamal, Piu Dhibar, Parveen Kumar
Abstract:
In the present work, strong ground motion has been simulated using a modified semi-empirical technique (MSET) with a frequency-dependent radiation pattern model. Joshi et al. (2014) modified the semi-empirical technique to incorporate the modeling of strong motion generation areas (SMGAs). A frequency-dependent radiation pattern model is applied to simulate high-frequency ground motion more precisely. The identified SMGAs (Kurahashi and Irikura 2012) of the 2011 Tohoku earthquake (Mw 9.0) were modeled using this modified technique. Records are simulated for both frequency-dependent and constant radiation pattern functions. The simulated records for both cases are compared with observed records in terms of peak ground acceleration and pseudo-acceleration response spectra at different stations. Comparison of simulated and observed records in terms of the root mean square error suggests that the method is capable of simulating records that match observations over a wide frequency range for this earthquake and bear a realistic appearance in terms of shape and strong motion parameters. The results confirm the efficacy and suitability of the rupture model defined by five SMGAs for the developed modified technique.
Keywords: strong ground motion, semi-empirical, strong motion generation area, frequency dependent radiation pattern, 2011 Tohoku Earthquake
534 A Neural Network for the Prediction of Contraction after Burn Injuries
Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen
Abstract:
A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, such as a reduction in the range of motion of joints. If this is the case in a healing burn wound, it is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part of dermal wound healing. These equations are solved with the numerical finite element method (FEM). In this method, many calculations are required on each of the chosen elements. In general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, not only does it take longer before a prediction is available, but the computation also becomes more expensive. It is therefore important to investigate alternative possibilities to generate the same results based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and there is freedom in the choice of inputs and outputs. Because a neural network requires extensive training and a data set, it is ideal that the one-dimensional FEM code generates output quickly. The results of this feed-forward-type neural network are very promising. Not only can the network give faster predictions, but it also achieves a performance of over 99%. It reports on the relative surface area of the wound/scar, the total strain energy density, and the evolution of the densities of the chemicals and mechanics. It is, therefore, interesting to investigate the applicability of a neural network to the two- and three-dimensional morphoelastic models for contraction after burn injuries.
Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound
533 Exchange Rate, Market Size and Human Capital Nexus Foreign Direct Investment: A Bound Testing Approach for Pakistan
Authors: Naveed Iqbal Chaudhry, Mian Saqib Mehmood, Asif Mehmood
Abstract:
This study investigates the motivators of foreign direct investment (FDI), providing a useful tool and noteworthy results in the case of Pakistan. The study considers exchange rate, market size, and human capital as the motivators for attracting FDI. In this regard, annual time series data have been collected for the period 1985–2010, and the Augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests are utilized to determine the stationarity of the variables. A bounds testing approach to co-integration was applied because the variables included in the model are I(1), i.e., stationary at the first difference. The empirical findings of this study confirm the long-run relationship among the variables. Market size and human capital have a strong positive and significant impact, in both the short and long run, on attracting FDI, but the exchange rate shows a negative impact in this regard. The significant negative coefficient of the ECM indicates that the system converges towards equilibrium. The CUSUM and CUSUMSQ test plots are within the critical value bounds, which indicates the stability of the estimated parameters. This model can thus be used by Pakistan in policy and decision making. For achieving higher economic growth and economies of scale, the country should concentrate on the ingredients of this study so that it can attract more FDI compared to other countries.
Keywords: ARDL, CUSUM and CUSUMSQ tests, ECM, exchange rate, FDI, human capital, market size, Pakistan
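As a sketch of the stationarity-testing step, the snippet below runs statsmodels' ADF test on a synthetic random-walk series standing in for the annual data; a unit root at the level combined with stationarity at the first difference is exactly the I(1) pattern described above:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Random walk with 26 observations, mirroring the 1985-2010 annual sample.
rng = np.random.default_rng(2)
series = np.cumsum(rng.normal(0, 1, 26))

stat, pvalue, *_ = adfuller(series, autolag="AIC")
print(f"Level:      ADF stat = {stat:.3f}, p = {pvalue:.3f}")  # unit root expected

diff = np.diff(series)
stat, pvalue, *_ = adfuller(diff, autolag="AIC")
print(f"First diff: ADF stat = {stat:.3f}, p = {pvalue:.3f}")  # stationary -> I(1)
```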
532 Variability for Nodulation and Yield Traits in Biofertilizer Treated and Untreated Pea (Pisum sativum L.) Varieties
Authors: Areej Javaid, Nishat Fatima, Mehwish Naseer
Abstract:
Biofertilizers are used extensively in agriculture to increase crop productivity, while Pakistan spends a huge amount on the purchase of synthetic fertilizers every year. The use of natural compounds to harness crop productivity is a major area of interest nowadays, as they are safe for human health and the environment. Legumes have the intrinsic quality of enriching the nutrient status of the soil through the nitrogen-fixing bacteria in their root nodules. This research determined the effect of biofertilizer on the nodulation attributes and yield of the pea plant. Seeds of pea varieties were treated with a slurry of biofertilizer prepared in a 10% sugar solution just before sowing. The impact of biofertilizer on different parameters of growth, yield, and nodulation was observed. Analysis of variance showed that plant height, days to flowering, number of nodes, days to first pod, and root length exhibited significant genetic variation. All the yield parameters, including the number of pods per plant, the number of seeds per pod, and seed fresh and dry weight, showed significant results under treatment. Among the nodulation parameters, nodule number responded positively to biofertilizer treatment. Genotype 2001-40 showed better performance, followed by 2001-20 and LINA-PAK, in all the parameters, and 2001-40 and 2001-20 performed well in the nodulation and yield parameters. Consequently, seed treatment with biofertilizer before sowing is recommended to obtain a higher crop yield.
Keywords: biological nitrogen fixation, correlation analysis, quantitative inheritance, varietal responses
531 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for the numerical simulation. Polynomial and neural-network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. The objective functions are categorized by considering measured data with and without instrument uncertainty, and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided the data sets of measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared for the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS
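A minimal particle swarm optimization sketch of the inverse-analysis loop: minimize the squared mismatch between measured and predicted displacements. The quadratic forward model below is a made-up stand-in for the Plaxis FEM evaluation, and the bounds and PSO constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
true_params = np.array([40.0, 0.3])           # e.g., stiffness, Poisson ratio
measured = np.array([1.0 / true_params[0], true_params[1] ** 2])

def objective(p):
    """Squared mismatch between 'measured' and model-predicted displacements."""
    predicted = np.array([1.0 / p[0], p[1] ** 2])
    return np.sum((predicted - measured) ** 2)

n, dims, iters = 30, 2, 100
lo, hi = np.array([10.0, 0.1]), np.array([100.0, 0.45])
x = rng.uniform(lo, hi, (n, dims))            # particle positions
v = np.zeros((n, dims))                       # particle velocities
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
for _ in range(iters):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(f"Identified parameters: {gbest}, objective: {objective(gbest):.2e}")
```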
530 Spatial Rank-Based High-Dimensional Monitoring through Random Projection
Authors: Chen Zhang, Nan Chen
Abstract:
High-dimensional process monitoring is becoming increasingly important in many application domains, where usually the process distribution is unknown and much more complicated than the normal distribution, and the between-stream correlation cannot be neglected. However, since the process dimension is generally much bigger than the reference sample size, most traditional nonparametric multivariate control charts fail in high-dimensional cases due to the curse of dimensionality. Furthermore, when the process goes out of control, the influenced variables are quite sparse compared with the whole dimension, which increases the detection difficulty. Targeting these issues, this paper proposes a new nonparametric monitoring scheme for high-dimensional processes. This scheme first projects the high-dimensional process into several subprocesses using random projections for dimension reduction. Then, for every subprocess, whose dimension is much smaller than the reference sample size, a local nonparametric control chart is constructed based on the spatial rank test to detect changes in that subprocess. Finally, the results of all the local charts are fused together for the decision. Furthermore, after an out-of-control (OC) alarm is triggered, a diagnostic framework using the square-root LASSO is proposed. Numerical studies demonstrate that the chart has satisfactory detection power for sparse OC changes and robust performance for non-normally distributed data. The diagnostic framework is also effective in identifying the truly changed variables. Finally, a real-data example is presented to demonstrate the application of the proposed method.
Keywords: random projection, high-dimensional process control, spatial rank, sequential change detection
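The dimension-reduction step can be sketched with scikit-learn's Gaussian random projections: a high-dimensional stream is split into several low-dimensional subprocesses, each of which would then feed a local spatial-rank chart. The dimensions and the injected sparse shift below are illustrative assumptions:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(4)
p, n, k, d = 500, 200, 10, 5   # dimension, samples, subprocesses, subdimension

X = rng.normal(0, 1, (n, p))   # synthetic in-control observations
X[100:, :3] += 2.0             # sparse mean shift in 3 variables after t = 100

# Project the p-dimensional stream into k independent d-dimensional subprocesses.
subprocesses = []
for i in range(k):
    proj = GaussianRandomProjection(n_components=d, random_state=i)
    subprocesses.append(proj.fit_transform(X))

# Each entry is an n x d series; a spatial-rank chart would be run on each,
# and the k local statistics fused for the final monitoring decision.
print([s.shape for s in subprocesses])
```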
529 Exchange Rate Fluctuations and Economic Performance of Manufacturing Sector: Evidence from Nigeria
Authors: Ifeoma Patricia Osamor, Ayotunde Qudus Saka, Godwin Omoregbee, Hikmat Oreoluwalomo Omolaja
Abstract:
The persistent fall in the value of Nigeria's currency compared to other foreign currencies, constant fluctuations in the exchange rate, and an increase in the price of goods and services necessitated this examination of the effects of exchange rate fluctuations on the economic performance of the manufacturing sector in Nigeria. An ex-post facto research design was adopted. Manufacturing gross domestic product (MGDP) was used as a proxy for performance; the Naira/Dollar exchange rate (NDE), the Naira/Pound exchange rate (NPE), and the foreign exchange supply (FES) were used for exchange rate fluctuations; and the inflation rate (INF) was a control variable. Data were collected from the CBN Statistical Bulletin (2020) and the World Development Indicators of the World Bank, and the collected data were analysed using descriptive analysis, unit root tests, the bounds cointegration test, and ARDL. The findings showed that changes in the Naira/Dollar exchange rate (NDE) and the Naira/Pound Sterling exchange rate have a negative but significant impact on the economic performance of the manufacturing sector, while the foreign exchange supply has an insignificant positive effect on it. The study concludes that exchange rate fluctuations negatively impact the performance of the manufacturing sector in Nigeria and, therefore, recommends that the government should encourage export diversification through agriculture, agro-investment, and agro-allied industries, which would boost exports in order to improve the value of the Naira and thereby stabilize the exchange rate.
Keywords: exchange rate, economic performance, gross domestic product, inflation rate, foreign exchange supply
528 Comparison of Microleakage of Composite Restorations Using Fifth and Seventh Generation of Bonding Agents
Authors: Karina Nabilla, Dedi Sumantri, Nurul T. Rizal, Siti H. Yavitha
Abstract:
Background: Composite resin is the most frequently used material for restoring teeth, but failure cases leading to microleakage are still seen. Microleakage may be attributed to various factors, one of them being the bonding agent. Various generations of bonding agents have been introduced to overcome microleakage. The aim of this study was to evaluate the microleakage of composite restorations using fifth- and seventh-generation bonding agents. Methods: Class I cavities (3 x 2 x 2 mm) were prepared on the occlusal surfaces of 32 human upper premolars. The teeth were classified into two groups according to the type of bonding agent used (n = 16). Group I: fifth-generation bonding agent, Adper Single Bond 2. Group II: seventh-generation bonding agent, Single Bond Universal. All cavities were restored with Filtek Z250 XT composite resin and stored in sterile distilled water at 37°C for 24 h. The root apices were sealed with sticky wax, and all the surfaces, except for 2 mm from the margins, were coated with nail varnish. The teeth were immersed in a 1% methylene blue dye solution for 24 h, then rinsed in running water, blot-dried, and sectioned longitudinally through the center of the restorations from the buccal to the palatal surface. The sections were blindly assessed for microleakage by dye penetration using a stereomicroscope. Dye penetration along the margin was measured in µm, then calculated into a percentage and classified using a scoring system from 1 to 3. Data were collected and statistically analyzed by the chi-square test. Result: There was no significant difference (p > 0.05) between the two groups. Conclusion: The fifth-generation bonding agent revealed less leakage compared to the seventh generation, even though statistically there was no significant difference.
Keywords: composite restoration, fifth generation of bonding agent, microleakage, seventh generation of bonding agent
527 Natural Antioxidant Changes in Fresh and Dried Spices and Vegetables
Authors: Liga Priecina, Daina Karklina
Abstract:
Antioxidants have become among the most analyzed substances of the last decades. Antioxidants act as inactivators of free radicals. Spices and vegetables are among the major antioxidant sources. The most common antioxidants in vegetables and spices are vitamins C and E, phenolic compounds, and carotenoids. It is therefore important to get some view of the antioxidant changes in spices and vegetables during processing. In this article, nine fresh and dried spices and vegetables grown in Latvia in 2013 were analyzed: celery (Apium graveolens), parsley (Petroselinum crispum), dill (Anethum graveolens), leek (Allium ampeloprasum L.), garlic (Allium sativum L.), onion (Allium cepa), celery root (Apium graveolens var. rapaceum), pumpkin (Cucurbita maxima), and carrot (Daucus carota). Total carotenoids and phenolic compounds and their antiradical scavenging activity were determined for all samples. After the drying process, the carotenoid content decreases significantly in all analyzed samples except one: the carotenoid content increases in parsley. The phenolic composition differed depending on whether the sample was fresh or dried. The total phenolic, flavonoid, and phenolic acid contents increase in dried spices. Flavan-3-ol content is not detected in fresh spice samples. For dried vegetables, the phenolic acid content decreases significantly, but the flavan-3-ol content increases. Higher antiradical scavenging activity was observed in samples with higher flavonoid and phenolic acid content.
Keywords: antiradical scavenging activity, carotenoids, phenolic compounds, spices, vegetables
526 Poverty and Environmental Degeneration in Central City of Ibadan, Nigeria
Authors: Funmilayo Lanrewaju Amao, Amos Olusegun Amao, Odetoye Adeola Sunday, Joseph Joshua Olu
Abstract:
There is a high magnitude of housing inadequacy in urban centers in Nigeria, manifested in both quantitative and qualitative terms. Severe overcrowding and an insanitary physical environment characterize housing in the urban centers. The culminating effect of this is the growth of slum areas. This paper takes a critical look at, inter alia, the history and anatomy, general characteristics, present condition, root causes, official responses and reactions, possible solutions, and advocacy housing in the central city slums of Ibadan. It also examines slum development and the consequent deviant behaviors in the inner-city neighborhoods of Ibadan, the capital city of Oyo State, Nigeria. Residing there are many underemployed and unemployed individuals; these are miscreants who are generally socially frustrated. The activities of this group of people are a cause for concern. Deleterious and anti-social behaviors such as prostitution and house burglary are commonplace in the neighborhoods. The paper examines building conditions in the neighborhoods and their nexus with the deviant behavior of the inhabitants. The paper affirms that there is a monumental deficiency in housing quality, while the design and the arrangement of the buildings into spatial units significantly influence the behavior of the residents. The paper suggests a two-pronged approach to dealing with the situation: urban renewal and slum upgrading programmes on the one hand, and an improvement in the socio-economic circumstances of the inhabitants, especially an increase in employment opportunities, on the other.
Keywords: slum, behavior, housing, poverty, environmental degeneration
525 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas
Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards
Abstract:
Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data comprise ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Keywords: airborne laser scanning, digital terrain models, filtering, forested areas
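A one-dimensional sketch of the idea, assuming a simple scheme rather than the paper's exact weight function: take block minima (likely ground returns), fit a smoothing spline through them, and classify points near the fit as ground. The profile, block size, smoothing factor, and threshold are all illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(5)
x = np.linspace(0.0, 100.0, 300)                 # along-track distance (m)
ground = 50.0 + 5.0 * np.sin(x / 15.0)           # smooth terrain elevation
z = ground + rng.normal(0.0, 0.2, x.size)        # ground returns + noise
canopy = rng.random(x.size) < 0.3                # ~30% vegetation hits
z[canopy] += rng.uniform(5.0, 20.0, canopy.sum())

# Block minima are likely ground; fit a smoothing spline through them.
block = 10
mins = np.array([np.argmin(z[i:i + block]) + i
                 for i in range(0, x.size, block)])
spline = UnivariateSpline(x[mins], z[mins], s=len(mins) * 0.05)

is_ground = np.abs(z - spline(x)) < 1.0          # metres from the fitted surface
rmse = np.sqrt(np.mean((spline(x) - ground) ** 2))
print(f"Ground points kept: {is_ground.sum()}/{x.size}, DTM RMSE: {rmse:.2f} m")
```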
524 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data
Authors: M. Yilmaz, I. Yilmaz, M. Uysal
Abstract:
The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface in which positions are mathematically defined. In this paper, the main components of gravity field modeling, the free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of the standard deviation (SD) and the root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The least SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the least SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
Keywords: free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity
523 Application of a Lighting Design Method Using Mean Room Surface Exitance
Authors: Antonello Durante, James Duff, Kevin Kelly
Abstract:
The visual needs of people in modern work-based buildings are changing. The self-illuminated screens of computers, televisions, tablets, and smartphones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good-quality lit environments. Lighting standards of today continue to be set based upon a 100-year-old approach that, at its core, considers the task illuminance of the utmost importance, with this task typically being located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing, horizontal-illuminance-led metrics. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for application to general lighting practice. Since the start of this research, a number of key findings have been produced that center on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) demonstrate how MRSE can be measured using HDR image technology, ii) illustrate how MRSE can be calculated using scripting and an open-source lighting computation engine, iii) describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)
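A minimal sketch of an MRSE calculation, assuming the commonly cited area-weighted definition (the exitance of each room surface, i.e., reflectance times surface illuminance, averaged over the total surface area); the room geometry and photometric values below are made up:

```python
# Each surface: (name, area in m^2, reflectance, surface illuminance in lx).
surfaces = [
    ("ceiling", 20.0, 0.80, 150.0),
    ("floor",   20.0, 0.20, 300.0),
    ("walls",   54.0, 0.50, 180.0),
]

# Exitance of a surface = reflectance * illuminance; MRSE = area-weighted mean.
weighted_exitance = sum(area * rho * e for _, area, rho, e in surfaces)
total_area = sum(area for _, area, _, _ in surfaces)
mrse = weighted_exitance / total_area
print(f"MRSE = {mrse:.1f} lm/m^2")
```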
522 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable future river flow information is essential for the planning and management of any river system. For a data-scarce river system having only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecasting Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and a Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic checking of the model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information Criterion (AIC) and Hannan-Quinn Criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. Therefore, this model can be used to generate future river flow information for water resources development and management in the Waterval River system. SARIMA models can also be used for forecasting other similar univariate time series with seasonal characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
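As a sketch of fitting the selected specification outside GRETL, the snippet below estimates a SARIMA(3,0,2)x(3,1,3)12 model with statsmodels on a synthetic monthly series standing in for the 1960-2016 Waterval flows:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly flow series with an annual cycle (57 years of data).
rng = np.random.default_rng(6)
n = 12 * 57
t = np.arange(n)
flow = 20 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, n)
series = pd.Series(flow, index=pd.date_range("1960-01", periods=n, freq="MS"))

# Fit the paper's selected order; seasonal period is 12 months.
model = SARIMAX(series, order=(3, 0, 2), seasonal_order=(3, 1, 3, 12))
result = model.fit(disp=False)
print(f"AIC: {result.aic:.1f}, HQC: {result.hqic:.1f}")

forecast = result.forecast(steps=24)   # two years of monthly flow forecasts
print(forecast.head())
```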
521 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System
Authors: Iwan Cony Setiadi, Aulia M. T. Nasution
Abstract:
The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores, by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is decomposed into two parts: a light source compartment based on LEDs with 11 different wavelengths, and a monochromatic 8-bit CCD camera with a C-mount objective lens. Software based on a MATLAB GUI to control the system was also developed. Our system provides 11 monoband images and is coupled with software reconstructing hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural colors of the X-Rite ColorChecker Passport. The learning procedure involves acquisition of the reference spectra by a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain a spectrum for each pixel, comparison should be done between the theoretical values from the spectrophotometer and the reconstructed spectrum. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, the set of 8 color patches reconstructed by our MSI system was compared with the one recorded by the spectrophotometer. The average GFC was 0.9990 (standard deviation = 0.0010), and the average RMSE was 0.2167 (standard deviation = 0.064).
Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network
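A sketch of the two reconstruction-quality metrics named above: GFC is the normalized inner product between the measured and reconstructed spectra (1.0 = perfect fit). The spectra here are synthetic stand-ins sampled at the paper's 380-880 nm, 5 nm grid:

```python
import numpy as np

wavelengths = np.arange(380, 881, 5)
measured = np.exp(-((wavelengths - 550) / 80.0) ** 2)   # reference spectrum
reconstructed = measured + np.random.default_rng(7).normal(
    0, 0.01, wavelengths.size)                          # noisy reconstruction

def gfc(r_meas, r_rec):
    """Goodness of Fit Coefficient between two spectra."""
    return np.abs(r_meas @ r_rec) / (np.linalg.norm(r_meas)
                                     * np.linalg.norm(r_rec))

def rmse(r_meas, r_rec):
    """Root mean squared error between two spectra."""
    return np.sqrt(np.mean((r_meas - r_rec) ** 2))

print(f"GFC:  {gfc(measured, reconstructed):.4f}")
print(f"RMSE: {rmse(measured, reconstructed):.4f}")
```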
520 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured by reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, some image fusion algorithms based upon a multi-scale transform (MST) and a region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observe several challenges with the popular image fusion methods: while the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make the algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity.
Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
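Although the paper's implementation is in MATLAB, the MST fusion idea can be sketched in Python with a 2-D wavelet transform: average the approximation coefficients and keep the maximum-absolute detail coefficients. This omits the region-based selection and consistency verification described above, and the random arrays are stand-ins for registered VI/IR images:

```python
import numpy as np
import pywt

rng = np.random.default_rng(8)
visible = rng.random((256, 256))   # stand-in for a registered visible image
infrared = rng.random((256, 256))  # stand-in for a registered IR image

# Multi-level 2-D wavelet decomposition of both source images.
levels = 3
c_vi = pywt.wavedec2(visible, "db2", level=levels)
c_ir = pywt.wavedec2(infrared, "db2", level=levels)

# Fusion rule: average the approximations, take max-abs detail coefficients.
fused = [(c_vi[0] + c_ir[0]) / 2.0]
for d_vi, d_ir in zip(c_vi[1:], c_ir[1:]):      # (cH, cV, cD) per level
    fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                       for a, b in zip(d_vi, d_ir)))

fused_image = pywt.waverec2(fused, "db2")
print(fused_image.shape)
```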