Search results for: uncertainty and error visualisation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2859

2079 Overview of Risk Management in Electricity Markets Using Financial Derivatives

Authors: Aparna Viswanath

Abstract:

Electricity spot prices are highly volatile under optimal generation capacity scenarios due to factors such as the non-storability of electricity, peak demand at certain periods, generator outages, fuel uncertainty for renewable energy generators, and the huge investments and time needed for generation capacity expansion. As a result, market participants are exposed to price and volume risk, which has led to the development of risk management practices. This paper provides an overview of risk management practices by market participants in electricity markets using financial derivatives.

Keywords: financial derivatives, forward, futures, options, risk management

Procedia PDF Downloads 479
2078 Fast Terminal Sliding Mode Controller For Quadrotor UAV

Authors: Vahid Tabrizi, Reza GHasemi, Ahmadreza Vali

Abstract:

This paper presents a robust nonlinear control law for a quadrotor UAV based on fast terminal sliding mode control. The fast terminal sliding mode idea is used to introduce a nonlinear sliding variable that guarantees finite-time convergence in the sliding phase. Then, in the reaching phase, a continuous approximation is used to remove chattering and produce a smooth control signal. Simulation results show that the proposed algorithm is robust against parameter uncertainty and has better performance than conventional sliding mode control for a quadrotor UAV.
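
As a rough illustration of the two ideas in the abstract, the Python sketch below builds a fast terminal sliding variable and replaces the discontinuous switching term with a tanh approximation to suppress chattering; the gains, exponents and the single error channel are illustrative assumptions, not the controller reported in the paper.

```python
import numpy as np

def fts_sliding_variable(e, e_dot, alpha=2.0, beta=1.0, p=5, q=3):
    """Fast terminal sliding variable s = e_dot + alpha*e + beta*|e|^(q/p)*sign(e).

    With 0 < q/p < 1 this form drives the tracking error e to zero in finite
    time once the sliding phase is reached (illustrative gains and exponents).
    """
    return e_dot + alpha * e + beta * np.abs(e) ** (q / p) * np.sign(e)

def smoothed_switching(s, k=4.0, epsilon=0.05):
    """Continuous approximation of the discontinuous term -k*sign(s).

    Replacing sign(s) by tanh(s/epsilon) removes chattering and yields a
    smooth control signal in the reaching phase.
    """
    return -k * np.tanh(s / epsilon)

# Example: one error channel of a quadrotor at a single time step
e, e_dot = 0.4, -0.1               # position error and its derivative [m, m/s]
s = fts_sliding_variable(e, e_dot)
u_switch = smoothed_switching(s)   # switching part of the control input
print(f"s = {s:.3f}, switching control = {u_switch:.3f}")
```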

Keywords: quadrotor UAV, fast terminal sliding mode, second order sliding mode

Procedia PDF Downloads 547
2077 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience rating technique in actuarial science that can be seen as one of the quantitative tools allowing insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (claims incurred but not reported to the insurer), where credibility theory can be used to estimate the claim size amount. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), one of the suitable approximation methods for such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and a mean squared error comparison is made between the Bayesian premium estimators under the above loss functions.
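
A minimal Python sketch of the Monte Carlo comparison logic is given below. It uses an exponential-gamma conjugate toy model (not the paper's Lindley claim distribution, which is why the Lindley approximation is needed there) purely to show how the Bayes estimator under squared error loss (posterior mean) and under entropy loss (inverse of the posterior mean of 1/θ) can be compared by mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, a, b = 2.0, 3.0, 1.5      # true risk parameter and Gamma(a, b) prior (rate b)
n_claims, n_rep, n_post = 20, 2000, 5000

se_err, ent_err = [], []
for _ in range(n_rep):
    x = rng.exponential(scale=1.0 / theta_true, size=n_claims)     # claim data
    # Conjugate posterior: theta | x ~ Gamma(a + n, rate = b + sum(x))
    post = rng.gamma(shape=a + n_claims, scale=1.0 / (b + x.sum()), size=n_post)
    est_se = post.mean()                  # Bayes estimator under squared error loss
    est_ent = 1.0 / (1.0 / post).mean()   # Bayes estimator under entropy loss
    se_err.append((est_se - theta_true) ** 2)
    ent_err.append((est_ent - theta_true) ** 2)

print("MSE, squared error loss :", np.mean(se_err))
print("MSE, entropy loss       :", np.mean(ent_err))
```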

Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation

Procedia PDF Downloads 334
2076 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In the vulnerability analysis, penetration equations are usually used to obtain the penetration depth and to check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crews. The penetration equations are derived from penetration experiments, which require a long time and great effort. Moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, the targets must be modeled and the input parameters selected carefully in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved when adopting ANSYS in penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to obtain optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a larger penetration depth than that with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to obtain for security reasons, and only published papers provide them, for a limited set of target materials. The next step of this research is to generalize the approach so as to anticipate the penetration depth by interpolating between known penetration experiments. The results may not be accurate enough to replace penetration experiments, but such simulations can be used in the early, modelling-and-simulation stage of the AGCV design process.
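
The sketch below (Python, with placeholder numbers rather than the paper's data) illustrates the kind of bookkeeping behind the sensitivity study: compare simulated penetration depths from a mesh-size sweep against an experimental reference and trade accuracy off against calculation time.

```python
import numpy as np

# Hypothetical results of an ANSYS mesh-size sweep (placeholder numbers, not the paper's data)
mesh_mm   = np.array([0.9, 0.8, 0.7, 0.6, 0.5])       # element size
depth_sim = np.array([31.2, 32.0, 32.9, 33.5, 34.1])  # simulated penetration depth [mm]
cpu_hours = np.array([1.0, 1.6, 2.7, 4.5, 7.8])       # calculation time per run
depth_exp = 33.8                                       # experimental reference depth [mm]

abs_error = np.abs(depth_sim - depth_exp)              # per-run error against experiment
rms_error = np.sqrt(np.mean((depth_sim - depth_exp) ** 2))
for m, e, t in zip(mesh_mm, abs_error, cpu_hours):
    print(f"mesh {m:.1f} mm: |error| = {e:.2f} mm, time = {t:.1f} h")
print(f"RMS error over the sweep: {rms_error:.2f} mm")

# Simple trade-off rule: the coarsest mesh whose error is within 0.5 mm of the best run
candidates = mesh_mm[abs_error <= abs_error.min() + 0.5]
print("acceptable mesh sizes:", candidates, "-> choose the coarsest:", candidates.max())
```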

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 401
2075 Application of Grey Theory in the Forecast of Facility Maintenance Hours for Office Building Tenants and Public Areas

Authors: Yen Chia-Ju, Cheng Ding-Ruei

Abstract:

This study took a case office building as its subject and explored the responsive work-order repair requests for facilities and equipment in offices and public areas using grey theory, with the purpose of providing related office building owners, executive managers, property management companies, and mechanical and electrical companies with a reference for deciding on and assessing forecast models. The important conclusions of this study are summarized as follows: 1. Grey relational analysis ranks the importance of the repair numbers of six categories, namely, power systems, building systems, water systems, air conditioning systems, fire systems and manpower dispatch, in that order. In terms of facilities maintenance importance, the order is power systems, building systems, water systems, air conditioning systems, manpower dispatch and fire systems. 2. GM(1,N) and the regression method took maintenance hours as the dependent variable and repair number, leased area and number of tenants as independent variables, and conducted single-month forecasts based on 12 data points from January to December 2011. From the verification results, the mean absolute error and average accuracy of GM(1,N) were 6.41% and 93.59%; the mean absolute error and average accuracy of the regression model were 4.66% and 95.34%, indicating that both have highly accurate forecast capability.
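
For illustration, the Python sketch below fits a univariate GM(1,1) grey model, the simpler building block of the GM(1,N) model used in the study, to twelve months of placeholder maintenance-hour data and reports the mean absolute percentage error and average accuracy.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to series x0 and forecast n_ahead further points."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)    # development coefficient and grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse AGO restores the series

# Twelve months of maintenance hours (placeholder data, not the study's records)
hours = np.array([420, 415, 430, 452, 441, 460, 455, 470, 468, 480, 475, 490], dtype=float)
fitted = gm11_fit_predict(hours, n_ahead=1)
mape = np.mean(np.abs((fitted[:12] - hours) / hours)) * 100
print(f"MAPE = {mape:.2f}%, average accuracy = {100 - mape:.2f}%, next-month forecast = {fitted[-1]:.1f} h")
```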

Keywords: grey theory, forecast model, Taipei 101, office buildings, property management, facilities, equipment

Procedia PDF Downloads 445
2074 A Coordination of Supply Chain Disruption in Different Types of Manufacturing Environments: A Case Study of Sugar Manufacturing Company

Authors: Max Moleke, Gilbert Mbonde

Abstract:

Coordinating supply chain processes within a manufacturing environment is a very critical aspect of any organization. Nowadays, most manufacturing industries tend to look only at financial indicators, while in real-life situations on the shop floor a number of supply chain disruptions are being ignored. In this work, we looked at different types of supply chain disruption and their various impacts within the organization. A number of industrial engineering tools are employed, including multifactor productivity, activity-on-arrow analysis and rescheduling plans. The final result shows that supply chain disruption varies with the geographical area in which the production plant is operating.

Keywords: supply chain, disruptions, flow shop scheduling, uncertainty

Procedia PDF Downloads 429
2073 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations

Authors: Jyh Sheen, Yong-Lin Wang

Abstract:

This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. Another development of the mixture rule is to calculate the permittivity of the ceramic from measurements on the composite. To do this, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants below 40 at microwave frequency. Similar research has been done for other well-known mixture rules, showing that both the physical curve matching with experimental results and a low potential theory error are important for improving the calculation accuracy. Recently, a modified mixture equation for high dielectric constant ceramics at microwave frequency has also been presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high dielectric constant measurements. However, the accuracy of this modified equation for other high dielectric constant materials is still not clear. Therefore, the five well-known mixing rules are selected again to understand their application to other high dielectric constant ceramics. Another high dielectric constant ceramic, TiO2 with a dielectric constant of 100, was chosen for this research, and the theoretical error equations are derived. In addition to the theoretical research, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications. In this research, its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to understand the curve-matching condition of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule has shown very good curve matching with the measurement data and a low theoretical error. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) with those mixing equations from the measured dielectric constants of the composites, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule has also shown good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to microwave dielectric property measurements of other high dielectric constant ceramic materials in the future.
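
As a hedged illustration of the "calculate the filler permittivity from the composite" idea, the Python sketch below uses the classical Lichtenecker logarithmic mixture rule (one of the well-known rules, not necessarily the modified rule selected in this work) and inverts it to recover the ceramic permittivity from synthetic composite measurements.

```python
import numpy as np

def lichtenecker_effective(eps_f, eps_m, v_f):
    """Logarithmic (Lichtenecker) mixture rule for the composite permittivity."""
    return np.exp(v_f * np.log(eps_f) + (1.0 - v_f) * np.log(eps_m))

def lichtenecker_filler(eps_eff, eps_m, v_f):
    """Invert the rule to estimate the filler (ceramic) permittivity from a composite measurement."""
    return np.exp((np.log(eps_eff) - (1.0 - v_f) * np.log(eps_m)) / v_f)

eps_matrix = 2.3                           # polyethylene at microwave frequency (typical handbook value)
v_f = np.array([0.10, 0.20, 0.30])         # filler volume fractions
eps_measured = lichtenecker_effective(100.0, eps_matrix, v_f)   # synthetic "measurements"

est = lichtenecker_filler(eps_measured, eps_matrix, v_f)
print("estimated filler permittivity:", np.round(est, 1))       # recovers ~100 by construction
```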

Keywords: microwave measurement, dielectric constant, mixture rules, composites

Procedia PDF Downloads 367
2072 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus

Authors: Aisling Davies, Ciara Greene

Abstract:

Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may also be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that would increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task would induce the most mind-wandering episodes, and to determine whether mind-wandering related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across the four versions of the task (Standard, Medium Slow, Slow, and Very Slow), and participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial duration rate, and the proportion of mind-wandering related errors increased, until the Very Slow condition, where the extra decrease in duration no longer had an effect. Distinct reaction time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal duration of trial presentation for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART, and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering during this task. A notable finding in relation to the standard SART was that, while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.

Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)

Procedia PDF Downloads 182
2071 Single Valued Neutrosophic Hesitant Fuzzy Rough Set and Its Application

Authors: K. M. Alsager, N. O. Alshehri

Abstract:

In this paper, we propose the notion of the single valued neutrosophic hesitant fuzzy rough set, obtained by combining the single valued neutrosophic hesitant fuzzy set and the rough set. This combination is a powerful tool for dealing with uncertainty, granularity and incompleteness of knowledge in information systems. We present both the definition and some basic properties of the proposed model. Finally, we give a general approach that is applied to a decision making problem in disease diagnosis and demonstrate the effectiveness of the approach with a numerical example.

Keywords: single valued neutrosophic fuzzy set, single valued neutrosophic fuzzy hesitant set, rough set, single valued neutrosophic hesitant fuzzy rough set

Procedia PDF Downloads 274
2070 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study for a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
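
A minimal sketch of such a parametric study is shown below in Python. The function simulate_vector_guidance is a hypothetical placeholder for the 3DOF trajectory simulation, and the gain ranges and constraint tolerances are assumptions for illustration only.

```python
import numpy as np

def simulate_vector_guidance(k1, k2):
    """Hypothetical placeholder for a 3DOF MaRV simulation with Vector guidance gains (k1, k2).

    A real implementation would integrate the reentry trajectory and return the
    terminal miss distance [m] and impact angle error [deg]; here only the interface is fixed.
    """
    raise NotImplementedError

def permissible_region(k1_grid, k2_grid, miss_max=5.0, angle_err_max=1.0):
    """Grid search over the two closed-loop gains; keep pairs meeting both terminal constraints."""
    feasible = []
    for k1 in k1_grid:
        for k2 in k2_grid:
            miss, angle_err = simulate_vector_guidance(k1, k2)
            if miss <= miss_max and angle_err <= angle_err_max:
                feasible.append((k1, k2, miss, angle_err))
    return feasible

# Example sweep (bounds and tolerances are illustrative, not the paper's values)
k1_grid = np.linspace(1.0, 10.0, 19)
k2_grid = np.linspace(1.0, 10.0, 19)
# region = permissible_region(k1_grid, k2_grid)
```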

Keywords: Marv guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 427
2069 Strategic Risk Issues for Film Distributors of Hindi Film Industry in Mumbai: A Grounded Theory Approach

Authors: Rashmi Dyondi, Shishir K. Jha

Abstract:

The purpose of the paper is to address the strategic risk issues surrounding Hindi film distribution in Mumbai for a film distributor, who acts as an entrepreneur when launching a product (movie) in the market (film territory). The paper undertakes a fundamental review of films and risk in the Hindi film industry and applies the Grounded Theory technique to understand the complex phenomena of risk-taking behavior of film distributors (both independent and studios) in Mumbai. Rich in-depth interviews with distributors are coded to develop core categories through constant comparison, leading to a conceptualization of the phenomena of interest. This paper is a first-of-its-kind attempt to understand the risk behavior of a distributor, which is akin to entrepreneurial risk behavior under conditions of uncertainty. Unlike the extensive scholarly work on the dynamics of the Hollywood motion picture industry, the Hindi film industry has remained an under-researched area. In particular, how film distributors perceive risk is unexplored for the Hindi film industry. Films are unique experience products, and the film distributor acts as an entrepreneur assuming high risks given the uncertainty in the motion picture business. With the entry of mighty corporate studios and astronomical film budgets posing serious business threats to independent distributors, there is a need for an in-depth qualitative enquiry (applying the grounded theory technique) to unravel the definition of risk for the independent distributors in Mumbai vis-à-vis the corporate studios. The need for good content was a challenge common to both groups in the present state of the industry; however, corporate studios, with their distinct ideologies, focus on their own productions and financial power, faced a different set of challenges than the independents (such as achieving sustainability in business). Softer issues like market goodwill and relations with producers, honesty in business dealings and transparency came out as clear markers of the long-run success of independents. The findings from the qualitative analysis stress the different elements of risk and challenges as perceived by the two groups of distributors in the Hindi film industry and provide a future research agenda for the empirical investigation of the determinants of the box-office success of Hindi films distributed in Mumbai.

Keywords: entrepreneurial risk behavior, film distribution strategy, Hindi film industry, risk

Procedia PDF Downloads 313
2068 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as a dynamical system. However, the calibration process of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at the county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
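
The sketch below shows, in Python with scikit-learn, how the 5-fold cross-validated RMSEP and MAEP of a Random Forest regressor can be computed; the feature matrix is synthetic and only stands in for the 720 USDA county records.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

# Placeholder design matrix: one row per county-year record of climatic predictors,
# y = corn yield (the study uses 720 USDA records; here we draw synthetic data).
rng = np.random.default_rng(42)
X = rng.normal(size=(720, 10))
y = 10.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=720)

rmsep, maep = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmsep.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
    maep.append(np.mean(np.abs(pred - y[test_idx])))

print(f"RMSEP = {np.mean(rmsep):.3f}, MAEP = {np.mean(maep):.3f}")
```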

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
2067 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents a validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
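
For reference, the Python sketch below computes the three validation metrics named above from paired observed and predicted concentrations, using their commonly cited definitions; the sample values are illustrative, not the MUST data.

```python
import numpy as np

def dispersion_metrics(obs, pred):
    """Common dispersion-model validation metrics for paired observations and predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))   # fractional bias
    mg = np.exp(np.log(obs).mean() - np.log(pred).mean())                  # geometric mean bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())         # normalized mean square error
    return fb, mg, nmse

# Illustrative concentrations (arbitrary units), not the MUST dataset itself
observed  = [1.2, 0.8, 2.5, 3.1, 0.6, 1.9]
predicted = [1.0, 0.9, 2.2, 3.4, 0.7, 1.6]
fb, mg, nmse = dispersion_metrics(observed, predicted)
print(f"FB = {fb:.3f}, MG = {mg:.3f}, NMSE = {nmse:.3f}")
```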

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 136
2066 Fuzzy Logic in Detecting Children with Behavioral Disorders

Authors: David G. Maxinez, Andrés Ferreyra Ramírez, Liliana Castillo Sánchez, Nancy Adán Mendoza, Carlos Aviles Cruz

Abstract:

This research describes the use of fuzzy logic in the detection, assessment, analysis and evaluation of children with behavioral disorders. It shows how to acquire and analyze input-variable data that are ambiguous, vague and full of uncertainty in order to obtain an accurate assessment result for each of the typologies presented by children with behavior problems. The behavior disorders analyzed in this paper are: hyperactivity (H), attention deficit with hyperactivity (DAH), conduct disorder (TD) and attention deficit (AD).
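
A minimal Python sketch of the fuzzy machinery implied by the keywords (triangular membership functions, rule aggregation and centroid defuzzification) is shown below; the universes, rule strengths and scales are illustrative assumptions, not the system described in the paper.

```python
import numpy as np

# Universe of discourse for the output "hyperactivity (H) score" on a 0-10 scale
x = np.linspace(0, 10, 1001)

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

low, medium, high = trimf(x, 0, 2, 4), trimf(x, 3, 5, 7), trimf(x, 6, 8, 10)

# Suppose the (vague, uncertain) input ratings fired the rules with these strengths
w_low, w_med, w_high = 0.1, 0.6, 0.3
aggregated = np.maximum.reduce([np.minimum(low, w_low),
                                np.minimum(medium, w_med),
                                np.minimum(high, w_high)])

centroid = np.sum(aggregated * x) / np.sum(aggregated)   # centroid defuzzification
print(f"defuzzified hyperactivity score: {centroid:.2f} / 10")
```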

Keywords: alteration, behavior, centroid, detection, disorders, economic, fuzzy logic, hyperactivity, impulsivity, social

Procedia PDF Downloads 563
2065 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoue Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoue Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoue Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), coefficient of correlation (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1 day, t+2 days, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoue Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate forecast horizon of the water level in the Nokoue Lake basin, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
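
The sketch below shows, in Python with Keras, the general shape of such a pipeline: sliding windows of past daily levels are mapped to the level at t+3 days by a small LSTM. The synthetic series, window length and network size are assumptions for illustration only, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, n_lags=30, horizon=3):
    """Build (X, y) pairs: n_lags past daily levels -> level at t + horizon days."""
    X, y = [], []
    for i in range(len(series) - n_lags - horizon + 1):
        X.append(series[i:i + n_lags])
        y.append(series[i + n_lags + horizon - 1])
    return np.array(X)[..., None], np.array(y)

# Synthetic daily water levels standing in for the Nokoue Lake record
t = np.arange(2000)
levels = 2.0 + 0.5 * np.sin(2 * np.pi * t / 365) + 0.05 * np.random.randn(len(t))
X, y = make_windows(levels)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))
print(f"in-sample RMSE for the t+3 day horizon: {rmse:.3f} m")
```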

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 64
2064 A Modified Shannon Entropy Measure for Improved Image Segmentation

Authors: Mohammad A. U. Khan, Omar A. Kittaneh, M. Akbar, Tariq M. Khan, Husam A. Bayoud

Abstract:

The Shannon entropy measure has been widely used for measuring uncertainty. In practical settings, however, a histogram is used to estimate the underlying distribution, and the histogram depends on the number of bins used. In this paper, a modification is proposed that makes the histogram-based Shannon entropy consistent. To demonstrate the benefits, two applications are picked from medical image processing. Simulations are carried out to show the superiority of this modified measure for the image segmentation problem. The improvement may be attributed to its robustness to uneven backgrounds in images.
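
The bin dependence that motivates the modification can be reproduced in a few lines of Python: the histogram-based Shannon entropy of the same sample grows as the number of bins increases.

```python
import numpy as np

def histogram_entropy(samples, bins):
    """Shannon entropy (in bits) of the empirical distribution estimated with a histogram."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
pixels = rng.normal(loc=120, scale=30, size=50_000)   # stand-in for image intensities

for bins in (8, 32, 128, 512):
    print(f"{bins:4d} bins -> entropy = {histogram_entropy(pixels, bins):.3f} bits")
# The estimate grows with the bin count, which is the inconsistency the proposed
# modified measure is designed to remove.
```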

Keywords: Shannon entropy, medical image processing, image segmentation, modification

Procedia PDF Downloads 497
2063 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, however, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, the adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data in order to obtain an accurate estimation of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form and, on the other hand, the achievement of strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
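
A bare-bones sketch of the stochastic Ensemble Kalman Filter analysis step is given below in Python. A linear observation operator stands in for the shallow-water forward solve that would map bed parameters to free-surface observations in the actual method, and all dimensions are illustrative.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    X : (n_state, n_ens) ensemble of bed-topography parameters
    y : (n_obs,) free-surface observations, H : (n_obs, n_state), R : (n_obs, n_obs)
    """
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                        # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y_pert - H @ X)

# Tiny illustrative setup: 5 bed nodes, 3 surface gauges, 50 ensemble members
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.5, size=(5, 50))
H = np.eye(3, 5)
R = 0.01 * np.eye(3)
y = np.array([0.2, -0.1, 0.05])
X_updated = enkf_analysis(X, y, H, R, rng)
print("ensemble mean after update:", np.round(X_updated.mean(axis=1), 3))
```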

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 130
2062 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. Many visual SLAM systems, such as ORBSLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in actual situations, the constant-velocity motion model is often not satisfied, which may lead to a large deviation between the obtained initial pose and the real value and may introduce errors into the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in the rotational angular velocity and in the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORBSLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that our proposed method gives a more accurate initial pose estimation, and the accuracy of the ORBSLAM3 system is improved by 6.61% and 6.46%, respectively, on the two test sequences.
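
The prediction step can be sketched as follows in Python (using SciPy rotations): the last three poses give first differences of the rotation vector and translation, and a constant-acceleration extrapolation yields the initial guess for the next frame. The toy trajectory and the exact parameterization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(R_list, t_list):
    """Extrapolate the next camera pose from the last three, assuming the increments
    of the rotation vector and of the translation change at a constant rate."""
    w1 = (R_list[-2].inv() * R_list[-1]).as_rotvec()   # latest angular increment
    w0 = (R_list[-3].inv() * R_list[-2]).as_rotvec()   # previous angular increment
    v1 = t_list[-1] - t_list[-2]
    v0 = t_list[-2] - t_list[-3]
    w_pred = w1 + (w1 - w0)      # constant "angular acceleration" assumption
    v_pred = v1 + (v1 - v0)      # constant translational acceleration assumption
    R_pred = R_list[-1] * R.from_rotvec(w_pred)
    t_pred = t_list[-1] + v_pred
    return R_pred, t_pred

# Toy trajectory: rotation speeding up about z, translation accelerating along x
R_hist = [R.from_rotvec([0, 0, a]) for a in (0.00, 0.02, 0.05)]
t_hist = [np.array([x, 0.0, 0.0]) for x in (0.00, 0.10, 0.25)]
R_next, t_next = predict_next_pose(R_hist, t_hist)
print("predicted rotvec:", np.round(R_next.as_rotvec(), 3), "predicted t:", np.round(t_next, 3))
```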

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 95
2061 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we have had the opportunity to collaborate in noise assessment procedures in which noise assessments by different laboratories were performed simultaneously, and we identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods for predictive noise modelling of planned projects. We analyzed the input data, methods and results of predictive noise modelling for two planned industrial projects, each done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the validation of the acoustic models was performed by noise measurements of the surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing sources greatly increased the comparability of the noise modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable: differences were below 5 dBA, which was the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using the formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict the noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly. 2) Noise measurements are an important tool for minimizing the noise assessment errors of planned projects and, in cases of predictive noise modelling, should be performed at least for the validation of the acoustic model. 3) National guidelines should be issued on the appropriate data, methods, noise source digitalization, validation of the acoustic model, etc., in order to unify the predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
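
For context, the Python sketch below shows a deliberately simplified point-source estimate in the spirit of ISO 9613-2, keeping only geometric divergence and atmospheric absorption; real assessments add ground effect, screening and meteorological corrections, which is exactly where laboratory practices tend to diverge. All numbers are illustrative.

```python
import numpy as np

def receiver_level(Lw, distance_m, alpha_db_per_km=2.0):
    """Very simplified free-field estimate: geometric divergence plus atmospheric
    absorption only (no ground effect, barriers, reflections or meteorology)."""
    A_div = 20.0 * np.log10(distance_m) + 11.0        # spherical spreading, reference 1 m
    A_atm = alpha_db_per_km * distance_m / 1000.0     # atmospheric absorption
    return Lw - A_div - A_atm

for d in (50, 100, 200, 400):
    print(f"d = {d:4d} m -> Lp = {receiver_level(Lw=105.0, distance_m=d):.1f} dB")
# Small differences in the source description or in the extra attenuation terms shift
# these values by several dB, which is one source of the interlaboratory spread.
```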

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 234
2060 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 538
2059 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh

Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin

Abstract:

In the first quarter of the twenty-first century, the most substantial global concern is environmental contamination, and it has been prioritized by both the national and international community. Keeping this crucial fact in mind, this study applied different statistical and econometric methods to identify whether the gross national income of the country has a significant impact on electricity production from nonrenewable sources and on different air pollutants such as carbon dioxide, nitrous oxide, and methane emissions. In addition, a primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing different statistical properties of the variables, the study concludes that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emissions in Bangladesh in both the short run and the long run. This conclusion is based on the findings of ordinary least squares estimations, ARDL bound tests, short-run causality analysis, the error correction model, and the other pre-diagnostic and post-diagnostic tests employed in the structural model. Moreover, the study indicates that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and that emissions will increase up to the optimal peak; the compositional effect will then force emissions to decrease, and environmental quality will be restored in the long run.
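
The core EKC test can be sketched as a quadratic regression of emissions on income: the hypothesis is consistent with a positive linear and a negative quadratic coefficient. The Python example below uses statsmodels on synthetic data shaped like an inverted U; it is not the study's ARDL specification.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for annual GNI per capita and CO2 emissions (not the study's data):
# emissions rise with income, peak, then fall -- the inverted-U shape the EKC predicts.
rng = np.random.default_rng(7)
gni = np.linspace(400, 3000, 50)
co2 = -2.0 + 0.004 * gni - 6.5e-7 * gni**2 + rng.normal(scale=0.1, size=gni.size)

X = sm.add_constant(np.column_stack([gni, gni**2]))
fit = sm.OLS(co2, X).fit()
b1, b2 = fit.params[1], fit.params[2]
print("coefficients:", fit.params)
print("EKC-consistent signs (b1 > 0, b2 < 0):", b1 > 0 and b2 < 0)
print("turning point (income at peak emissions):", -b1 / (2 * b2))
```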

Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model

Procedia PDF Downloads 150
2058 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as the information carrier, due to their capability of propagating over long distances. In addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the frequency bandwidth reliable for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. The results show a high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
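
A minimal Python sketch of coherent BPSK demodulation and BER estimation over an idealized noisy channel is shown below; perfect carrier synchronization is assumed, whereas the Lamb-wave channel adds dispersion and multimode propagation that the paper's platform has to handle. Carrier frequency, sampling rate and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
fc, fs, cycles_per_bit, n_bits = 200e3, 2e6, 20, 2000    # carrier, sampling rate, bit length
samples_per_bit = int(fs / fc * cycles_per_bit)
t_bit = np.arange(samples_per_bit) / fs

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                                   # BPSK: 0 -> -1, 1 -> +1
carrier = np.cos(2 * np.pi * fc * t_bit)
tx = np.concatenate([s * carrier for s in symbols])

rx = tx + rng.normal(scale=0.8, size=tx.size)            # additive channel noise

# Coherent demodulation: multiply by the (synchronised) carrier and integrate per bit
ref = np.tile(carrier, n_bits)
correl = (rx * ref).reshape(n_bits, samples_per_bit).sum(axis=1)
bits_hat = (correl > 0).astype(int)                      # zero threshold for BPSK

ber = np.mean(bits_hat != bits)
print(f"bit error rate = {ber:.4f}")
```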

Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate

Procedia PDF Downloads 260
2057 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We have developed PE training materials based on the data collected in a previous study. The training material included the special error types of the MT output and the error types that our Chinese students studying Spanish could not detect in last year's experiment. This year we performed a pilot study in order to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students who study the Spanish language. We used screen recording to capture these sessions and made note of every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it by themselves (human translation), then used Google Translate to translate the text and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A could identify and correct the errors faster than Group B, and Group A did especially well on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From the results of this study, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we could focus on the students' struggles during their Spanish studies and complete the PE training materials for teaching Chinese students who are learning Spanish with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 144
2056 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, the accident may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operational research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
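
The greedy loop described above can be sketched compactly in Python: pick the cell with the highest effectiveness value, then apply the Bayesian update for an unsuccessful look. False alarms and the history-dependent effectiveness of the full model are omitted, and the numbers are illustrative.

```python
import numpy as np

def greedy_search_route(p, d, c, n_steps=6):
    """Greedy discrete search: repeatedly visit the cell with the highest
    effectiveness p*d/c, then apply the Bayesian update for an unsuccessful look
    (false-negative probability 1-d; false alarms are ignored in this sketch)."""
    p = np.array(p, dtype=float)
    route = []
    for _ in range(n_steps):
        score = p * np.asarray(d) / np.asarray(c)   # current search effectiveness values
        j = int(np.argmax(score))
        route.append(j)
        miss = 1.0 - p[j] * d[j]                    # probability this look finds nothing
        p[j] = p[j] * (1.0 - d[j]) / miss           # posterior after an unsuccessful look
        others = np.arange(len(p)) != j
        p[others] = p[others] / miss
    return route, p

prior = [0.40, 0.25, 0.20, 0.15]   # prior target-location probabilities
detect = [0.8, 0.6, 0.9, 0.5]      # sensor detection probability per look
cost = [1.0, 1.0, 2.0, 1.5]        # relative search cost per cell
route, posterior = greedy_search_route(prior, detect, cost)
print("visit order:", route)
print("posterior after unsuccessful looks:", np.round(posterior, 3))
```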

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 172
2055 Layouting Phase II of New Priok Using Adaptive Port Planning Frameworks

Authors: Mustarakh Gelfi, Tiedo Vellinga, Poonam Taneja, Delon Hamonangan

Abstract:

The development of New Priok/Kalibaru as an expansion terminal of the old port is being carried out by IPC (Indonesia Port Corporation) together with its subsidiary, the port developer PT Pengembangan Pelabuhan Indonesia. As stated in the master plan, of the two phases that had been proposed, phase I has taken shape, and Container Terminal I has even been in operation since 2016. The development was principally planned to be divided into Phase I (2013-2018), consisting of 3 container terminals and 2 product terminals, and Phase II (2018-2023), consisting of 4 container terminals. In fact, the master plan has had to be changed due to some major uncertainties that escaped prediction. This study is focused on the design scenario of phase II (2035 onwards) to deal with future uncertainty. The outcome is a robust design of phase II of the Kalibaru Terminal that takes future changes into account. Flexibility has to be a major goal in such a large infrastructure project as New Priok in order to deal with and manage future uncertainty. The phasing of the project needs to be adapted and re-examined frequently before it becomes irrelevant to future challenges. One of the frameworks developed by experts in port planning is Adaptive Port Planning (APP) with scenario-based planning. The idea behind the APP framework is the adaptation that might be needed at any moment as an answer to a challenge. It is a continuous procedure that basically aims to increase the lifespan of waterborne transport infrastructure by increasing flexibility in the planning, contracting and design phases. Other methods used in this study are brainstorming with the port authority, a desk study, interviews and a site visit to the real project. The result of the study is expected to give the port authority of Tanjung Priok insight into the future outlook and how it will impact the design of the port, together with guidelines for designing in an uncertain environment. Solutions for flexibility can be divided into: 1 - Physical solutions, i.e., all the items related to hard infrastructure in the projects; common approaches in this type of solution are modularity, standardization, multi-functionality, shorter and longer design lifetimes, reusability, etc. 2 - Non-physical solutions, usually related to the planning processes, decision making and management of the projects. To conclude, the APP framework seems robust enough to deal with the problem of designing phase II of the New Priok project for such a long period.

Keywords: Indonesia port, port's design, port planning, scenario-based planning

Procedia PDF Downloads 240
2054 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee

Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado

Abstract:

Compliant mechanisms are designed to deform in a controlled manner in response to external forces, utilizing the flexibility of their components to store elastic potential energy during deformation and gradually releasing it upon returning to the original form. This article explores the design of a knee orthosis intended to assist users during the stand-up motion. The orthosis makes use of a compliant mechanism to balance the user's weight, thereby minimizing the strain on the leg muscles during the stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism seen as a spring. The geometric properties are obtained through the numerical modeling of the spring once the desired stiffness and safety factor values have been attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and in the simulation (239.8 MPa), with a 4.16% error. Both analyses give consistent safety factors, 1.02 for the mathematical approach and 1.1 for the simulation, with a 7.84% margin of error. The spring's stiffness, calculated at 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts to advance the understanding and treatment of knee osteoarthritis.
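
As a back-of-the-envelope illustration of the energy-balancing idea (not the paper's design calculation), the Python sketch below sizes a torsional stiffness so that the elastic energy released over the knee's range of motion offsets the rise in gravitational potential energy during stand-up; all numbers are assumed, and a real orthosis would only balance part of the load.

```python
import numpy as np

# Illustrative numbers only (not the paper's design values)
m_user = 80.0                 # supported body mass [kg]
g = 9.81                      # gravitational acceleration [m/s^2]
delta_h = 0.25                # rise of the centre of mass during stand-up [m]
theta_range = np.deg2rad(90)  # knee rotation from seated to standing [rad]

# Energy balance: 0.5 * k * theta_range**2  ≈  m * g * delta_h  ->  solve for k
k = 2 * m_user * g * delta_h / theta_range**2
print(f"required torsional stiffness ≈ {k:.1f} N·m/rad")
```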

Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses

Procedia PDF Downloads 37
2053 Exploring Affordable Care Practs in Nigeria’s Health Insurance Discourse

Authors: Emmanuel Chinaguh, Kehinde Adeosun

Abstract:

Nigerians die prematurely, with a life expectancy of 55.75 years, 17.45 years below the world average of 73.2 (Worldometer, 2020). This is due, among other factors, to the country's limited access to high-quality healthcare. To increase access to good and affordable healthcare services, the National Health Insurance Authority (NHIA) Bill 2022 – which repealed the National Health Insurance Scheme Act 2004 – was passed into law. Applying Jacob Mey's (2001) pragmatic act (pract) theory, this study explores how the NHIA seeks to actualise these healthcare goals by characterising the general situational prototypes, or pragmemes, and pragmatic acts in institutional communications. Data was sourced from the NHIA operational guidelines, which have 147 pages and four sections, and from posts shared on the NHIA Nigeria Twitter handle, which has 14,200 followers. Digital humanities tools, such as AntConc and Voyant, were engaged in the data analysis for text encoding and data visualisation. This study identifies the following discourse tokens in the data: advertisement and programmes, standards and accreditation, records and information, and offences and penalties. Advertisement and programmes pract facilitating, propagating, prospecting, advising and informing; standards and accreditation, and records and information, pract stating, informing and instructing; and offences and penalties pract stating and sanctioning. These practs combine to advance the goals of affordable care and universal access to quality healthcare services. The pragmatic acts were marked by these pragmatic tools: shared situational knowledge (SSK), relevance (REL), reference (REF) and inference (INF). This paper adds to the understanding of health insurance discourse in Nigeria as a mediated social practice that promotes the health of Nigerians.

Keywords: affordable care, NHIA, Nigeria’s health insurance discourse, pragmatic acts

Procedia PDF Downloads 85
2052 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate to identify both Protein-coding and Non-coding regions. Alignment-free techniques can overcome the limitation of identifying both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
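
The iterative estimation described above can be sketched as plain logistic regression trained by gradient ascent on the log-likelihood, as in the Python example below; the six features, the labels and the simple dynamic threshold are synthetic stand-ins for the transcriptome data and do not reproduce the PNRI model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=390):
    """Plain gradient ascent on the mean log-likelihood with a sigmoid activation,
    mirroring the iterative parameter-vector estimation described above."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (y - p) / n        # gradient of the mean log-likelihood
        w += lr * grad
    return w

# Six per-sequence features per transcript region, synthetic labels (1 = protein-coding)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
true_w = np.array([0.04, 0.5, 0.7, 0.9, 1.2, 2.5])
y = (sigmoid(X @ true_w) > rng.uniform(size=5000)).astype(float)

w_hat = fit_logistic(X, y)
pred = sigmoid(X @ w_hat)
threshold = np.quantile(pred, 1 - y.mean())      # a simple dynamic threshold
acc = np.mean((pred >= threshold).astype(float) == y)
print("estimated coefficients:", np.round(w_hat, 3))
print(f"training accuracy at the dynamic threshold: {acc:.3f}")
```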

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
2051 Web Application for Evaluating Tests in Distance Learning Systems

Authors: Bogdan Walek, Vladimir Bradac, Radim Farana

Abstract:

Distance learning systems offer useful methods of learning and usually contain a final course test or another form of test. This paper proposes a web application for evaluating tests in distance learning systems using an expert system. The proposed web application is appropriate for didactic tests or tests whose results feed into subsequent follow-up courses. The web application works with test questions and uses an expert system and the LFLC tool for test evaluation. After the test evaluation, the results are visualized and shown to the student.

Keywords: distance learning, test, uncertainty, fuzzy, expert system, student

Procedia PDF Downloads 486
2050 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend hugely on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of solar dryers. For the development of such robust hybrid dryers, and to ensure the quality and shelf-life of paddy grains, the optimization of process parameters becomes extremely critical. Investigation of the moisture distribution profile within the grains becomes necessary in order to avoid over-drying or under-drying of food grains in the hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool in providing better insight into moisture migration during the drying process. Hence, the present work aims to optimize the process parameters and to develop a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model for predicting the moisture profile. Furthermore, the optimization of process parameters (power level, air velocity and moisture content) was done using response surface methodology in the Design-Expert software. A 3D finite element model (FEM) predicting moisture migration in a single kernel for every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield and drying time of 42˚C, 62% and 86 min, respectively, having a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in the PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of the product. PV-integrated hybrid solar dryers can be employed as a potential and cutting-edge drying technology alternative for sustainable energy and food security.
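
As a simplified stand-in for the 3D COMSOL model, the Python sketch below solves one-dimensional Fickian moisture diffusion in a grain slab with explicit finite differences; the diffusivity, geometry and boundary conditions are assumed values for illustration only.

```python
import numpy as np

# One-dimensional Fickian drying of a grain slab, solved with explicit finite
# differences -- a deliberately simplified stand-in for the 3D COMSOL FEM model.
D = 1.0e-10          # effective moisture diffusivity [m^2/s] (assumed value)
L = 1.0e-3           # half-thickness of the kernel [m]
M0, Me = 0.25, 0.13  # initial and equilibrium moisture content (dry basis, assumed)

nx, dt = 51, 0.05
dx = L / (nx - 1)
assert D * dt / dx**2 <= 0.5                 # explicit-scheme stability condition
M = np.full(nx, M0)

t, t_end = 0.0, 86 * 60                      # simulate the 86 min optimum drying time
while t < t_end:
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])
    M[-1] = Me                               # surface in equilibrium with the drying air
    M[0] = M[1]                              # symmetry at the kernel centre
    t += dt

print(f"centre moisture after {t_end/60:.0f} min: {M[0]:.3f} (surface fixed at {Me})")
```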

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 150