Search results for: estimations of probability distributions
1589 Applications of Multivariate Statistical Methods on Geochemical Data to Evaluate the Hydrocarbons Source Rocks and Oils from Ghadames Basin, NW Libya
Authors: Mohamed Hrouda
Abstract:
Principal Component Analysis (PCA) was performed on a dataset comprising 41 biomarker concentrations from twenty-three core source rock samples and seven oil samples from different locations, with the objective of establishing the major sources of variance within the steranes, tricyclic terpanes, hopanes, and triaromatic steroids. This type of analysis can aid in deciding which molecular biomarker parameters for maturity, source facies or depositional environment should be plotted, because the principal component loadings plots tend to extract the biomarker variables related to maturity, source facies or depositional environment controls. Facies characterization separates the Silurian and Devonian source rock samples into three groups. Maturity evaluation of the source rock samples based on biomarker and aromatic hydrocarbon distributions indicates that not all the samples are strongly affected by maturity: the Upper Devonian samples from wells located in the northern part of the basin are immature, whereas the Lower Silurian samples are mature and have reached the main stage of the oil window, and the Lower Silurian source rock strata reveal a trend of increasing maturity towards the south and southwest of the Ghadames Basin. Most of the facies-based parameters employed in this project using biomarker distributions clearly separate the oil samples into three groups. Group I contains oil samples from wells within the Al-Wafa oil field located in the southwestern part of the basin, Group II contains oil samples collected from the Al-Hamada oil field complex in the south, and the third group contains oil samples collected from oil fields located in the north.
Keywords: Ghadamis basin, geochemistry, silurian, devonian
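As an illustration of the workflow described in this abstract, the following minimal sketch runs a standardised PCA with scikit-learn on a synthetic biomarker table; the data, column names and number of components are placeholders, not the Ghadames Basin dataset.

```python
# Illustrative PCA on a synthetic biomarker concentration table (placeholder data,
# not the Ghadames Basin samples).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_biomarkers = 30, 41   # 23 source rocks + 7 oils in the study; 41 biomarkers
X = pd.DataFrame(rng.lognormal(size=(n_samples, n_biomarkers)),
                 columns=[f"biomarker_{i}" for i in range(n_biomarkers)])

# Standardise so that absolute concentration scale does not dominate the variance
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)    # sample scores, used to group samples
loadings = pca.components_.T         # variable loadings, used to pick maturity/facies parameters

print("explained variance ratio:", pca.explained_variance_ratio_)
loading_df = pd.DataFrame(loadings[:, :2], index=X.columns, columns=["PC1", "PC2"])
# Biomarkers with the largest absolute PC1 loadings drive the first source of variance
print(loading_df.reindex(loading_df["PC1"].abs().sort_values(ascending=False).index).head())
```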
Procedia PDF Downloads 62
1588 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires evaluating the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: Approximate Bayesian computation (ABC), Continuous-Time Markov Chains, Sequential Monte Carlo, Particle Markov chain Monte Carlo (PMCMC)
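The following is a minimal, illustrative sketch of ABC rejection sampling for a CTMC rate parameter, using a simple pure-decay process simulated with the Gillespie algorithm rather than the Repressilator model analysed in the paper; the prior, summary statistic and tolerance are assumptions for demonstration only.

```python
# Illustrative ABC rejection sampler for the rate of a pure-decay CTMC (X -> X - 1),
# simulated with the Gillespie algorithm.
import numpy as np

rng = np.random.default_rng(1)

def gillespie_decay(k, x0=50, t_end=5.0):
    """Simulate X -> X - 1 with propensity k * X and return the count at t_end."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (k * x))
        if t > t_end:
            break
        x -= 1
    return x

# "Observed" data generated with a known rate, treating the likelihood as intractable
k_true = 0.4
observed = np.array([gillespie_decay(k_true) for _ in range(20)])

def abc_rejection(n_draws=3000, eps=2.0):
    """Keep prior draws whose simulated summary (mean count) is within eps of the data."""
    accepted = []
    for _ in range(n_draws):
        k = rng.uniform(0.01, 2.0)                            # uniform prior on the rate
        sim = np.array([gillespie_decay(k) for _ in range(20)])
        if abs(sim.mean() - observed.mean()) < eps:
            accepted.append(k)
    return np.array(accepted)

posterior = abc_rejection()
print(f"accepted {posterior.size} draws; ABC posterior mean {posterior.mean():.3f} (true {k_true})")
```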
Procedia PDF Downloads 202
1587 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories
Authors: Haj Najafi Leila, Tehranizadeh Mohsen
Abstract:
Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure to account for loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs under earthquake ground motions. The method deals with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system covering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistical factors of median and standard deviation and a presumed probability distribution. To supplement the proposed procedure and to display the straightforwardness of its application, one benchmark study has been conducted. Acceptable compatibility has been proven between the damage costs estimated by the newly proposed modal approach and by the frequently used stochastic approach for the entire building; however, at the story level, the insufficiency of employing a single modification factor to incorporate occurrence probability dependencies between stories has been revealed, owing to discrepant amounts of dependency between the damage costs of different stories. In addition, a greater contribution of dependency to the occurrence probability of loss could be concluded from the closer agreement of loss results in the higher stories than in the lower ones, whereas including only a limited number of cost modes provides an acceptable level of accuracy and avoids the time-consuming calculations required when many higher modes are included.
Keywords: dependency, story-cost, cost modes, engineering demand parameter
Procedia PDF Downloads 180
1586 On Fourier Type Integral Transform for a Class of Generalized Quotients
Authors: A. S. Issa, S. K. Q. AL-Omari
Abstract:
In this paper, we investigate certain spaces of generalized functions for the Fourier and Fourier type integral transforms. We discuss convolution theorems and establish certain spaces of distributions for the considered integrals. The new Fourier type integral is well-defined, linear, one-to-one and continuous with respect to certain types of convergences. Many properties and an inverse problem are also discussed in some detail.
Keywords: Boehmian, Fourier integral, Fourier type integral, generalized quotient
Procedia PDF Downloads 365
1585 Social Dimension of Air Transport Sustainable Development
Authors: Dimitrios J. Dimitriou, Maria F. Sartzetaki
Abstract:
Air transport links markets and individuals, making regions more competitive and promoting social and economic development. The assessment of this social contribution is the key objective of this paper, which focuses on defining the components of the social dimension and welfare metrics at the national scale. Following a top-down approach, the key dimensions that affect social welfare are presented. The aim is to provide estimations of the added value that air transport development brings to social issues and to present a methodological framework for measuring the contribution of transport development to the social value chain. Greece is the case study of this paper, providing results on the contribution of air transport infrastructure to national welfare. The key findings of the application are essential for managers and decision makers to support actions and plans towards the economic recovery of an economy with strong seasonal characteristics (because of tourism) that is suffering from recession.
Keywords: air transport, social coherence, resilient business development, socioeconomic impact
Procedia PDF Downloads 221
1584 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms
Authors: Deepti Kohli, Meeta Keswani Mehra
Abstract:
The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy using the probability voting approach while focusing on the aspect of lobby formation to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely, Left-wing, Moderate and Right-wing who comprise of the total voting population. Also, it is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby’s preferred policy. Four different lobby formation scenarios have been considered: no lobby formation, only Left, only Right and both Left and Right. The equilibrium policy platforms, amount of individual donations by agents to their respective lobbies and the contributions offered to the electoral candidates have been solved for under each of the above four cases. Since it is assumed that the agents cannot coordinate each other’s actions during the lobby formation stage, there exists a probability with which a lobby would be formed, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely under the cases of no lobby and both (extreme) formations but diverge under the cases of only one (Left or Right) lobby formation. This is because in the case of no lobby being formed, only the centripetal forces (emerging from the election-winning aspect) are present while in the case of both extreme (Left-wing and Right-wing) lobbies being formed, centrifugal forces (emerging from the lobby formation aspect) also arise but cancel each other out, again resulting in a pure policy convergence phenomenon. In contrast, in case of only one lobby being formed, both centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases with the formation of both lobbies in comparison to when only one lobby is formed, the probability of implementation of the policy being advocated by that lobby group falls.Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates
Procedia PDF Downloads 330
1583 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all of the domestic refrigerators operate on the principle of the vapor compression refrigeration cycle and removal of heat from the refrigerator cabinets is done via one of the two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet, and investigate for better temperature distribution solutions throughout the refrigerator domain via system optimizations that could provide uniform temperature distribution. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection theories covering the following cases are applied: closed rectangular cavity representing heat transfer inside the refrigerating compartment. The cavity volume has been represented with finite volume elements and is solved computationally with appropriate momentum and energy equations (Navier-Stokes equations). The 3D model is analyzed as transient, with k-ε turbulence model and SIMPLE pressure-velocity coupling for turbulent flow situation. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters: compressor capacity, fan rotational speed and type of shelf (glass or wire) are studied on the energy consumption; pull down time, temperature distributions in the cabinet. For each case, energy consumption based on experimental results is calculated. After the analysis, the main effective parameters for temperature distribution inside a cabin and energy consumption based on CFD simulation are determined and simulation results are supplied for Design of Experiments (DOE) as input data for optimization. The best configuration with minimum energy consumption that provides minimum temperature difference between the shelves inside the cabinet is determined.Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
Procedia PDF Downloads 109
1582 Asymptotic Spectral Theory for Nonlinear Random Fields
Authors: Karima Kimouche
Abstract:
In this paper, we consider asymptotic problems in the spectral analysis of stationary causal random fields. We impose conditions involving only (conditional) moments, which are easily verifiable for a variety of nonlinear random fields. Limiting distributions of periodograms and smoothed periodogram spectral density estimates are obtained, and applications to the spectral domain bootstrap are given.
Keywords: spatial nonlinear processes, spectral estimators, GMC condition, bootstrap method
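A small sketch of the raw and Daniell-smoothed periodogram is given below for a one-dimensional nonlinear autoregressive series; this is a simplification of the random-field setting of the paper, and the simulated process and smoothing span are illustrative assumptions.

```python
# Raw and smoothed periodogram for a simulated nonlinear autoregressive series
# (a 1-D simplification of the random-field setting).
import numpy as np

rng = np.random.default_rng(2)
n = 512
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * np.tanh(x[t - 1]) + e[t]   # a simple stationary nonlinear AR(1)

# Raw periodogram at the Fourier frequencies
freqs = np.fft.rfftfreq(n, d=1.0)
I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (2.0 * np.pi * n)

# Smoothed periodogram: Daniell (moving-average) kernel over neighbouring frequencies
m = 5
kernel = np.ones(2 * m + 1) / (2 * m + 1)
I_smooth = np.convolve(I, kernel, mode="same")

for f, raw, smooth in list(zip(freqs, I, I_smooth))[1:6]:
    print(f"freq {f:.3f}:  raw {raw:.4f}   smoothed {smooth:.4f}")
```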
Procedia PDF Downloads 451
1581 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction for composite material components of offshore structures subjected to low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and impact loading, and the likelihood of such a hazard triggering a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fibers, and manufacturing defects; consequently, large uncertainties arise in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome this problem. The limit state function is established such that the lamina stresses satisfy the safe condition g(x) > 0. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario, and the frequencies of occurrence of specific impact hazards yield the expected risk in terms of economic loss.
Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
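The sketch below illustrates the general idea of estimating a probability of failure by Monte Carlo sampling of a limit state g(x) = R - S; it uses an assumed closed-form capacity/demand model rather than the paper's FE-based Gaussian process response surface, and all distribution parameters are hypothetical.

```python
# Monte Carlo probability of failure for a simple limit state g = R - S
# (capacity minus demand); distribution parameters are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 1_000_000

R = rng.normal(loc=450.0, scale=45.0, size=n)   # assumed laminate strength scatter (MPa)
S = rng.normal(loc=300.0, scale=60.0, size=n)   # assumed impact-induced stress scatter (MPa)

g = R - S                      # safe when g > 0, failed when g <= 0
pf_mc = np.mean(g <= 0)

# Because g is normal in this toy example, the reliability index gives the same answer analytically
beta = g.mean() / g.std()
print(f"Monte Carlo Pf = {pf_mc:.4e},  Phi(-beta) = {norm.cdf(-beta):.4e}")
```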
Procedia PDF Downloads 279
1580 Two-Stage Hospital Efficiency Analysis Including Qualitative Evidence: A Greek Case
Authors: Panos Xenos, Milton Nektarios, John Yfantopoulos
Abstract:
Background: Policy makers, professional organizations and payers have introduced a variety of initiatives and reforms for the health systems worldwide, aimed at improving hospital efficiency. Their efforts are concentrated in two main categories: to constrain increasing healthcare costs and to enhance quality of services provided. Research Objectives: This study examines the efficiency of 112 Greek public hospitals for the year 2009, evaluates the importance of bootstrapping techniques and investigates the effect of contextual factors on hospital efficiency. Furthermore, the effect of qualitative evidence, on hospital efficiency is explored using data from 28 large hospitals. Methods: We applied Data Envelopment Analysis, augmented by bootstrapping techniques, to estimate efficiency scores. In order to measure the effect of environmental factors on hospital efficiency we used Tobit regression analysis. The significance of our models is evaluated using statistical tests to compare distributions. Results: The Kolmogorov-Smirnov test between the original and the bootstrap-corrected efficiency indicates that their distributions are significantly different (p-value<0.01). The environmental factors, that seem to influence efficiency, are Occupancy Rating and the ratio between Outpatient Visits and Inpatient Days. Results indicate that the inclusion of the quality variable in DEA modelling generates statistically significant variations in efficiency scores (p-value<0.05). Conclusions: The inclusion of quality variables and the use of bootstrap resampling in efficiency analysis impose a statistically significant effect on the distribution of efficiency scores. As a policy conclusion we highlight the importance of these methods on hospital efficiency analysis and, by implication, on healthcare resource allocation.Keywords: hospitals, efficiency, quality, data envelopment analysis, Greek public hospital sector
Procedia PDF Downloads 309
1579 Forecasting 24-Hour Ahead Electricity Load Using Time Series Models
Authors: Ramin Vafadary, Maryam Khanbaghi
Abstract:
Forecasting electricity load is important for various purposes such as planning, operation, and control. Forecasts can save operating and maintenance costs, increase the reliability of power supply and delivery systems, and support correct decisions for future development. This paper compares various time series methods for forecasting electricity load 24 hours ahead. The methods considered are Holt-Winters smoothing, SARIMA modeling, LSTM networks, Fbprophet, and TensorFlow Probability. The performance of each method is evaluated using the forecasting accuracy criteria of mean absolute error and root mean square error. The National Renewable Energy Laboratory (NREL) residential energy consumption data are used to train the models. The results of this study show that the SARIMA model is superior to the others for 24-hour-ahead forecasts. Furthermore, a bagging technique is used to make the predictions more robust. The obtained results show that by bagging multiple time-series forecasts, we can improve the robustness of the models for 24-hour-ahead electricity load forecasting.
Keywords: bagging, Fbprophet, Holt-Winters, LSTM, load forecast, SARIMA, TensorFlow probability, time series
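As a small illustration of the SARIMA step, the sketch below fits a seasonal model to a synthetic hourly load series with statsmodels and scores a 24-hour-ahead forecast with MAE and RMSE; the series and the order (1,0,1)x(1,1,1,24) are illustrative assumptions, not the paper's fitted specification or the NREL data.

```python
# SARIMA 24-hour-ahead forecast on a synthetic hourly load series; the order is an
# illustrative choice, not the paper's fitted model.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
hours = pd.date_range("2021-01-01", periods=24 * 30, freq="h")
load = 2.0 + np.sin(2 * np.pi * hours.hour / 24) + 0.2 * rng.standard_normal(len(hours))
series = pd.Series(load, index=hours)

train, test = series[:-24], series[-24:]       # hold out the final day

fit = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24)).fit(disp=False)
forecast = fit.forecast(steps=24)

mae = np.mean(np.abs(forecast.values - test.values))
rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
print(f"24-hour-ahead MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```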
Procedia PDF Downloads 95
1578 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulating the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, which leads to poorer extraction efficiency and genotyping; such errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms are important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparison of their performance. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two algorithms for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity between those methods in their pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more conservative selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was a consensus between the estimators for only one dataset. BayesN produced both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In these examples, tolerance of homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 143
1577 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach
Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam
Abstract:
Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the most precise distribution model for the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-square, root mean square error (RMSE), AIC and BIC, were finally combined to determine the best-fitted wind speed distribution. The suggested method takes each criterion into account collectively. This method was applied to fit 14 distribution models statistically to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research also presents three estimation methods that can be used to calculate the parameters of the distributions used to estimate the wind. In the suggested MLM, MOM, and MLE, the third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimation of wind energy. To demonstrate the performance of the suggested MOM, it was compared with well-known estimation methods such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds recorded in different time periods. The results obtained show that MOM provides a more precise estimate than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment
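The sketch below shows the general pattern of fitting several candidate distributions by maximum likelihood and ranking them with Kolmogorov-Smirnov and AIC criteria using scipy; the synthetic Weibull sample and the candidate set are assumptions for illustration, not the Pakistani site data or the full 14-distribution comparison.

```python
# Fit candidate wind-speed distributions by maximum likelihood and rank them by
# Kolmogorov-Smirnov statistic and AIC; the Weibull sample is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
wind = stats.weibull_min.rvs(c=2.0, scale=6.0, size=2000, random_state=rng)

candidates = {
    "weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "rayleigh": stats.rayleigh,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(wind, floc=0)                  # MLE with the location fixed at zero
    loglik = np.sum(dist.logpdf(wind, *params))
    k = len(params) - 1                              # free parameters (loc was fixed)
    aic = 2 * k - 2 * loglik
    ks = stats.kstest(wind, dist.cdf, args=params).statistic
    results.append((name, ks, aic))

for name, ks, aic in sorted(results, key=lambda r: r[2]):
    print(f"{name:>8s}:  KS = {ks:.4f}   AIC = {aic:.1f}")
```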
Procedia PDF Downloads 84
1576 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately available in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users. Therefore, providing services with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives: first, define user behaviors and predict user behavior routes with the environment model in order to analyze user purposes; second, construct the hierarchical Hidden Markov Model according to the logic framework and establish the sequential intensity among behaviors to characterize the use and activity fabric of the intelligent environment; third, establish the intensity of the relation between objects and the probability of their being used, an indicator that can describe the possible limitations of the mechanism. As the process is recorded in the system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object
Procedia PDF Downloads 233
1575 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during their expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans that avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting the deterioration of ageing flood defence structures in order to keep them in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of a condition grade scheme for flood defences. This paper presents a probabilistic method for predicting the future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure over the transition states is modelled as a Markov chain, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling
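A minimal sketch of a condition-grade Markov deterioration model is given below: an assumed annual transition matrix over five grades is simulated by Monte Carlo to estimate the probability of reaching the absorbing failure grade within a horizon. The matrix values are purely illustrative and are not calibrated to UK Condition Assessment Manual data.

```python
# Condition-grade Markov deterioration sketch: five grades (index 0 = as new,
# index 4 = failed), an assumed annual transition matrix, and Monte Carlo paths.
import numpy as np

rng = np.random.default_rng(6)

# Annual transition probabilities between condition grades (rows sum to 1, assumed values)
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # failed grade is absorbing
])

def simulate_paths(P, years=50, n_paths=100_000, start=0):
    """Return, for each path, the first year in which the failed grade is reached (inf if never)."""
    states = np.full(n_paths, start)
    failed_by = np.full(n_paths, np.inf)
    for year in range(1, years + 1):
        # Inverse-CDF sampling of the next grade from each path's current row of P
        u = rng.random(n_paths)
        cum = P.cumsum(axis=1)[states]
        states = (u[:, None] > cum).sum(axis=1)
        newly_failed = (states == P.shape[0] - 1) & np.isinf(failed_by)
        failed_by[newly_failed] = year
    return failed_by

failed_by = simulate_paths(P)
for horizon in (10, 25, 50):
    print(f"P(failure within {horizon:2d} years) = {(failed_by <= horizon).mean():.3f}")
```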
Procedia PDF Downloads 233
1574 Implicit Transaction Costs and the Fundamental Theorems of Asset Pricing
Authors: Erindi Allaj
Abstract:
This paper studies arbitrage pricing theory in financial markets with transaction costs. We extend the existing theory to include the more realistic possibility that the price at which the investors trade is dependent on the traded volume. The investors in the market always buy at the ask and sell at the bid price. Transaction costs are composed of two terms, one is able to capture the implicit transaction costs and the other the price impact. Moreover, a new definition of a self-financing portfolio is obtained. The self-financing condition suggests that continuous trading is possible, but is restricted to predictable trading strategies which have left and right limit and finite quadratic variation. That is, predictable trading strategies of infinite variation and of finite quadratic variation are allowed in our setting. Within this framework, the existence of an equivalent probability measure is equivalent to the absence of arbitrage opportunities, so that the first fundamental theorem of asset pricing (FFTAP) holds. It is also proved that, when this probability measure is unique, any contingent claim in the market is hedgeable in an L2-sense. The price of any contingent claim is equal to the risk-neutral price. To better understand how to apply the theory proposed we provide an example with linear transaction costs.Keywords: arbitrage pricing theory, transaction costs, fundamental theorems of arbitrage, financial markets
Procedia PDF Downloads 360
1573 On the Determinants of Women’s Intrahousehold Decision-Making Power and the Impact of Diverging from Community Standards: A Generalised Ordered Logit Approach
Authors: Alma Sobrevilla
Abstract:
Using panel data from Mexico, this paper studies the determinants of women’s intrahousehold decision-making power using a generalised ordered logit model. Fixed effects estimations are also carried out to solve potential endogeneity coming from unobservable time-invariant factors. Finally, the paper analyses quadratic and community divergence effects of education on power. Results show heterogeneity in the effect of each of the determinants across different levels of decision-making power and suggest the presence of a significant quadratic effect of education. Having more education than the community average has a negative effect on power, supporting the notion that women tend to compensate their success outside the household with submissive attitudes at home.Keywords: women, decision-making power, intrahousehold, Mexico
Procedia PDF Downloads 353
1572 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region
Authors: Mohammad Bakhshi, Firas Al Janabi
Abstract:
High-resolution rainfall data are very important as input for hydrological models. Among models for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily data over a record period of around 20 years. The process was carried out with the DiMoN tool, which is based on a random cascade model and the method of fragments. Differences between the observed and simulated rainfall datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov (K-S) test, usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rainfall is accumulated over a shorter time period, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging considering the tendency of the generated high-resolution rainfall data to be overestimated.
Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall
Procedia PDF Downloads 201
1571 Assortative Education and Working Arrangement among Married Couples in Indonesia
Authors: Ratu Khabiba, Qisha Quarina
Abstract:
This study aims to analyse the effect of married couples' assortative educational attainment on the division of economic activities within the household. It contributes to the literature on women's participation in employment, especially among married women, by examining whether traditional values about gender roles in the household continue to shape employment participation among married women in Indonesia, despite the increase in women's human capital through education. The study utilizes the Indonesian National Socioeconomic Survey (SUSENAS) 2016 and estimates the results using a multinomial logit model. Our results show that, compared to highly educated homogamous couples, educationally heterogamous couples, especially hypergamous ones, have a higher probability of being a single-worker household. Moreover, highly educated homogamous couples have the highest probability of being a dual-worker household. Thus, we find evidence that traditional values of gender role segregation still play a significant role in married women's employment decisions in Indonesia, particularly for educationally heterogamous couples and low-educated homogamous couples.
Keywords: assortative education, dual-worker, hypergamy, homogamy, traditional values, women labor participation
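The sketch below shows a multinomial logit of a three-category working arrangement on couple-education indicators, estimated with statsmodels on simulated data; the variable names, categories and coefficients are placeholders and do not correspond to SUSENAS 2016 variables.

```python
# Multinomial logit of a three-category working arrangement (0 = neither works,
# 1 = single worker, 2 = dual worker) on couple-education indicators; data simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({
    "homogamy_high": rng.integers(0, 2, n),   # both spouses highly educated (hypothetical)
    "hypergamy": rng.integers(0, 2, n),       # husband more educated than wife (hypothetical)
    "urban": rng.integers(0, 2, n),
})

# Simulate the outcome with some dependence on the covariates so the fit is non-trivial
logits = np.column_stack([
    np.zeros(n),                                          # base category: neither works
    0.5 + 0.4 * df["hypergamy"],                          # single worker
    0.2 + 0.9 * df["homogamy_high"] + 0.3 * df["urban"],  # dual worker
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["arrangement"] = [rng.choice(3, p=p) for p in probs]

X = sm.add_constant(df[["homogamy_high", "hypergamy", "urban"]])
result = sm.MNLogit(df["arrangement"], X).fit(disp=False)
print(result.summary())
print(np.exp(result.params))   # relative risk ratios versus the base category
```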
Procedia PDF Downloads 118
1570 A Hybrid Based Algorithm to Solve the Multi-objective Minimum Spanning Tree Problem
Authors: Boumesbah Asma, Chergui Mohamed El-amine
Abstract:
Since it has been shown that the multi-objective minimum spanning tree problem (MOST) is NP-hard even with two criteria, we propose in this study a hybrid NSGA-II algorithm with an exact mutation operator, which is only used with low probability, to find an approximation to the Pareto front of the problem. In a connected graph G, a spanning tree T of G being a connected and cycle-free graph, if k edges of G\T are added to T, we obtain a partial graph H of G inducing a reduced size multi-objective spanning tree problem compared to the initial one. With a weak probability for the mutation operator, an exact method for solving the reduced MOST problem considering the graph H is then used to give birth to several mutated solutions from a spanning tree T. Then, the selection operator of NSGA-II is activated to obtain the Pareto front approximation. Finally, an adaptation of the VNS metaheuristic is called for further improvements on this front. It allows finding good individuals to counterbalance the diversification and the intensification during the optimization search process. Experimental comparison studies with an exact method show promising results and indicate that the proposed algorithm is efficient.Keywords: minimum spanning tree, multiple objective linear optimization, combinatorial optimization, non-sorting genetic algorithm, variable neighborhood search
Procedia PDF Downloads 91
1569 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow rapidly, the pressure on spectrum utilization has been rising steadily. Cognitive radio is an emerging technology that has come out to solve today's spectrum scarcity problem. To support spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules are applied to classify the traffic pattern of the primary user. The primary user follows two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two classification methods are discussed in this paper, one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy, but it cannot tolerate sensing errors. Autocorrelation-based classification is applicable in real environments as it can tolerate some amount of sensing errors.
Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)
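The following sketch illustrates plain energy detection: the detector compares the received energy over N samples with a threshold set from the chi-square noise-only distribution for a target false-alarm probability, and the empirical Pd and Pf are estimated by simulation; N, SNR and the target Pf are assumed values.

```python
# Energy detection sketch: compare the received energy over N samples with a threshold
# chosen from the chi-square noise-only distribution for a target false-alarm rate.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(8)
N = 1000                    # samples per sensing window (assumed)
snr = 10 ** (-10.0 / 10)    # -10 dB signal-to-noise ratio (assumed)
n_trials = 5000
target_pf = 0.05

# With unit noise variance, the noise-only energy is chi-square with N degrees of freedom
threshold = chi2.ppf(1 - target_pf, df=N)

def energy_exceeds(signal_present: bool) -> bool:
    noise = rng.standard_normal(N)
    signal = np.sqrt(snr) * rng.standard_normal(N) if signal_present else 0.0
    return np.sum((noise + signal) ** 2) > threshold

prob_fa = np.mean([energy_exceeds(False) for _ in range(n_trials)])
prob_detect = np.mean([energy_exceeds(True) for _ in range(n_trials)])
print(f"empirical Pf = {prob_fa:.3f} (target {target_pf}), empirical Pd = {prob_detect:.3f}")
```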
Procedia PDF Downloads 345
1568 Occupational Diseases in the Automotive Industry in Czechia
Authors: J. Jarolímek, P. Urban, P. Pavlínek, D. Dzúrová
Abstract:
Industry constitutes a dominant economic sector in Czechia, and the automotive industry represents the most important industrial sector in terms of gross value added and the number of employees. The objective of this study was to analyse the occurrence of occupational diseases (OD) in the automotive industry in Czechia during the 2001-2014 period. Whereas the occurrence of OD in other sectors has generally been decreasing, it has been increasing in the automotive industry, accompanied by growing spatial discrepancies. Data on OD cases were retrieved from the National Registry of Occupational Diseases. In addition, we conducted a survey in automotive companies with a focus on occupational health services and the positions of the companies in global production networks (GPNs). An analysis of the OD distribution in the automotive industry was performed (age, gender, company size and its role in GPNs, regional distribution of the studied companies, and regional unemployment rate), accompanied by an assessment of the quality and range of occupational health services. Employees older than 40 years had a nearly 2.5 times higher probability of OD occurrence compared with employees younger than 40 years (OR 2.41; 95% CI: 2.05-2.85). The probability of OD occurrence was 3 times higher for women than for men (OR 3.01; 95% CI: 2.55-3.55). The OD incidence rate increased with the size of the company. An association between OD incidence and the unemployment rate was not confirmed.
Keywords: occupational diseases, automotive industry, health geography, unemployment
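As a small worked example of the reported effect sizes, the sketch below computes an odds ratio and a Wald 95% confidence interval from a 2x2 exposure-outcome table; the counts are hypothetical and are not taken from the National Registry of Occupational Diseases.

```python
# Odds ratio and Wald 95% confidence interval from a 2x2 exposure-outcome table
# (hypothetical counts for illustration only).
import numpy as np

# Rows: exposed (age > 40), unexposed (age <= 40); columns: OD cases, non-cases
a, b = 300, 9700     # exposed: cases, non-cases   (hypothetical)
c, d = 130, 10870    # unexposed: cases, non-cases (hypothetical)

or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # standard error of log(OR)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```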
Procedia PDF Downloads 250
1567 Four-Way Coupled CFD-Dem Simulation of Concrete Pipe Flow Using a Non-Newtonian Rheological Model: Investigating the Simulation of Lubrication Layer Formation and Plug Flow Zones
Authors: Tooran Tavangar, Masoud Hosseinpoor, Jeffrey S. Marshall, Ammar Yahia, Kamal Henri Khayat
Abstract:
In this study, a four-way coupled CFD-DEM methodology was used to simulate the behavior of concrete pipe flow. Fresh concrete, characterized as a biphasic suspension, features aggregates comprising the solid-suspended phase with diverse particle-size distributions (PSD) within a non-Newtonian cement paste/mortar matrix forming the liquid phase. The fluid phase was simulated using CFD, while the aggregates were modeled using DEM. Interaction forces between the fluid and solid particles were considered through CFD-DEM computations. To capture the viscoelastic characteristics of the suspending fluid, a bi-viscous approach was adopted, incorporating a critical shear rate proportional to the yield stress of the mortar. In total, three diphasic suspensions were simulated, each featuring distinct particle size distributions and a concentration of 10% for five subclasses of spherical particles ranging from 1 to 17 mm in a suspending fluid. The adopted bi-viscous approach successfully simulated both un-sheared (plug flow) and sheared zones. Furthermore, shear-induced particle migration (SIPM) was assessed by examining coefficients of variation in particle concentration across the pipe. These SIPM values were then compared with results obtained using CFD-DEM under the Newtonian assumption. The study highlighted the crucial role of yield stress in the mortar phase, revealing that lower yield stress values can lead to increased flow rates and higher SIPM across the pipe.Keywords: computational fluid dynamics, concrete pumping, coupled CFD-DEM, discrete element method, plug flow, shear-induced particle migration.
Procedia PDF Downloads 67
1566 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions
Authors: Hatem Abou-Senna, Essam Radwan
Abstract:
Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be established utilizing microscopic analyses by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. The need to accurately quantify transportation-related emissions from vehicles is essential. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited access highway in Orlando, Florida. First, (at the most basic level), emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)
Procedia PDF Downloads 339
1565 The 10-year Risk of Major Osteoporotic and Hip Fractures Among Indonesian People Living with HIV
Authors: Iqbal Pramukti, Mamat Lukman, Hasniatisari Harun, Kusman Ibrahim
Abstract:
Introduction: People living with HIV have a higher risk of osteoporotic fracture than the general population. The purpose of this study was to predict the 10-year risk of fracture among people living with HIV (PLWH) using FRAX™ and to identify characteristics related to fracture risk. Methodology: This study consisted of 75 subjects. The ten-year probability of major osteoporotic fractures (MOF) and hip fractures was assessed using the FRAX™ algorithm. Cross-tabulation was used to identify participant characteristics related to fracture risk. Results: The overall mean 10-year probability of fracture was 2.4% (1.7) for MOF and 0.4% (0.3) for hip fractures. Participants with a parental hip fracture history, smoking behavior or glucocorticoid use showed higher MOF scores than those without (3.1 vs. 2.5; 4.6 vs. 2.5; and 3.4 vs. 2.5, respectively). Participants with a parental hip fracture history, smoking behavior or glucocorticoid use also showed higher HF scores than those without (0.5 vs. 0.3; 0.8 vs. 0.3; and 0.5 vs. 0.3, respectively). Conclusions: The 10-year risk of fracture was higher among PLWH with several factors, including parental hip fracture history, smoking behavior and glucocorticoid use. Further analysis of the determining factors using multivariate regression with a larger sample size is required to confirm the factors associated with high fracture risk.
Keywords: HIV, PLWH, osteoporotic fractures, hip fractures, 10-year risk of fracture, FRAX
Procedia PDF Downloads 49
1564 Photon-Electron Interaction in the Different Medium
Authors: Vahid Borji
Abstract:
The interaction between photons and particles is a common phenomenon in nature that is studied in order to obtain information about the environment and the conditions governing the phenomena. In astrophysics, as in other fields, we study these interactions to gain useful knowledge and to predict subsequent events. One such event is the passage of a photon beam through a medium with special conditions, such as a shocked medium. In our discussion, we have studied this situation and obtained results for different conditions, showing that the transmission of photons depends on the photon energy and the distribution of electrons in the medium.
Keywords: cross section, astrophysics, GRB, photon
Procedia PDF Downloads 89
1563 Overconfidence and Self-Attribution Bias: The Difference among Economic Students at Different Stage of the Study and Non-Economic Students
Authors: Vera Jancurova
Abstract:
People are, in general, exposed to behavioral biases; however, the degree and impact are affected by experience, knowledge, and other characteristics. The purpose of this article is to study two well-defined behavioral biases, overconfidence and self-attribution bias, and their impact on economic and non-economic students at different stages of study. The research method used is a controlled field study containing questions on the perception of one's own confidence and self-attribution, together with estimation of limits used to assess actual abilities. The results show that economic students seem to be more overconfident than their non-economic colleagues, which appears to be caused by the fact that the questionnaire asked for predictions of economic indexes and assessments of one's own knowledge and abilities in a financial environment. Surprisingly, the greatest overconfidence was detected among students at the beginning of their studies (1st-semester students). However, the estimates of the actual numbers do not indicate that economic students achieve better results in the predictions themselves. The study confirmed the presence of self-attribution bias in all of the respondents.
Keywords: behavioral finance, overconfidence, self-attribution, heuristics and biases
Procedia PDF Downloads 257
1562 The Impact of Exchange Rate Volatility on Real Total Export and Sub-Categories of Real Total Export of Malaysia
Authors: Wong Hock Tsen
Abstract:
This study aims to investigate the impact of exchange rate volatility on real exports in Malaysia. The moving standard deviation of order three (MSD(3)) is used to measure exchange rate volatility. Conventional and partially asymmetric autoregressive distributed lag (ARDL) models are used in the estimations. The study finds that exchange rate volatility has a significant impact on real total exports and some sub-categories of real total exports. Moreover, it finds that positive or negative exchange rate volatility tends to have a correspondingly positive or negative impact on real exports. Exchange rate volatility can therefore be harmful to Malaysian exports.
Keywords: exchange rate volatility, autoregressive distributed lag, export, Malaysia
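The sketch below computes a moving standard deviation of order three as an exchange-rate volatility proxy with pandas; it assumes the volatility is taken over the growth rate of the series, which is one common variant and may differ from the exact MSD(3) specification used in the paper.

```python
# Moving standard deviation of order three (MSD(3)) as an exchange-rate volatility proxy,
# computed here on the growth rate of a placeholder monthly series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
dates = pd.period_range("2000-01", periods=120, freq="M")
log_exr = np.cumsum(0.01 * rng.standard_normal(len(dates)))   # random-walk log exchange rate
exr = pd.Series(np.exp(log_exr), index=dates, name="exchange_rate")

# Rolling standard deviation over a three-period window of the growth rate
msd3 = exr.pct_change().rolling(window=3).std().rename("msd3_volatility")

print(pd.concat([exr, msd3], axis=1).head(6))
```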
Procedia PDF Downloads 324
1561 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between the reduction in reconstruction error compared to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably due to having features that are more difficult to learn by the autoencoder. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
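The sketch below illustrates the analysis step described above: given per-image reconstruction errors, latent codes and memorability scores, it computes latent-space distinctiveness as the nearest-neighbour distance and correlates both quantities with memorability. The arrays are random placeholders standing in for autoencoder outputs on MemCat images.

```python
# Correlate reconstruction error and latent-space distinctiveness with memorability;
# the arrays are random placeholders, not actual autoencoder outputs on MemCat.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

rng = np.random.default_rng(10)
n_images, latent_dim = 500, 128
latents = rng.standard_normal((n_images, latent_dim))          # stand-in latent codes
recon_error = rng.gamma(shape=2.0, scale=0.05, size=n_images)  # stand-in reconstruction errors
memorability = rng.uniform(0.4, 1.0, size=n_images)            # stand-in memorability scores

# Distinctiveness: Euclidean distance to the nearest other image in latent space
D = cdist(latents, latents)
np.fill_diagonal(D, np.inf)
distinctiveness = D.min(axis=1)

for name, x in [("reconstruction error", recon_error),
                ("latent distinctiveness", distinctiveness)]:
    rho, p = spearmanr(x, memorability)
    print(f"{name:>24s} vs memorability: rho = {rho:+.3f} (p = {p:.3f})")
```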
Procedia PDF Downloads 90
1560 An Empirical Investigation into the Effect of Macroeconomic Policy on Economic Growth in Nigeria
Authors: Rakiya Abba
Abstract:
This paper investigates the effect of the money supply, exchange rate and interest rate on economic growth in Nigeria through the application of the Augmented Dickey-Fuller technique for testing the unit root property of the series and the Granger causality test of causation between GDP, money supply, exchange rate and interest rate. The unit root results suggest that all the variables in the model are stationary at the 1, 5 and 10 percent levels of significance. The causality results suggest that money supply and exchange rate Granger-cause IR, and further reveal a two-way causation between M2 and EXR. While IR Granger-causes GDP (the null hypothesis is rejected), GDP does not Granger-cause IR, as indicated by the probability value of 0.4805 and confirmed by the F-statistic value of 0.75483. The results also reveal that M2 and EXR do not Granger-cause GDP; the null hypothesis is accepted, as indicated by their probability values of 0.7472 and 0.1830 respectively, and GDP likewise does not Granger-cause M2 and EXR. The Johansen cointegration result indicates that, although GDP does not Granger-cause M2, IR and EXR, there exists one cointegrating equation, implying a long-run relationship between GDP, M2, IR and EXR. A major policy implication of this result is that economic growth is a function of the money supply and the exchange rate; effective monetary policies should therefore focus on manipulating these instruments, and the justification for adopting a particular policy should be rationalized in order to increase growth in the economy.
Keywords: economic growth, money supply, interest rate, exchange rate, causality
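The following sketch shows how the Augmented Dickey-Fuller and Granger causality tests can be run with statsmodels; the two simulated random-walk series stand in for the Nigerian GDP and M2 data, and the lag choice is an assumption for illustration.

```python
# ADF unit-root tests and a Granger causality test with statsmodels, on simulated
# random walks standing in for the Nigerian series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(11)
n = 200
data = pd.DataFrame({
    "GDP": np.cumsum(rng.standard_normal(n)),   # placeholder, not actual GDP
    "M2": np.cumsum(rng.standard_normal(n)),    # placeholder, not actual money supply
})

# ADF test on levels and on first differences
for col in data:
    for label, series in [("level", data[col]), ("first diff", data[col].diff().dropna())]:
        stat, pvalue, *_ = adfuller(series)
        print(f"ADF {col} ({label}): statistic = {stat:.2f}, p-value = {pvalue:.3f}")

# Does M2 Granger-cause GDP?  The second column is tested as a cause of the first.
print("\nGranger causality test: M2 -> GDP")
grangercausalitytests(data[["GDP", "M2"]].diff().dropna(), maxlag=2)
```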
Procedia PDF Downloads 267