Search results for: TensorFlow probability
1022 An Empirical Investigation into the Effect of Macroeconomic Policy on Economic Growth in Nigeria
Authors: Rakiya Abba
Abstract:
This paper investigates the effect of money supply, the exchange rate and the interest rate on economic growth in Nigeria by applying the Augmented Dickey-Fuller technique to test the unit root property of the series and the Granger causality test of causation between GDP, money supply, the exchange rate, and the interest rate. The unit root results suggest that all the variables in the model are stationary at the 1, 5 and 10 percent levels of significance. The causality results suggest that money supply and the exchange rate Granger-cause IR, and that a two-way causation exists between M2 and EXR. IR Granger-causes GDP, so the null hypothesis is rejected, whereas GDP does not Granger-cause IR, as indicated by a probability value of 0.4805 and confirmed by an F-statistic of 0.75483. The results also reveal that M2 and EXR do not Granger-cause GDP; the null hypothesis is accepted at 75 percent and 18 percent respectively, as indicated by their probability values of 0.7472 and 0.1830; likewise, GDP does not Granger-cause M2 and EXR. The Johansen cointegration result indicates that, although GDP does not Granger-cause M2, IR, and EXR, there exists one cointegrating equation, implying a long-run relationship between GDP, M2, IR, and EXR. A major policy implication of this result is that economic growth is a function of money supply and the exchange rate; effective monetary policy should therefore focus on manipulating these instruments, and the justification for adopting a particular policy should be rationalized in order to increase growth in the economy.
Keywords: economic growth, money supply, interest rate, exchange rate, causality
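The unit-root and causality tests named in this abstract are standard time-series procedures; the minimal sketch below (not taken from the paper) shows how they can be run with statsmodels. The file name, column names and lag order are illustrative assumptions.

```python
# Illustrative sketch only: ADF unit-root and Granger causality tests with
# statsmodels, mirroring the tests described in the abstract. The CSV file,
# column names, and lag order are assumptions, not the authors' actual data.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

df = pd.read_csv("nigeria_macro.csv")          # columns: GDP, M2, EXR, IR (assumed)

# Unit-root (stationarity) check for every series
for col in ["GDP", "M2", "EXR", "IR"]:
    stat, pvalue, *_ = adfuller(df[col].dropna())
    print(f"ADF {col}: statistic={stat:.3f}, p-value={pvalue:.4f}")

# Pairwise Granger causality, e.g. "does M2 Granger-cause GDP?"
# The second column is the candidate cause, the first is the effect.
for cause in ["M2", "EXR", "IR"]:
    print(f"\nH0: {cause} does not Granger-cause GDP")
    grangercausalitytests(df[["GDP", cause]].dropna(), maxlag=2)
```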
Procedia PDF Downloads 269
1021 Democratic Political Culture of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok
Authors: Vilasinee Jintalikhitdee, Phusit Phukamchanoad, Sakapas Saengchai
Abstract:
This research aims to study the level of democratic political culture and the factors that affect the democratic political culture of 5th and 6th graders under the authority of Dusit District Office, Bangkok, using stratified sampling for probability sampling and purposive sampling for non-probability sampling to collect data through the distribution of questionnaires to 300 respondents. This covers all of the schools under the authority of Dusit District Office. The researcher analyzed the data using descriptive statistics, which include the arithmetic mean and standard deviation, and inferential statistics, namely the Independent Samples T-test (T-test) and One-Way ANOVA (F-test). The researcher also collected data by interviewing the target groups and then analyzed the data by means of descriptive analysis. The results show that 5th and 6th graders under the authority of Dusit District Office, Bangkok are exposed to democratic political culture at a high level overall. When considering each part, the part with the highest mean is the statement “the constitutional democratic governmental system is suitable for Thailand”. The part with the lowest mean is the statement “corruption (cheating and defrauding) is normal in Thai society”. The factors that affect democratic political culture are grade level, mothers' occupations, and attention to news and political movements.
Keywords: democratic, political culture, political movements, democratic governmental system
Procedia PDF Downloads 266
1020 Failure Probability Assessment of Concrete Spherical Domes Subjected to Ventilation Controlled Fires Using BIM Tools
Authors: A. T. Kassem
Abstract:
Fires are considered a common hazardous action that any building may face. Most buildings’ structural elements are designed with precautions for fire safety, using deterministic design approaches. Public and highly important buildings are commonly designed considering a standard fire rating and, in many cases, contain large compartments with central domes. Real fire scenarios are not commonly brought into the structural design of buildings because of complexities in both the scenarios and the analysis tools. This paper presents a modern approach to the analysis of spherical domes under real fire conditions via the implementation of building information modelling and the adoption of a probabilistic approach. BIM has been implemented to bridge the gap between various software packages, enabling them to function interactively to model both the real fire and the corresponding structural response. Ventilation controlled fire scenarios have been modeled using both “Revit” and “Pyrosim”. Monte Carlo simulation has been adopted to engage the probabilistic analysis approach in dealing with the various parameters. Conclusions regarding failure probability and fire endurance, in addition to the effects of the various parameters, have been extracted.
Keywords: concrete, spherical domes, ventilation controlled fires, BIM, Monte Carlo simulation, Pyrosim, Revit
Procedia PDF Downloads 96
1019 Impact of Violence against Women on Small and Medium Enterprises (SMEs) in Rural Sindh: A Case Study of Kandhkot
Authors: Mohammad Shoaib Khan, Abdul Sattar Bahalkani
Abstract:
This research investigates violence against women and its impact on SMEs in Sindh. The main objective of the current research is to examine women's empowerment through women's participation in small and medium enterprises in upper Sindh. The data were collected from 500 respondents in Kandhkot District using a simple random technique. A structured questionnaire was designed as an instrument for measuring the impact of SME business on women's empowerment in rural Sindh. It was revealed that rural women are less confident and that their husbands give them a hard time once they expose themselves outside the boundaries of the house. It was also revealed that rural women make a major contribution to social, economic, and political development. It was further revealed that, due to the non-availability of market facilities, women are paid low wages. The negative impacts of husbands’ income and of having children aged 0-6 years are also significant. High income of other household members raises the reservation wage of mothers, thus lowering the probability of participation when the objective of working is to help with the family’s financial needs. The impact of childcare on mothers’ labor force participation is significant, but not as the theory predicted. The probability of participation in the labor force is significantly higher for women who live in urban areas, where job opportunities are greater compared to rural areas.
Keywords: empowerment, violence against women, SMEs, rural
Procedia PDF Downloads 333
1018 Merging Appeal to Ignorance, Composition, and Division Argument Schemes with Bayesian Networks
Authors: Kong Ngai Pei
Abstract:
The argument scheme approach to argumentation has two components. One is to identify the recurrent patterns of inference used in everyday discourse. The second is to devise critical questions to evaluate the inferences in these patterns. Although this approach is intuitive and contains many insightful ideas, it has been noted that it is not free of problems. One is that, because it disavows the probability calculus, it cannot give the exact strength of an inference. In order to tackle this problem, thereby paving the way to a more complete normative account of argument strength, it has been proposed that the most promising way is to combine the scheme-based approach with Bayesian networks (BNs). This paper pursues this line of thought, attempting to combine three common schemes, Appeal to Ignorance, Composition, and Division, with BNs. In the first part, it is argued that most (if not all) formulations of the critical questions corresponding to these schemes in the current argumentation literature are incomplete and not very informative. To remedy these flaws, more thorough and precise formulations of these questions are provided. In the second part, it is shown how graphical idioms (e.g. measurement and synthesis idioms) can be used to translate the schemes, as well as their corresponding critical questions, into the graphical structure of BNs, and how the probability tables of the nodes can be defined using functions of various sorts. In the final part, it is argued that many misuses of these schemes, traditionally called fallacies with the same names as the schemes, can indeed be adequately accounted for by the BN models proposed in this paper.
Keywords: appeal to ignorance, argument schemes, Bayesian networks, composition, division
Procedia PDF Downloads 288
1017 Feasibility Study of Wind Energy Potential in Turkey: Case Study of Catalca District in Istanbul
Authors: Mohammed Wadi, Bedri Kekezoglu, Mustafa Baysal, Mehmet Rida Tur, Abdulfetah Shobole
Abstract:
This paper presents a technical evaluation of the wind potential for present and future investments in Turkey, taking into account the feasibility of sites, installation, operation, and maintenance. The evaluation is based on hourly wind speed data measured at 30 m height for the Çatalca district over the three years 2008–2010. These data, obtained from the national meteorology station in Istanbul, Republic of Turkey, are analyzed in order to evaluate the feasibility of the wind power potential and to ensure an optimal selection of wind turbines for installation in the area of interest. Furthermore, the data are extrapolated to and analyzed at 60 m and 80 m, taking into account the variability of the roughness factor. The Weibull bi-parameter probability function is used to approximate monthly and annual wind potential and power density based on three calculation methods, namely the approximated, the graphical, and the energy pattern factor methods. The annual mean wind power densities were found to be 400.31, 540.08 and 611.02 W/m² for 30, 60, and 80 m heights, respectively. Simulation results prove that the analyzed area is an appropriate place for constructing large-scale wind farms.
Keywords: wind potential in Turkey, Weibull bi-parameter probability function, the approximated method, the graphical method, the energy pattern factor method, capacity factor
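Of the three estimation methods mentioned, the energy pattern factor method is the simplest to illustrate; the sketch below shows one possible implementation, with the wind-speed file and air density as placeholder assumptions rather than the Çatalca measurements.

```python
# Sketch of the energy pattern factor (EPF) method for fitting the Weibull
# parameters and computing mean wind power density. The wind-speed series and
# air density are placeholders, not the Catalca data.
import numpy as np
from scipy.special import gamma

v = np.loadtxt("catalca_wind_30m.txt")   # hourly wind speeds in m/s (assumed file)
rho = 1.225                              # air density in kg/m^3 (standard value)

v_mean = v.mean()
epf = (v**3).mean() / v_mean**3          # energy pattern factor
k = 1.0 + 3.69 / epf**2                  # Weibull shape factor (EPF method)
c = v_mean / gamma(1.0 + 1.0 / k)        # Weibull scale factor in m/s

# Mean power density (W/m^2) from the fitted Weibull distribution
p_density = 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)
print(f"k = {k:.2f}, c = {c:.2f} m/s, power density = {p_density:.1f} W/m^2")
```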
Procedia PDF Downloads 259
1016 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
Procedia PDF Downloads 222
1015 Statistical Analysis of Extreme Flow (Regions of Chlef)
Authors: Bouthiba Amina
Abstract:
The estimation of statistics related to precipitation represents a vast domain which poses numerous challenges to meteorologists and hydrologists. Sometimes it is necessary to approximate the value of extreme events, as well as their return periods, for sites where there is little or no data. The search for a frequency model of daily rainfall depths is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amounts of precipitation in past years. The best known and most common approach is the statistical one. It consists of looking for the probability law that best fits the values observed for the random variable "daily maximum rainfall", after a comparison of various probability laws and estimation methods by means of goodness-of-fit tests. Therefore, a frequency analysis of the annual series of daily maximum rainfall was carried out on the data of 54 pluviometric stations of the high and middle basin. Five laws usually applied to the study and analysis of maximum daily rainfall frequencies were considered. The chosen period is from 1970 to 2013, and the analysis was used to forecast quantiles. The laws used are the three-parameter generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and Log-Pearson III laws. In Algeria, Gumbel's law has been used for a long time to estimate the quantiles of maximum flows; in this work we re-examine that practice and choose the most reliable law.
Keywords: return period, extreme flow, statistics laws, Gumbel, estimation
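As a rough illustration of the kind of fit described, the sketch below fits Gumbel's law to an annual-maximum series with scipy and reads off quantiles for a few return periods; the input file is a placeholder, not one of the 54 stations.

```python
# Sketch of fitting the Gumbel law to an annual-maximum daily rainfall series
# and estimating quantiles for chosen return periods; the input series is a
# placeholder for one of the 54 stations.
import numpy as np
from scipy.stats import gumbel_r

annual_max = np.loadtxt("station_annual_max_rain.txt")   # mm, 1970-2013 (assumed)

loc, scale = gumbel_r.fit(annual_max)     # maximum-likelihood fit
for T in (10, 50, 100):                   # return periods in years
    q = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"T = {T:>3d} yr  ->  quantile = {q:.1f} mm")
```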
Procedia PDF Downloads 79
1014 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System that Includes Servers with Various Capacities
Authors: Yoshiaki Shikata, Nobutane Hanayama
Abstract:
We present a prioritized, limited multi-server processor sharing (PS) system where each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests to be processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In the PS server, at the arrival (or departure) of a request, the extension (shortening) of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Utilising a simulation program which executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is clarified.
Keywords: processor sharing, multi-server, various capacity, N-priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation
Procedia PDF Downloads 332
1013 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single factor of safety. Although it is known that geotechnical parameters can present great dispersion, in such analyses they are treated as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (the FOSM method, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and to carry out the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty, during viability, design, construction, operation and closure by means of risk management.
Keywords: probabilistic methods, risk assessment, risk management, slope stability
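A Monte Carlo estimate of the probability of failure, as used in the comparison above, can be sketched in a few lines; the limit-state function and parameter distributions below are illustrative assumptions only, not the hypothetical slope of the paper.

```python
# Minimal Monte Carlo sketch of a probability-of-failure estimate for a slope.
# The safety-factor model and the parameter distributions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

cohesion = rng.normal(25.0, 5.0, n)                # kPa (assumed mean and std)
friction = np.radians(rng.normal(30.0, 3.0, n))    # friction angle (deg -> rad)
driving = 60.0                                     # deterministic driving term (assumed)

fs = (cohesion + 80.0 * np.tan(friction)) / driving   # toy safety-factor model
p_failure = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, probability of failure = {p_failure:.4f}")
```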
Procedia PDF Downloads 392
1012 Timing and Probability of Presurgical Teledermatology: Survival Analysis
Authors: Felipa de Mello-Sampayo
Abstract:
The aim of this study is to examine, from the patient’s perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The dynamic stochastic model’s main added value consists of its concrete application to patients waiting for dermatology surgical intervention. Patients with low health-level uncertainty must use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The results of the model were then tested empirically with the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital’s dermatology department via the corporate intranet of the Portuguese healthcare system. Health-level volatility can be understood as the hazard of developing skin cancer, and the trend of health level as the bias toward developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology. It depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur.
Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis
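The survival analysis referred to here can be illustrated with a Cox proportional-hazards fit; the sketch below uses the lifelines library, and the data file and column names (wait time, event indicator, volatility, trend, age) are assumptions for illustration, not the study's variables.

```python
# Sketch of a time-to-event regression with lifelines, analogous in spirit to
# the survival analysis described in the abstract. All names are assumed.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("teledermatology_referrals.csv")   # hypothetical data set

cph = CoxPHFitter()
cph.fit(df[["wait_time", "used_teledermatology", "volatility", "trend", "age"]],
        duration_col="wait_time", event_col="used_teledermatology")
cph.print_summary()   # hazard ratios; a negative effect of volatility is expected
```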
Procedia PDF Downloads 129
1011 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, the total items delivered with delay and the total rejected items are less than or equal to pre-determined values given by the decision maker. The above-mentioned stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the given solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the more the value of the objective function in the stochastic single-objective programming problem deteriorates.
Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm
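The dependent chance idea, maximizing the probability that all totals stay below the decision maker's thresholds, can be illustrated by the simulated fitness function below; the distributions, thresholds and supplier count are illustrative assumptions, not the paper's data or its actual GA.

```python
# Sketch of a chance-based fitness: for a candidate order allocation, simulate
# stochastic cost, delay and rejection and return the probability that all
# three stay within the decision maker's limits. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def chance_fitness(order_qty, n_sim=5_000):
    # order_qty: units ordered from each of 3 suppliers (decision variables)
    unit_cost   = rng.normal([10.0, 11.0, 9.5], 0.8, size=(n_sim, 3))
    late_rate   = rng.beta(2, 20, size=(n_sim, 3))    # fraction delivered late
    reject_rate = rng.beta(1, 30, size=(n_sim, 3))    # fraction rejected
    cost   = (unit_cost * order_qty).sum(axis=1)
    late   = (late_rate * order_qty).sum(axis=1)
    reject = (reject_rate * order_qty).sum(axis=1)
    ok = (cost <= 3_300) & (late <= 90) & (reject <= 40)   # assumed limits
    return ok.mean()          # probability to be maximized by the GA

print(chance_fitness(np.array([120, 100, 80])))
```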
Procedia PDF Downloads 313
1010 The Effect of Training and Development Practice on Employees’ Performance
Authors: Sifen Abreham
Abstract:
Employees are resources in organizations; as such, they need to be trained and developed properly to achieve an organization's goals and expectations. The initial development of the human resource management concept is based on the effective utilization of people, treating them as resources, leading to the realization of business strategies and organizational objectives. The study aimed to assess the effect of training and development practices on employee performance. The researcher used an explanatory research design, which helps to explain, understand, and predict the relationship between variables. To collect the data from the respondents, the study used probability sampling, specifically stratified random sampling, which divides the entire population into homogeneous groups. The results were analyzed and presented using the Statistical Package for the Social Sciences (SPSS) version 26. The major finding of the study was that training has an impact on employees' job performance in achieving organizational objectives. The district has a policy and procedure for training and development, but it is not actively applied and is not suitable; the district is advised to reform this policy and procedure and apply it actively. The district gives training to the majority of its employees, but most of the time the training is theoretical; the district is advised to use practical training methods to see positive change. The district evaluates employees after they take training and development, but the results are not encouraging; the district is advised to assess employees' skill gaps and fill those gaps. The district has a budget, but it is not adequate; the district is advised to strengthen its financial ground.
Keywords: training, development, employees, performance, policy
Procedia PDF Downloads 62
1009 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis
Authors: Serdal Pamuk, Irem Cay
Abstract:
Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension over the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions to move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question we estimate the thresholds that activate the transport equations in the capillary. We do this by using a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte and macrophage cells begin to move into the ECM for the initiation of angiogenesis. We believe that our results play an important role in understanding the mechanisms of cell migration which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time tendency of these three cell types, and find that they tend to the transition probability functions as time evolves. We provide our numerical solutions, which are in good agreement with our theoretical results.
Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function
Procedia PDF Downloads 157
1008 The Integrated Strategy of Maintenance with a Scientific Analysis
Authors: Mahmoud Meckawey
Abstract:
This research deals with one of the most important aspects of the maintenance field, namely maintenance strategy: the branch concerned with the concepts and schematic thinking of how to manage maintenance and how to deal with defects in engineering products (buildings, machines, etc.) in general. The paper addresses the following: i) The engineering product and the technical systems: when we deal with the maintenance process from a strategic view, we deal with an engineering product which consists of multiple integrated systems; in fact, there is no engineering product with only one system. We discuss and explain this topic and, from it, derive a developed definition of the maintenance process. ii) The factors, or basis, of functional efficiency: that is, the main factors affecting the functional efficiency of the systems and the engineering products; through these we can give a technical definition of defects and how they occur. iii) The legality of the occurrence of defects (legal defects and illegal defects): here we assume that all the factors of functional efficiency have been applied, and then we discuss the results. iv) The guarantee, the functional span age and the technical surplus concepts: in complement to the above topic, and in association with reliability theorems, reliability deals with the probability of failure, which mainly concerns the design stages, i.e., checking and adapting the design of the elements. In maintainability, however, we act in a different way, dealing with the actual state of the systems; that is, we deal with the complementary part of the probability-of-failure term, which refers to the actual surplus of functionality of the systems.
Keywords: engineering product and technical systems, functional span age, legal and illegal defects, technical and functional surplus
Procedia PDF Downloads 475
1007 The Probability of Smallholder Broiler Chicken Farmers' Participation in the Mainstream Market within Maseru District in Lesotho
Authors: L. E. Mphahama, A. Mushunje, A. Taruvinga
Abstract:
Although broiler production does not generate large incomes among the smallholder community, it represents a main source of livelihood and part of the nutritional requirement. As a result, the market for broiler meat is growing faster than that of any other meat product and is projected to continue growing in the coming decades. However, a multitude of factors shapes whether smallholder broiler farmers participate in the mainstream markets. Using data from 217 smallholder broiler farmers, socio-economic and institutional factors in broiler farming were incorporated into a binary model to estimate the probability of broiler farmers’ participation in the mainstream markets within the Maseru district in Lesotho. Of the thirteen (13) predictor variables fitted into the model, six (6) variables (household size, number of years in the broiler business, stock size, access to transport, access to extension services and access to market information) had significant coefficients, while seven (7) variables (level of education, marital status, price of broilers, poultry association, access to contract, access to credit and access to storage) did not have a significant impact. It is recommended that smallholder broiler farmers organize themselves into cooperatives, which would act as a vehicle through which they can access contracts and formal markets. These cooperatives would also facilitate training and workshops on broiler rearing and marketing through extension visits.
Keywords: broiler chicken, mainstream market, Maseru district, participation, smallholder farmers
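A binary participation model of the kind described can be sketched as a logit regression; in the example below the file and column names are assumptions, with the regressors chosen to mirror the significant variables listed in the abstract.

```python
# Sketch of a binary (logit) market-participation model with statsmodels.
# The CSV file and column names are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("maseru_broiler_farmers.csv")       # hypothetical file

X = df[["household_size", "years_in_business", "stock_size",
        "access_transport", "access_extension", "market_information"]]
X = sm.add_constant(X)
y = df["participates_in_mainstream_market"]          # 1 = participates, 0 = not

result = sm.Logit(y, X).fit()
print(result.summary())    # signs and significance of the participation drivers
```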
Procedia PDF Downloads 152
1006 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique
Authors: Sourour Chaabane, Davide Clematis, Marco Panizza
Abstract:
This study investigates the oxidation of the Erythrosine B (EB) food dye by a homogeneous electro-Fenton process using iron (II) sulfate heptahydrate as a catalyst, carbon felt as the cathode, and Ti/RuO2 as the anode. The treated synthetic wastewater contains 100 mg L⁻¹ of EB and has a pH of 3. The effects of three independent variables have been considered for process optimization: applied current intensity (0.1 – 0.5 A), iron concentration (1 – 10 mM), and stirring rate (100 – 1000 rpm). Their interactions were investigated using response surface methodology (RSM) based on a Doehlert design as the optimization method. EB removal efficiency and energy consumption were considered as model responses after 30 minutes of electrolysis. Analysis of variance (ANOVA) revealed that the quadratic model was adequately fitted to the experimental data, with R² (0.9819), adj-R² (0.9276) and a low Fisher probability (< 0.0181) for the EB removal model, and R² (0.9968), adj-R² (0.9872) and a low Fisher probability (< 0.0014) for the energy consumption model, reflecting robust statistical significance. The energy consumption model depends significantly on current density, as expected. The foregoing RSM results led to the following optimal conditions for EB degradation: a current intensity of 0.2 A, an iron concentration of 9.397 mM, and a stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15 % with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
Keywords: electro-Fenton, Erythrosine B, dye, response surface methodology, carbon felt
Procedia PDF Downloads 74
1005 Learning a Bayesian Network for Situation-Aware Smart Home Service: A Case Study with a Robot Vacuum Cleaner
Authors: Eu Tteum Ha, Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
The smart home environment, backed by IoT (internet of things) technologies, enables intelligent services based on awareness of the situation a user is currently in. One of the convenient sensors for recognizing situations within a home is the smart meter, which can monitor the status of each electrical appliance in real time. This paper aims at learning a Bayesian network that models the causal relationship between the user situations and the status of the electrical appliances. Using such a network, we can infer the current situation based on the observed status of the appliances. However, learning the conditional probability tables (CPTs) of the network requires many training examples that cannot be obtained unless the user situations are closely monitored by some means. This paper proposes a method for learning the CPT entries of the network relying only on user feedback generated occasionally. In our case study with a robot vacuum cleaner, the feedback comes in whenever the user gives the robot an order that differs from its preprogrammed setting. Given a network with randomly initialized CPT entries, our proposed method uses this feedback information to adjust the relevant CPT entries in the direction of increasing the probability of recognizing the desired situations. Simulation experiments show that our method can rapidly improve the recognition performance of the Bayesian network using a relatively small number of feedbacks.
Keywords: Bayesian network, IoT, learning, situation-awareness, smart home
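The feedback-driven adjustment of CPT entries can be illustrated with a toy update rule: when the user's order indicates the true situation for an observed appliance status, the corresponding probability is nudged upward and the row renormalized. The situations, appliance states and learning rate below are illustrative assumptions, not the authors' actual method.

```python
# Toy sketch of feedback-driven CPT adjustment. Situations, appliance states
# and the learning rate are illustrative placeholders.
import numpy as np

situations = ["sleeping", "watching_tv", "away"]
# CPT rows indexed by an observed appliance-status combination (toy: TV on/off)
cpt = {status: np.full(len(situations), 1.0 / len(situations))
       for status in ["tv_on", "tv_off"]}

def feedback_update(status, desired_situation, lr=0.2):
    """User feedback says `desired_situation` was the true one for `status`."""
    row = cpt[status]
    row[situations.index(desired_situation)] += lr
    row /= row.sum()                      # keep it a probability distribution

feedback_update("tv_on", "watching_tv")
feedback_update("tv_on", "watching_tv")
print(cpt["tv_on"])   # probability mass shifts toward 'watching_tv'
```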
Procedia PDF Downloads 524
1004 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning a signal from noise, data compression, and spectral analysis of signal components. This article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes we develop robust codes that provide uniform protection against all errors. In this article we propose two constructions of robust codes. The first class of robust codes is based on the multiplicative inverse in a finite field. In the second robust code construction the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
Procedia PDF Downloads 491
1003 Effectiveness of Variable Speed Limit Signs in Reducing Crash Rates on Roadway Construction Work Zones in Alaska
Authors: Osama Abaza, Tanay Datta Chowdhury
Abstract:
As a driver's speed increases, so do the probability of an incident and the likelihood of injury. The presence of equipment, personnel, and a changing landscape in construction zones creates greater potential for incidents. This is especially concerning in Alaska, where summer construction activity, coinciding with the peak annual traffic volumes, cannot be avoided. In order to reduce vehicular speeding in work zones, and therefore the probability of crash and incident occurrence, variable speed limit (VSL) systems can be implemented in the form of radar speed display trailers, since such trailers have been shown to be effective at reducing vehicular speed in construction zones. Allocation of VSL will not only help reduce the 85th percentile speed but will also predominantly reduce the mean speed. A total of 2,147 incidents, including 385 crashes, occurred in just one month around construction zones in Alaska, which seriously requires proper attention. This research provides a thorough crash analysis to better understand the causes and to propose proper countermeasures. Crashes were predominantly recorded as vehicle-object collisions and sideswipes, and thus a significant share of crashes falls in the no-injury to minor-injury severity classes. Still, 35 major crashes, 7 of them fatal, in a one-month period require immediate action such as the implementation of the VSL system, as it has proved to be a speed reducer in construction zones on Alaskan roadways.
Keywords: speed, construction zone, crash, severity
Procedia PDF Downloads 253
1002 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment, in which the purchasing cost, the percentage of items delivered with delay and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total items delivered with delay and the total rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above-mentioned stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter single-objective problem. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. At the end, we explore the impact of the stochastic parameters on the given solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters have greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.
Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection
Procedia PDF Downloads 256
1001 A Framework Based on Dempster-Shafer Theory of Evidence Algorithm for the Analysis of the TV-Viewers’ Behaviors
Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi
Abstract:
In this paper, we propose an approach for detecting the behavior of the viewers of a TV program in a non-controlled environment. The proposed experiment is based on the use of three types of connected objects (a smartphone, a smart watch, and a connected remote control). 23 participants were observed while watching their TV programs during three phases: before, during and after watching a TV program. Their behaviors were detected using an approach based on the Dempster-Shafer theory (DST) in two phases. The first phase is to dynamically approximate the mass functions using an approach based on the correlation coefficient. The second phase is to calculate the approximated mass functions. To approximate the mass functions, two approaches were tested. The first approach was to divide each feature's data space into cells, each with a specific probability distribution over the behaviors; the probability distributions were computed statistically (estimated by the empirical distribution). The second approach was to predict the TV-viewing behaviors through the use of classifier algorithms and to add uncertainty to the prediction based on the uncertainty of the model. Results showed that mixing the fusion rule with the computation of the initial approximate mass functions using a classifier led to overall success rates of 96%, 95% and 96% for the first, second and third TV-viewing phases, respectively. The results were also compared to those found in the literature. This study aims to anticipate certain actions in order to maintain the attention of TV viewers towards the proposed TV programs with usual connected objects, taking into account the various uncertainties that can be generated.
Keywords: IoT, TV-viewing behaviors identification, automatic classification, unconstrained environment
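The fusion step relies on Dempster's rule of combination; a minimal generic implementation is sketched below, with the behaviors and mass values chosen purely for illustration rather than taken from the study.

```python
# Minimal implementation of Dempster's rule of combination for two mass
# functions over TV-viewing behaviors. Behaviors and masses are illustrative.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions defined on frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

watching, zapping = frozenset({"watching"}), frozenset({"zapping"})
either = watching | zapping
m_phone = {watching: 0.6, either: 0.4}                   # evidence from the smartphone
m_remote = {watching: 0.5, zapping: 0.3, either: 0.2}    # from the remote control
print(combine(m_phone, m_remote))
```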
Procedia PDF Downloads 229
1000 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life
Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi
Abstract:
Pipelines are extensively used engineering structures which convey fluid from one place to another. Most of the time, pipelines are placed underground and are encumbered by soil weight and traffic loads. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It first investigates how to incorporate the effect of corrosion, as a time-dependent deterioration process, in the structural and failure analysis of this type of pipe. Then three probabilistic time-dependent reliability analysis methods, namely the first passage probability theory, the gamma distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indexes, which can be used to identify the most important parameters that affect pipe failure, are also discussed. The reliability analysis methods developed in this paper contribute rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
Keywords: reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model
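Of the three methods listed, the gamma distributed degradation model lends itself to a short simulation sketch: yearly corrosion increments are drawn from a gamma distribution, and the first passage of a critical wall loss defines failure. The parameters below are assumptions, not calibrated sewer data.

```python
# Sketch of a stationary gamma-process corrosion model and a simulated
# first-passage estimate of failure probability. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
years, n_paths = 50, 20_000
shape_per_year, scale = 0.8, 0.5          # gamma increments per year (assumed)
critical_loss = 15.0                      # mm of wall loss defining failure (assumed)

# Independent gamma-distributed yearly increments, accumulated over time
increments = rng.gamma(shape_per_year, scale, size=(n_paths, years))
loss = increments.cumsum(axis=1)

p_fail = (loss >= critical_loss).any(axis=1).mean()
print(f"probability of failure within {years} years: {p_fail:.4f}")
```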
Procedia PDF Downloads 457
999 Using Cyclic Structure to Improve Inference on Network Community Structure
Authors: Behnaz Moradijamei, Michael Higgins
Abstract:
Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by applying renewal non-backtracking random walk (RNBRW) to the existing goodness-of-fit test. RNBRW is an important variant of random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived by RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
Keywords: hypothesis testing, RNBRW, network inference, community structure
Procedia PDF Downloads 152
998 Life Time Improvement of Clamp Structural by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, the process of removing unnecessary parts and qualifying the quality of parts before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts in the testing process. Testing by trial and error consumes a long time to improve, so simulation was brought in to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force and to improve the clamp's life expectancy over all possible designs, of which there are 27, excluding repeated designs. The design combinations were generated following the full factorial rules of the six sigma methodology. The six sigma methodology is a well-structured method for improving the quality level by detecting and reducing the variability of the process; the defect rate therefore decreases while process capability increases. This research focuses on a methodology of stress and fatigue reduction while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS simulates the 3D CAD model under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setting was verified by a mesh convergence methodology, and the percentage error was compared with the experimental result; the error must not exceed the acceptable range. The improvement therefore focuses on the degree, radius, and length that reduce stress while keeping the force within the acceptable range. Fatigue analysis is then carried out in ANSYS as the next step in order to guarantee that the lifetime is extended, and the setting is confirmed by comparison with the actual clamp in order to observe the difference in fatigue between both designs. This brings a lifetime improvement of up to 57% compared with the actual clamp in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the six sigma method, finite element, fatigue and linear regression analysis, which lead to accurate calculation, this project will be able to save up to 60 million dollars annually.
Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability
Procedia PDF Downloads 235
997 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate
Authors: Susan Diamond
Abstract:
Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring hardware to setting it up with the right firmware and software installed and configured. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which makes it slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.
Keywords: deep learning, machine learning, cognitive computing, model training
Procedia PDF Downloads 209
996 Selecting the Best Risk Exposure to Assess Collision Risks in Container Terminals
Authors: Mohammad Ali Hasanzadeh, Thierry Van Elslander, Eddy Van De Voorde
Abstract:
About 90 percent of world merchandise trade by volume is carried by sea. Maritime transport remains the backbone of international trade and globalization, and all seaborne goods require at least two ports, as origin and destination. Among seaborne traded cargoes, container traffic is a prosperous market, accounting for about 16% in terms of volume. Although containerized cargoes represent less tonnage, containers carry the highest-value cargoes of all, which is why efficient handling of containers in ports is very important. Accidents are the foremost causes of port inefficiency and of a surge in total transport cost. Although different port safety management systems (PSMS) are in place, statistics on port accidents show that numerous accidents occur in ports. Some of them claim people's lives; others damage goods, vessels, port equipment and/or the environment. Several accident investigations illustrate that the most common accidents take place during transport operations, which sometimes account for 68.6% of all events; therefore, providing a safer workplace depends on reducing collision risk. In order to quantify risks in the port area, different variables can be used as exposure measures. One of the main motives for defining and using exposure in studies related to infrastructure is to account for differences in intensity of use, so as to make comparisons meaningful. In various studies related to handling containers in ports and intermodal terminals, different risk exposures, as well as the likelihood of each event, have been selected: vehicle collision within the port area (10-7 per kilometer of vehicle distance travelled) and dropping containers from cranes, forklift trucks, or rail-mounted gantries (1 x 10-5 per lift) are some examples. In line with the objective of the current research, three categories of accidents were selected for collision risk assessment: fall of a container during ship-to-shore operation, dropping of a container during transfer operation, and collision between vehicles and objects within the terminal area. Various consequences, exposures and probabilities were then identified for each accident. Hence, reducing collision risks profoundly relies on picking the right risk exposures and probabilities for the selected accidents in order to prevent collision accidents in container terminals, and, in the framework of risk calculations, such risk exposures and probabilities can be useful in assessing the effectiveness of safety programs in ports.
Keywords: container terminal, collision, seaborne trade, risk exposure, risk probability
Procedia PDF Downloads 377
995 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific which causes alternating warm/cold conditions in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and it is therefore very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal floods with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of the teleconnection between ENSO and the river flow of the Surma and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed by correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma river from 1975 to 2015 are used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed from 1985 onwards. From the cumulative frequency distribution of the standardized JAS flow, it has been noted that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow. During an El Nino year (the warm episode of ENSO) this probability of exceedance drops to 23%, while in a La Nina year (the cold episode of ENSO) it increases to 78%. Discriminant analysis, known as 'categoric prediction', has been performed to identify the possibilities of long-lead flood forecasting; it helps to categorize the flow data (high, average and low) based on the classification of the predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma river the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management which will reduce possible damage to life, agriculture, and property.
Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
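The 'categoric prediction' step can be sketched as a linear discriminant analysis that maps an SST-based predictor to high/average/low flow classes; the file, feature construction and example anomaly values below are illustrative assumptions, not the study's data.

```python
# Sketch of categoric prediction via linear discriminant analysis (LDA).
# The CSV file, column names and example anomalies are assumptions.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

df = pd.read_csv("surma_jas_flow_vs_nino4.csv")     # hypothetical file
X = df[["nino4_sst_anomaly"]]                       # predictor: Nino 4 SST anomaly
y = df["flow_class"]                                # 'high' / 'average' / 'low'

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[-0.8]]))        # cold (La Nina-like) anomaly -> likely 'high'
print(lda.predict_proba([[0.9]]))   # warm (El Nino-like) anomaly class probabilities
```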
Procedia PDF Downloads 156
994 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement
Authors: Chao Xu
Abstract:
Selecting ground motion intensity measures reasonably is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize the damage potential of ground motion. The inelastic spectral displacement is calculated based on modal pushover analysis, and an incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with those of elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact on inelastic structural response of frequency components with periods larger than the fundamental period. The damage potential of ground motion on structures whose fundamental period elongates due to structural softening can be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on vulnerability analysis results.
Keywords: vulnerability, probabilistic seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis
Procedia PDF Downloads 354
993 Enhancing the Pricing Expertise of an Online Distribution Channel
Authors: Luis N. Pereira, Marco P. Carrasco
Abstract:
Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible and different prices for their services for different potential customers, considering the profile of e-consumers and the demand and market supply. This means that the fundamentals of dynamic pricing are based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer profile in order to improve the number of reservations of an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of each market segment was studied, considering the characteristics of the e-consumers and the probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how it is possible to improve the intelligence of an online distribution channel system through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate market segmentation policy in the use of online reservation systems is beneficial for service suppliers, because it brings a high probability of reservation and generates more profit than fixed pricing.
Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics
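The two building blocks described, non-hierarchical segmentation followed by per-segment price elasticity estimation, can be sketched as follows; the booking file, features, number of clusters and column names are assumptions for illustration, not the channel's actual data model.

```python
# Sketch of k-means segmentation of e-consumers followed by a per-segment
# log-log regression whose slope approximates price elasticity of demand.
# All file, feature and column names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("ota_bookings.csv")      # hypothetical booking records
features = df[["lead_time_days", "length_of_stay", "past_bookings", "avg_spend"]]

df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for seg, grp in df.groupby("segment"):
    # demand (number of bookings) per price point within the segment
    demand = grp.groupby("price")["booked"].sum()
    slope, _ = np.polyfit(np.log(demand.index.to_numpy()),
                          np.log(demand.to_numpy() + 1.0), 1)
    print(f"segment {seg}: estimated price elasticity ~= {slope:.2f}")
```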
Procedia PDF Downloads 235