Search results for: Weibull bi-parameter probability function
5561 Coupling Random Demand and Route Selection in the Transportation Network Design Problem
Authors: Shabnam Najafi, Metin Turkay
Abstract:
Network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while considering the route choice behavior of network users at the same time. NDP studies have mostly investigated random demand and route selection constraints separately due to computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem to address the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time function on each link. We consider a city with origin and destination nodes which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of roads and origin zones simultaneously. Minimizing the travel and expansion cost of routes and origin zones on one side and minimizing CO emission on the other side are considered in this analysis at the same time. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically the logit route choice model. Considering both demand and route choice as random is more applicable to the design of urban network programs.
Epsilon-constraint is one of the methods to solve both linear and nonlinear multi-objective problems, and it is the method used in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and the second objective as a constraint that should be less than an epsilon, where epsilon is an upper bound of the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved in GAMS, and the set of Pareto points is obtained. There are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
Keywords: epsilon-constraint, multi-objective, network design, stochastic
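The epsilon-constraint sweep described above can be sketched on a toy bi-objective problem. The two quadratic objectives, the grid, and the step count below are illustrative stand-ins, not the paper's GAMS model:

```python
# Toy epsilon-constraint sweep: minimize a stand-in cost objective f1 subject
# to a stand-in emission objective f2 <= eps, sweeping eps from the worst to
# the best emission value. All functions and values here are illustrative.
def f1(x):          # stand-in for the cost objective
    return x ** 2

def f2(x):          # stand-in for the emission objective
    return (x - 2) ** 2

grid = [i / 100 for i in range(201)]     # decision variable x in [0, 2]
f2_best = min(f2(x) for x in grid)
f2_worst = max(f2(x) for x in grid)

pareto = []
steps = 15                               # mirrors the 15 efficient solutions
for k in range(steps):
    eps = f2_worst - (f2_worst - f2_best) * k / (steps - 1)
    feasible = [x for x in grid if f2(x) <= eps]
    x_star = min(feasible, key=f1)       # minimize cost s.t. emission <= eps
    pareto.append((f1(x_star), f2(x_star)))
```

As eps tightens, the cost value rises while the emission value falls, tracing the same trade-off along the Pareto front that the abstract reports.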
Procedia PDF Downloads 647
5560 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm, combined with high tide and sea level rise, temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes with time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.
Keywords: flood risk, nuisance flooding, urban flooding, FMEA
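The rainfall counts quoted above already give a first-cut frequency estimate. The sketch below turns them into a monthly exceedance probability under a Poisson-arrival assumption, which is only a simplification of the paper's FMEA-based PoNF:

```python
import math

# Counts quoted in the abstract: 371 days with > 3 inches of rain recorded
# over 612 months (1970-2020, Florida Climate Center).
events, months = 371, 612
rate_per_month = events / months          # average torrential-rain days/month
rate_per_year = rate_per_month * 12       # roughly 7.3 such days per year

# Assuming events arrive as a Poisson process (an assumption, not the paper's
# method), the chance of at least one torrential-rain day in a given month:
p_at_least_one_month = 1 - math.exp(-rate_per_month)
```

Under this simplification, a torrential-rain day occurs in roughly 45% of months, which is why the nuisance-flooding threshold is exceeded so routinely in the study area.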
Procedia PDF Downloads 100
5559 Second Order Optimality Conditions in Nonsmooth Analysis on Riemannian Manifolds
Authors: Seyedehsomayeh Hosseini
Abstract:
Much attention has been paid over the centuries to understanding and solving the problem of minimization of functions. Compared to linear programming and nonlinear unconstrained optimization problems, nonlinear constrained optimization problems are much more difficult. Since the procedure of finding an optimizer is a search based on the local information of the constraints and the objective function, it is very important to develop techniques using the geometric properties of the constraints and the objective function. In fact, differential geometry provides a powerful tool to characterize and analyze these geometric properties. Thus, there is clearly a link between the techniques of optimization on manifolds and standard constrained optimization approaches. Furthermore, there are manifolds that are not defined as constrained sets in R^n; an important example is the Grassmann manifold. Hence, to solve optimization problems on these spaces, intrinsic methods are used. In a nondifferentiable problem, the gradient information of the objective function generally cannot be used to determine the direction in which the function is decreasing. Therefore, techniques of nonsmooth analysis are needed to deal with such a problem. As a manifold, in general, does not have a linear structure, the usual techniques, which are often used in nonsmooth analysis on linear spaces, cannot be applied, and new techniques need to be developed. This paper presents necessary and sufficient conditions for a strict local minimum of extended real-valued, nonsmooth functions defined on Riemannian manifolds.
Keywords: Riemannian manifolds, nonsmooth optimization, lower semicontinuous functions, subdifferential
Procedia PDF Downloads 361
5558 Current Drainage Attack Correction via Adjusting the Attacking Saw-Function Asymmetry
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
The current drainage attack suggested previously is further studied in the regular setting of a closed-loop controlled Brushless DC (BLDC) motor with a Kalman filter in the feedback loop. Modeling and simulation experiments are conducted in a Matlab environment, implementing the closed-loop control model of BLDC motor operation in position-sensorless mode under Kalman filter drive. The current increase in the motor windings is caused by the controller (a p-controller in our case) being affected by false data injection: the angular velocity estimates are substituted with distorted values, obtained by multiplication with a distortion coefficient whose values are taken from a distortion function synchronized in its periodicity with the rotor’s position change. A saw function with a triangular tooth shape is studied herewith for the purpose of carrying out the bias injection with current drainage consequences. The specific focus here is on how the asymmetry of the tooth in the saw function affects the flow of current drainage. The purpose is two-fold: (i) to produce and collect the signature of an asymmetric saw in the attack for a further pattern recognition process, and (ii) to determine the conditions for improving the stealthiness of such an attack via regulating the asymmetry of the saw function used. It is found that modifying the symmetry of the saw tooth affects the periodicity of the current drainage modulation. Specifically, the modulation frequency of the drained current for a fully asymmetric tooth shape coincides with the saw function modulation frequency itself. Increasing the symmetry parameter for the triangular tooth shape leads to an increase in the modulation frequency of the drained current. Moreover, this frequency reaches the switching frequency of the motor windings for fully symmetric triangular shapes, thus becoming undetectable and improving the stealthiness of the attack.
Therefore, the collected signatures of the attack can serve for attack parameter identification via the pattern recognition route.
Keywords: bias injection attack, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, current drainage, saw-function, asymmetry
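A minimal sketch of the attacking saw function with a tunable tooth symmetry, assuming a rotor phase normalized to [0, 1) and illustrative distortion bounds (the parameter names `sym`, `lo`, and `hi` are ours, not the paper's):

```python
def saw(phase, sym=0.5, lo=0.9, hi=1.1):
    """Distortion coefficient vs. normalized rotor phase; sym near 0 gives a
    fully asymmetric (sawtooth) tooth, sym = 0.5 a symmetric triangle."""
    phase = phase % 1.0
    rise = min(max(sym, 1e-6), 1 - 1e-6)     # rising-edge fraction of the period
    if phase < rise:
        frac = phase / rise                  # rising edge of the tooth
    else:
        frac = (1.0 - phase) / (1.0 - rise)  # falling edge of the tooth
    return lo + (hi - lo) * frac

# The injected false angular-velocity estimate is the true Kalman estimate
# scaled by the saw-shaped distortion coefficient:
def inject(omega_est, phase, sym):
    return omega_est * saw(phase, sym)
```

Sweeping `sym` from near 0 toward 0.5 reproduces the transition the abstract describes: the tooth goes from a fully asymmetric sawtooth to a symmetric triangle, changing the modulation frequency of the drained current.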
Procedia PDF Downloads 80
5557 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the developed power is lower than for other types of wind turbines, such as the Darrieus. To increase these performances, several studies have been carried out, such as optimizing the blade shape, using passive controls, and minimizing sources of power loss like the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing a User Defined Function, accounting for the resisting torque, into the CFD model. This User Defined Function is developed to simulate the action of the wind speed on the rotor; it receives the moment coefficient as an input and computes the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and the resisting torque, which is modeled as a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real functioning of the Savonius turbine exposed to wind. It is noticed that the wind turbine takes a while to reach the stationary regime where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances
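The torque-balance loop inside such a User Defined Function can be sketched as below. The inertia, the linear resisting-torque coefficients, and the stand-in aerodynamic moment are all assumed illustrative values, not the paper's CFD data:

```python
# Minimal sketch of the UDF torque-balance logic: integrate
# J * d(omega)/dt = M_aero(omega) - M_resist(omega) to the stationary regime.
J = 0.5              # rotor moment of inertia (kg m^2), assumed
a, b = 0.05, 0.02    # linear resisting torque M_r = a + b*omega, assumed
dt = 0.001           # time step (s)

def aero_torque(omega):
    # Placeholder for the aerodynamic moment the CFD solver would supply
    # to the UDF at each step (illustrative linear decay with omega).
    return 1.0 - 0.1 * omega

omega = 0.0
for _ in range(100000):
    m_net = aero_torque(omega) - (a + b * omega)   # net accelerating torque
    omega += dt * m_net / J                        # rotation imposed on the domain

# Stationary regime: net torque -> 0, so omega* = (1.0 - a) / (0.1 + b)
```

Once `omega` stops changing, the stationary regime of the abstract is reached, and the tip speed ratio and moment/power coefficients would be evaluated at that point.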
Procedia PDF Downloads 156
5556 A Parallel Implementation of k-Means in MATLAB
Authors: Dimitris Varsamis, Christos Talagkozis, Alkiviadis Tsimpiris, Paris Mastorocostas
Abstract:
The aim of this work is the parallel implementation of k-means in MATLAB, in order to reduce the execution time. Specifically, a new MATLAB function for the serial k-means algorithm is developed, which meets all the requirements for conversion to a MATLAB function with parallel computations. Additionally, two different variants for the definition of initial values are presented. In the sequel, the parallel approach is presented. Finally, performance tests for the computation times with respect to the numbers of features and classes are illustrated.
Keywords: K-means algorithm, clustering, parallel computations, Matlab
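The paper's implementation is in MATLAB; as an illustration of the same idea in Python (our assumption of how the work is split), the assignment step, which dominates each iteration, is distributed across worker threads:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def assign(chunk, centers):
    # Nearest-center index for each point in the chunk.
    labels = []
    for x in chunk:
        dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centers]
        labels.append(dists.index(min(dists)))
    return labels

def kmeans(data, k, iters=15, workers=4):
    centers = [list(data[0]), list(data[-1])]   # one simple init variant (k = 2)
    for _ in range(iters):
        # Parallelized assignment step: split the data across workers.
        chunks = [data[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(workers) as ex:
            chunk_labels = list(ex.map(lambda ch: assign(ch, centers), chunks))
        labels = [0] * len(data)
        for w, lc in enumerate(chunk_labels):   # reassemble in original order
            for idx, lab in zip(range(w, len(data), workers), lc):
                labels[idx] = lab
        # Serial update step: each center becomes its cluster mean.
        for j in range(k):
            members = [x for x, l in zip(data, labels) if l == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return centers, labels

random.seed(0)
data = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(50)]
        + [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(50)])
centers, labels = kmeans(data, k=2)
```

With CPython's GIL, the threads illustrate the decomposition rather than a true speedup; MATLAB's `parfor` workers execute on separate processes, which is where the measured reduction in execution time comes from.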
Procedia PDF Downloads 385
5555 The 10-year Risk of Major Osteoporotic and Hip Fractures Among Indonesian People Living with HIV
Authors: Iqbal Pramukti, Mamat Lukman, Hasniatisari Harun, Kusman Ibrahim
Abstract:
Introduction: People living with HIV have a higher risk of osteoporotic fracture than the general population. The purpose of this study was to predict the 10-year risk of fracture among people living with HIV (PLWH) using FRAX™ and to identify characteristics related to fracture risk. Methodology: This study consisted of 75 subjects. The ten-year probability of major osteoporotic fractures (MOF) and hip fractures (HF) was assessed using the FRAX™ algorithm. Cross-tabulation was used to identify participant characteristics related to fracture risk. Results: The overall mean 10-year probability of fracture was 2.4% (1.7) for MOF and 0.4% (0.3) for hip fractures. For the MOF score, participants with a parental hip fracture history, smoking behavior, or glucocorticoid use showed a higher score than those without (3.1 vs. 2.5; 4.6 vs. 2.5; and 3.4 vs. 2.5, respectively). For the HF score, participants with a parental hip fracture history, smoking behavior, or glucocorticoid use also showed a higher score than those without (0.5 vs. 0.3; 0.8 vs. 0.3; and 0.5 vs. 0.3, respectively). Conclusions: The 10-year risk of fracture was higher among PLWH with several factors, including parental hip fracture history, smoking behavior, and glucocorticoid use. Further analysis of the determining factors using multivariate regression with a larger sample size is required to confirm the factors associated with high fracture risk.
Keywords: HIV, PLWH, osteoporotic fractures, hip fractures, 10-year risk of fracture, FRAX
Procedia PDF Downloads 49
5554 A Hybrid Data-Handler Module Based Approach for Prioritization in Quality Function Deployment
Authors: P. Venu, Joeju M. Issac
Abstract:
Quality Function Deployment (QFD) is a systematic technique that creates a platform where customer responses can be positively converted to design attributes. The accuracy of a QFD process heavily depends on the data it handles, which is captured from customers or QFD team members. Customized computer programs that perform Quality Function Deployment within a stipulated time have been used by various companies across the globe. These programs rely heavily on the storage and retrieval of data in a common database. This database must act as a reliable source, with minimal missing or erroneous values, in order to perform an accurate prioritization. This paper introduces a missing/error data handler module which uses a Genetic Algorithm and fuzzy numbers. The prioritization of customer requirements for sesame oil is illustrated, and a comparison is made between the proposed data handler module-based deployment and manual deployment.
Keywords: hybrid data handler, QFD, prioritization, module-based deployment
Procedia PDF Downloads 297
5553 Thermodynamics during the Deconfining Phase Transition
Authors: Amal Ait El Djoudi
Abstract:
A thermodynamical model of coexisting hadronic and quark–gluon plasma (QGP) phases is used to study the thermally driven deconfining phase transition occurring between the two phases. A color singlet partition function is calculated for the QGP phase with two massless quarks, as in our previous work, but now the finite extensions of the hadrons are taken into account in the equation of state of the hadronic phase. In the present work, the finite-size effects on the system are examined by probing the behavior of some thermodynamic quantities, called response functions, such as the order parameter, the energy density, and their derivatives, over a range of temperatures around the transition at different volumes. It turns out that the finiteness of the system size has the effect of rounding the transition and smearing all the singularities occurring in the thermodynamic limit, and the additional finite-size effect introduced by the requirement of exact color-singletness involves a shift of the transition point. This shift, as well as the smearing of the transition region and the maxima of both the susceptibility and the specific heat, shows a scaling behavior with the volume characterized by scaling exponents. Another striking result is the large similarity noted between the behavior of these response functions and that of the cumulants of the probability density. This similarity is exploited to try to extract information concerning the occurring phase transition.
Keywords: equation of state, thermodynamics, deconfining phase transition, quark–gluon plasma (QGP)
Procedia PDF Downloads 427
5552 Covariance of the Queue Process Fed by Isonormal Gaussian Input Process
Authors: Samaneh Rahimirshnani, Hossein Jafari
Abstract:
In this paper, we consider fluid queueing processes fed by an isonormal Gaussian process. We study the correlation structure of the queueing process and the rate of convergence of the running supremum in the queueing process. The Malliavin calculus techniques are applied to obtain relations that show the workload process inherits the dependence properties of the input process. As examples, we consider two isonormal Gaussian processes, the sub-fractional Brownian motion (SFBM) and the fractional Brownian motion (FBM). For these examples, we obtain upper bounds for the covariance function of the queueing process and its rate of convergence to zero. We also discover that the rate of convergence of the queueing process is related to the structure of the covariance function of the input process.
Keywords: queue length process, Malliavin calculus, covariance function, fractional Brownian motion, sub-fractional Brownian motion
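As a runnable illustration in the simplest isonormal Gaussian case, H = 1/2 (ordinary Brownian motion rather than SFBM/FBM, an assumption made to keep the sketch self-contained), the discrete workload can be computed two equivalent ways: by the Lindley/Skorokhod reflection recursion and by the running-supremum representation Q(t) = sup over 0 <= s <= t of (X(t) - X(s) - c(t - s)):

```python
import random

# Discrete sketch of a fluid queue fed by a Gaussian input, special case
# H = 1/2 (Brownian motion; SFBM/FBM would need a correlated Gaussian
# generator and are omitted here). c is the constant drain (service) rate.
random.seed(1)
c, n, dt = 0.5, 500, 0.01
incr = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]   # BM increments

# Workload via the Lindley/Skorokhod reflection recursion:
q = [0.0]
for dx in incr:
    q.append(max(q[-1] + dx - c * dt, 0.0))

# Same workload via the running-supremum representation
# Q(t) = sup_{0 <= s <= t} (X(t) - X(s) - c*(t - s)):
x = [0.0]
for dx in incr:
    x.append(x[-1] + dx)
q_sup = max(x[n] - x[s] - c * dt * (n - s) for s in range(n + 1))
```

The two constructions agree path by path, which is why properties of the input process (its covariance structure) transfer directly to the workload process, as the paper establishes with Malliavin calculus for the SFBM and FBM inputs.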
Procedia PDF Downloads 65
5551 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
Richard's growth equation, being a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richard's growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its ability in model prediction was compared with the classical Richard's growth model, an approach which mimicked the natural variability of height/diameter increments with respect to age and therefore provides more realistic height/diameter predictions, using the coefficient of determination (R2), Mean Absolute Error (MAE), and Mean Square Error (MSE). The Kolmogorov-Smirnov test and the Shapiro-Wilk test were also used to test the behavior of the error term for possible violations. The mean function of top height/Dbh over age using the two models under study predicted the observed values of top height/Dbh more closely in the hyperbolic Richard's nonlinear growth model than in the classical Richard's growth model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richard's, stochastic
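The comparison criteria quoted above (R2, MAE, MSE) can be sketched as follows. The classical Richards curve is standard; the sinh-modified variant and all parameter values here are hypothetical illustrations, not the paper's fitted model:

```python
import math

# Classical Richards curve: H(t) = A * (1 - b*exp(-k*t)) ** (1/(1-m))
def richards(t, A, b, k, m):
    return A * (1 - b * math.exp(-k * t)) ** (1 / (1 - m))

# Hypothetical sinh-modified variant (illustrative only; the paper's exact
# hyperbolic form is not reproduced here):
def hyp_richards(t, A, b, k, m, theta):
    return A * (1 - b * math.exp(-k * t - theta * math.sinh(k * t))) ** (1 / (1 - m))

def fit_metrics(obs, pred):
    n = len(obs)
    mean = sum(obs) / n
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return {"R2": 1 - sse / sum((o - mean) ** 2 for o in obs),
            "MAE": sum(abs(o - p) for o, p in zip(obs, pred)) / n,
            "MSE": sse / n}

# Synthetic "observed" heights: a Richards curve plus a deterministic wiggle.
ages = list(range(1, 21))
obs = [richards(t, 30, 0.9, 0.25, 0.5) + 0.3 * math.sin(t) for t in ages]
metrics = fit_metrics(obs, [richards(t, 30, 0.9, 0.25, 0.5) for t in ages])
metrics_h = fit_metrics(obs, [hyp_richards(t, 30, 0.9, 0.25, 0.5, 0.01) for t in ages])
```

In the paper, the same three metrics are computed for both fitted models on the pine height/Dbh data; the model with higher R2 and lower MAE/MSE is preferred.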
Procedia PDF Downloads 480
5550 A Nonlinear Stochastic Differential Equation Model for Financial Bubbles and Crashes with Finite-Time Singularities
Authors: Haowen Xi
Abstract:
We propose and solve exactly a class of nonlinear generalizations of the Black-Scholes process of stochastic differential equations describing price bubble and crash dynamics. As a result of nonlinear positive feedback, the faster-than-exponential positive price growth (bubble forming) and negative price growth (crash forming) are found to be power-law finite-time singularities, in which bubble and crash price formation ends at a finite critical time tc. While most literature on the market bubble and crash process focuses on the nonlinear positive feedback mechanism, very few studies concern the noise level in the same process. The present work adds to the market bubble and crash literature by studying the influence of external noise sources on the critical time tc of bubble forming and crash forming. Two main results will be discussed: (1) the analytical expression of the expected value of the critical time
Keywords: bubble, crash, finite-time-singular, numerical simulation, price dynamics, stochastic differential equations
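The deterministic skeleton of such a finite-time singularity can be written down directly; the stochastic run below adds multiplicative noise (with illustrative parameter values, not the paper's) only to show numerically that noise shifts the observed blow-up time around the analytic tc:

```python
import math, random

# Deterministic skeleton: dp/dt = c * p**m with m > 1 blows up at the
# finite critical time tc = p0**(1 - m) / (c * (m - 1)).
c, m, p0 = 1.0, 2.0, 1.0
tc = p0 ** (1 - m) / (c * (m - 1))         # = 1.0 for these values

def p_exact(t):
    # Closed-form solution, valid only for t < tc.
    return (p0 ** (1 - m) - c * (m - 1) * t) ** (-1 / (m - 1))

# Euler-Maruyama run of dp = c*p**m dt + sigma*p dW (illustrative noise):
random.seed(2)
sigma, dt = 0.2, 1e-4
price, t, t_hit = p0, 0.0, None
while t < 2.0:
    price += c * price ** m * dt + sigma * price * random.gauss(0.0, math.sqrt(dt))
    t += dt
    if price > 1e6:
        t_hit = t                          # numerical proxy for the critical time
        break
```

With noise switched off the blow-up occurs exactly at tc; with multiplicative noise the hitting time `t_hit` scatters around tc, which is the effect the abstract sets out to quantify analytically.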
Procedia PDF Downloads 132
5549 Democratic Political Culture of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok
Authors: Vilasinee Jintalikhitdee, Phusit Phukamchanoad, Sakapas Saengchai
Abstract:
This research aims to study the level of democratic political culture and the factors that affect the democratic political culture of 5th and 6th graders under the authority of Dusit District Office, Bangkok, by using stratified sampling for probability sampling and purposive sampling for non-probability sampling, collecting data through the distribution of questionnaires to 300 respondents. This covers all of the schools under the authority of Dusit District Office. The researcher analyzed the data using descriptive statistics, which include the arithmetic mean and standard deviation, and inferential statistics, namely the Independent Samples T-test (T-test) and One-Way ANOVA (F-test). The researcher also collected data by interviewing the target groups and then analyzed the data by means of descriptive analysis. The results show that 5th and 6th graders under the authority of Dusit District Office, Bangkok have been exposed to democratic political culture at a high level overall. When considering each part, it was found that the item with the highest mean is the statement "the constitutional democratic governmental system is suitable for Thailand". The item with the lowest mean is the statement "corruption (cheating and fraud) is normal in Thai society". The factors that affect democratic political culture are grade level, mothers' occupations, and attention to news and political movements.
Keywords: democratic, political culture, political movements, democratic governmental system
Procedia PDF Downloads 266
5548 Analysis of Adipose Tissue-Derived Mesenchymal Stem Cells under Atherosclerosis Microenvironment
Authors: Do Khanh Vy, Vuong Cat Khanh, Osamu Ohneda
Abstract:
During atherosclerosis (AS) progression, perivascular adipose tissue-derived mesenchymal stem cells (PVAT-MSCs) are exposed to a hypoxic environment due to oxygen deprivation, which might influence adipose tissue-derived mesenchymal stem cell (AT-MSC) function. Additionally, it has been reported that the angiogenic ability of subcutaneous AT-MSCs (SAT-MSCs) is impaired in AS patients. However, up to now, the effects of AS on the characteristics and function of PVAT-MSCs have not been clarified. In the present study, we analyzed the effects of the AS microenvironment on the characteristics and function of AT-MSCs. We found no significant difference in cellular morphology or differentiation ability between SAT-MSCs and PVAT-MSCs in AS patients. However, the proliferation of AS-derived PVAT-MSCs was less than that of AS-derived SAT-MSCs. Importantly, the migration of AS-derived PVAT-MSCs was faster than that of AS-derived SAT-MSCs. Of note, AS-derived PVAT-MSCs showed upregulation of SDF1, which is related to homing, and VEGF, which is related to angiogenesis, compared to AS-derived SAT-MSCs. Consistent with these results, AS-derived PVAT-MSCs showed a higher ability to recruit EPCs and ECs than AS-derived SAT-MSCs. In addition, EPCs and ECs cultured in the presence of AS-derived PVAT-MSC conditioned medium showed higher angiogenic function in the tube formation assay compared to those cultured in AS-derived SAT-MSC conditioned medium. This result suggests that the higher paracrine effects of AS-derived PVAT-MSCs support the angiogenic function of the target cells. Our data showed the different characteristics and functions of AT-MSCs derived from different tissue sources. Under the AS microenvironment, it seems that the characteristics and functions of PVAT-MSCs might reflect the progression of AS.
Further study will be necessary to clarify the mechanism in the future.
Keywords: atherosclerosis, mesenchymal stem cells, perivascular adipose tissue, subcutaneous adipose tissue
Procedia PDF Downloads 161
5547 Modelling Hydrological Time Series Using Wakeby Distribution
Authors: Ilaria Lucrezia Amerise
Abstract:
The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for the interpolation of the hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or rather, convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood, in this case, is more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates obtained by providing indications of their variability and, therefore, their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution
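Because the Wakeby law is defined only through its quantile function, simulation and likelihood work go through inversion. The quantile form below is the standard Hosking parameterization; the parameter values are illustrative only, not fitted to any precipitation record:

```python
import random

# Hosking's Wakeby quantile function: for U in (0, 1),
# x(U) = xi + (alpha/beta)*(1 - (1-U)**beta) - (gamma/delta)*(1 - (1-U)**(-delta))
def wakeby_quantile(u, xi, alpha, beta, gamma, delta):
    return (xi
            + (alpha / beta) * (1 - (1 - u) ** beta)
            - (gamma / delta) * (1 - (1 - u) ** (-delta)))

# Since only the quantile function is explicit, sampling is by inversion:
random.seed(0)
params = dict(xi=0.0, alpha=3.0, beta=0.5, gamma=0.8, delta=0.3)  # illustrative
sample = [wakeby_quantile(random.random(), **params) for _ in range(5000)]
```

With delta > 0 the upper tail decays like a power law, which is exactly the heavy-tail behavior that makes the Wakeby attractive for extreme precipitation and that complicates both PWM and maximum likelihood estimation.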
Procedia PDF Downloads 139
5546 Frame to Frameless: Stereotactic Operation Progress in Robot Time
Authors: Zengmin Tian, Bin Lv, Rui Hui, Yupeng Liu, Chuan Wang, Qing Liu, Hongyu Li, Yan Qi, Li Song
Abstract:
Objective: Robots have been used to replace the frame in recent years. This paper investigates the safety and effectiveness of frameless stereotactic surgery in the treatment of children with cerebral palsy. Methods: The clinical data of 425 children with spastic cerebral palsy were retrospectively analyzed. The patients were treated with robot-assisted frameless stereotactic surgery of nuclear mass destruction. Motor function was evaluated by the Gross Motor Function Measure-88 (GMFM-88) before the operation, and 1 week and 3 months after the operation, respectively. Statistical analysis was performed. Results: Postoperative CT showed that the destruction area covered the predetermined target in all patients. Minimal bleeding of the puncture channel occurred in 2 patients, and mild fever in 3 cases. No other severe surgical complications occurred. The GMFM-88 scores were 49.1±22.5 before the operation, and 52.8±24.2 and 64.2±21.4 at 1 week and 3 months after the operation, respectively. There was a statistical difference between before and after the operation (P<0.01). After 3 months, the total effective rate was 98.1%, and the average improvement rate of motor function was 24.3%. Conclusion: Replacing the traditional frame, robot-assisted frameless stereotactic surgery is safe and reliable for children with spastic cerebral palsy, and has positive significance in improving patients’ motor function.
Keywords: cerebral palsy, robotics, stereotactic techniques, frameless operation
Procedia PDF Downloads 89
5545 Impact of Violence against Women on Small and Medium Enterprises (SMEs) in Rural Sindh: A Case Study of Kandhkot
Authors: Mohammad Shoaib Khan, Abdul Sattar Bahalkani
Abstract:
This research investigates violence against women and its impact on SMEs in Sindh. The main objective of the current research is to examine women's empowerment through women's participation in small and medium enterprises in upper Sindh. The data were collected from 500 respondents from Kandhkot District, using a simple random sampling technique. A structured questionnaire was designed as an instrument for measuring the impact of SME business on women's empowerment in rural Sindh. It was revealed that rural women are less confident, and their husbands always give them a hard time once they venture outside the boundaries of the house. It was also revealed that rural women make a major contribution to social, economic, and political development. It was further revealed that women receive low wages, and due to the non-availability of market facilities, they are paid low wages. The negative impacts of husbands' income and of having children aged 0-6 years are also significant. High income of other household members raises the reservation wage of mothers, thus lowering the probability of participation when the objective of working is to help with the family's financial needs. The impact of childcare on mothers' labor force participation is significant, but not as the theory predicted. The probability of participation in the labor force is significantly higher for women who live in urban areas, where job opportunities are greater compared to rural areas.
Keywords: empowerment, violence against women, SMEs, rural
Procedia PDF Downloads 331
5544 Parameter Estimation for the Mixture of Generalized Gamma Model
Authors: Wikanda Phaphan
Abstract:
The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. The findings showed that the probability density function (pdf) is fairly complex, which creates problems in estimating the parameters. The problem in parameter estimation is that the estimators cannot be calculated in closed form, so numerical estimation must be used to find them. In this study, we present a new method of parameter estimation using the expectation–maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method, which is used for estimating α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. We use the Monte Carlo technique to assess the estimators' performance. Sample sizes of 10, 30, and 100 were considered, and the simulations were repeated 20 times in each case. We evaluated the effectiveness of the estimators by considering the values of the mean squared errors and the bias. The findings revealed that the EM algorithm had proximity to the actual values determined. Also, the maximum likelihood estimators via the conjugate gradient and quasi-Newton methods are less precise than the maximum likelihood estimators via the EM algorithm.
Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method
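To illustrate the EM idea on this mixture, the sketch below uses the fact that the length-biased generalized gamma is again a generalized gamma with its shape index shifted by one, and runs EM for the mixing weight alone, with the other parameters held fixed. This is a deliberate simplification of the paper's full four-parameter problem, and all numeric values are illustrative:

```python
import math, random

# Stacy's generalized gamma pdf:
# f(x; a, d, p) = p * x**(d-1) * exp(-(x/a)**p) / (a**d * Gamma(d/p))
def gg_pdf(x, a, d, p):
    return p * x ** (d - 1) * math.exp(-(x / a) ** p) / (a ** d * math.gamma(d / p))

# Sampling: X = a * G**(1/p) with G ~ Gamma(d/p, 1).
def gg_sample(a, d, p):
    return a * random.gammavariate(d / p, 1.0) ** (1.0 / p)

# Key fact used here: the length-biased GG(a, d, p) is again GG(a, d+1, p).
random.seed(3)
a, d, p = 2.0, 1.5, 1.2
w_true = 0.3                       # true mixing weight (illustrative)
data = [gg_sample(a, d, p) if random.random() < w_true else gg_sample(a, d + 1, p)
        for _ in range(2000)]

pdf1 = [gg_pdf(x, a, d, p) for x in data]        # ordinary component
pdf2 = [gg_pdf(x, a, d + 1, p) for x in data]    # length-biased component

# EM for the mixing weight alone (shapes and scale held fixed):
w = 0.5
for _ in range(100):
    # E-step: responsibility of the ordinary component for each point.
    resp = [w * f1 / (w * f1 + (1 - w) * f2) for f1, f2 in zip(pdf1, pdf2)]
    # M-step: the weight update is the mean responsibility.
    w = sum(resp) / len(resp)
```

The paper's full method iterates analogous E- and M-steps (combined with conjugate gradient and quasi-Newton updates) over all four parameters, not just the weight.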
Procedia PDF Downloads 219
5543 Exploring the Entrepreneur-Function in Uncertainty: Towards a Revised Definition
Authors: Johan Esbach
Abstract:
The entrepreneur has traditionally been defined through various historical lenses emphasising individual traits, risk-taking, speculation, innovation, and firm creation. However, these definitions often fail to address the dynamic nature of modern entrepreneurial functions, which respond to unpredictable uncertainties and transition to routine management as certainty is achieved. This paper proposes a revised definition, positioning the entrepreneur as a dynamic function rather than a human construct: one that emerges to address specific uncertainties in economic systems but fades once uncertainty is resolved. By examining historical definitions and their limitations, including the works of Cantillon, Say, Schumpeter, and Knight, this paper identifies a gap in the literature and develops a generalised definition of the entrepreneur. The revised definition challenges conventional thought by shifting focus from static attributes such as alertness, traits, and firm creation to a dynamic role that includes reliability, adaptation, scalability, and adaptability. The methodology employs a mixed approach, combining theoretical analysis and case study examination to explore the dynamic nature of the entrepreneurial function in relation to uncertainty. The selected case studies include companies such as Airbnb, Uber, Netflix, and Tesla, as these firms demonstrate a clear transition from entrepreneurial uncertainty to routine certainty. The data from the case studies are analysed qualitatively, focusing on the patterns of the entrepreneurial function across the selected companies. These results are then validated quantitatively using an independent survey. The primary finding validates the entrepreneur as a dynamic function rather than a static, human-centric role.
In considering the transition from uncertainty to certainty in companies like Airbnb, Uber, Netflix, and Tesla, the study shows that the entrepreneurial function emerges explicitly to address market, technological, or social uncertainties. Once these uncertainties are resolved and certainty in the operating environment is established, the need for the entrepreneurial function ceases, giving way to routine management and business operations. The paper emphasises the need for a definitive model that responds to the temporal and contextualised nature of the entrepreneur. Under the revised definition, the entrepreneur is positioned to play a crucial role in the reduction of uncertainties within economic systems; once the uncertainties are addressed, certainty is manifested in new combinations or new firms. Finally, the paper outlines policy implications for fostering environments that enable the entrepreneurial function and transition theory.
Keywords: dynamic function, uncertainty, revised definition, transition
Procedia PDF Downloads 21
5542 Merging Appeal to Ignorance, Composition, and Division Argument Schemes with Bayesian Networks
Authors: Kong Ngai Pei
Abstract:
The argument scheme approach to argumentation has two components. One is to identify the recurrent patterns of inference used in everyday discourse. The second is to devise critical questions to evaluate the inferences in these patterns. Although this approach is intuitive and contains many insightful ideas, it is not free of problems. One is that, because it disavows the probability calculus, it cannot give the exact strength of an inference. To tackle this problem, and thereby pave the way to a more complete normative account of argument strength, it has been proposed that the most promising way forward is to combine the scheme-based approach with Bayesian networks (BNs). This paper pursues this line of thought, attempting to combine three common schemes, Appeal to Ignorance, Composition, and Division, with BNs. In the first part, it is argued that most (if not all) formulations of the critical questions corresponding to these schemes in the current argumentation literature are incomplete and not very informative. To remedy these flaws, more thorough and precise formulations of these questions are provided. The second part shows how to use graphical idioms (e.g. measurement and synthesis idioms) to translate the schemes, as well as their corresponding critical questions, into the graphical structure of BNs, and how to define the probability tables of the nodes using functions of various sorts. In the final part, it is argued that many misuses of these schemes, traditionally called fallacies with the same names as the schemes, can indeed be adequately accounted for by the BN models proposed in this paper.
Keywords: appeal to ignorance, argument schemes, Bayesian networks, composition, division
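The probabilistic reading of the appeal to ignorance can be sketched as a two-node Bayesian calculation over a hypothesis H and an evidence-search outcome; the prior and detection probabilities below are illustrative choices, not values from the paper's BN models.

```python
def appeal_to_ignorance(prior, p_find_if_true, p_find_if_false=0.0):
    # Posterior P(H | evidence NOT found) by Bayes' rule: the inference
    # "no evidence for H, so H is false" is strong only when a search
    # would very likely have found evidence were H true.
    p_no_evidence = ((1.0 - p_find_if_true) * prior
                     + (1.0 - p_find_if_false) * (1.0 - prior))
    return (1.0 - p_find_if_true) * prior / p_no_evidence

# Cursory search: absence of evidence barely lowers belief in H
weak = appeal_to_ignorance(prior=0.5, p_find_if_true=0.2)
# Thorough search: absence of evidence strongly disconfirms H
strong = appeal_to_ignorance(prior=0.5, p_find_if_true=0.95)
```

This mirrors one natural critical question for the scheme ("how thorough was the search?") as a single conditional probability, which is exactly the kind of quantity a BN node table would hold.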
Procedia PDF Downloads 286
5541 Fuzzy Availability Analysis of a Battery Production System
Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz
Abstract:
In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of a product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers aiming at high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate that allows the company to realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures that help reliability engineers and practitioners improve system performance by adopting suitable maintenance strategies. Conventional availability analysis requires the failure and repair time probability distributions of each component in the system to be known. However, companies generally do not have statistics or quality control departments to store such large amounts of data; real events or situations are described deterministically instead of with the stochastic data needed for a complete description of real systems. Fuzzy set theory is an alternative for analyzing the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach for computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first application to use fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners in industry to apply to any repairable production system, and it enables reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Compared with classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners analyzing system qualifications under their working conditions. Thus, much more detailed information about system characteristics is obtained.
Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)
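The core computation can be sketched as interval arithmetic on triangular fuzzy numbers; the MTBF/MTTR values and the 20% spread below are made-up illustrations of the approach, not the case-study data.

```python
def tfn(center, spread):
    # Triangular fuzzy number (lo, mid, hi) from a crisp value and a
    # symmetric percentage spread, e.g. spread=0.20 for +/-20 %.
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    # A = MTBF / (MTBF + MTTR) is increasing in MTBF and decreasing in
    # MTTR, so the extreme cases pair opposite endpoints of the TFNs.
    lo = mtbf[0] / (mtbf[0] + mttr[2])
    mid = mtbf[1] / (mtbf[1] + mttr[1])
    hi = mtbf[2] / (mtbf[2] + mttr[0])
    return (lo, mid, hi)

mtbf = tfn(120.0, 0.20)   # hypothetical MTBF of 120 h, +/-20 % spread
mttr = tfn(8.0, 0.20)     # hypothetical MTTR of 8 h
avail = fuzzy_availability(mtbf, mttr)
```

The resulting triple is itself read as a fuzzy availability: managers get a pessimistic bound, a most-likely value, and an optimistic bound instead of a single crisp number.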
Procedia PDF Downloads 224
5540 Evaluation and Association of Thyroid Function Tests with Liver Function Parameters LDL and LDH Level Before and after I131 Therapy
Authors: Sabika Rafiq, Rubaida Mehmood, Sajid Hussain, Atia Iqbal
Abstract:
Background and objectives: The pathogenesis of liver function abnormalities and cardiac dysfunction in hyperthyroid patients after I131 treatment is still unclear. This study aimed to determine the effects of radioiodine I131 on liver function parameters, lactate dehydrogenase (LDH), and low-density lipoproteins (LDL) before and after I131 therapy in hyperthyroidism patients. Material and methods: A total of 52 hyperthyroidism patients recommended for I131 were involved in this study, with ages ranging from 12-65 years (mean age = 38.6±14.8; BMI = 11.5±3.7). The significance of the differences between the results of the 1st, 2nd, and 3rd serum analyses was assessed by the unpaired Student's t-test. Associations between the parameters were assessed by Spearman correlation analysis. Results: Significant variations were observed in the thyroid profile for FT3 (p=0.04), FT4 (p=0.01), and TSH (p=0.005) during the follow-up treatment. Before taking I131 (1st serum analysis), negative correlations of FT3 with AST (r=-0.458, p=0.032) and LDL (r=-0.454, p=0.039) were observed. At the 2nd analysis (after stopping carbimazole), no significant correlation was found. Two months after the administration of I131 drops, significant negative associations of FT3 (r=-0.62, p=0.04) and FT4 (r=-0.61, p=0.02) with ALB were observed. FT3 (r=-0.82, p=0.00) and FT4 (r=-0.71, p=0.00) also showed negative correlations with LDL after I131 therapy, whereas TSH showed significant positive associations with ALB (r=0.61, p=0.01) and LDL (r=0.70, p=0.00). Conclusion: The current findings suggest that the association of TFTs with biochemical parameters in patients with goiter recommended for iodine therapy is an important diagnostic and therapeutic tool.
The significant increases in transaminases and low-density lipoprotein levels after taking I131 drops are alarming signs for heart and liver function abnormalities and warrant physicians' attention on an urgent basis.
Keywords: hyperthyroidism, carbimazole, radioiodine I131, liver functions, low-density lipoprotein
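The Spearman analysis the study relies on can be sketched in a few lines; this minimal version assumes no tied values (real clinical data would need average ranks for ties), and the paired FT3/LDL readings are invented for illustration.

```python
def spearman(x, y):
    # Spearman rank correlation: the Pearson correlation of the ranks.
    # Minimal sketch assuming no tied values (ties need average ranks).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired FT3 and LDL readings (illustrative numbers only)
ft3 = [2.1, 3.4, 4.8, 5.0, 6.2]
ldl = [130, 121, 103, 98, 90]
rho = spearman(ft3, ldl)   # strictly monotone decreasing -> rho = -1.0
```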
Procedia PDF Downloads 155
5539 Group Decision Making through Interval-Valued Intuitionistic Fuzzy Soft Set TOPSIS Method Using New Hybrid Score Function
Authors: Syed Talib Abbas Raza, Tahseen Ahmed Jilani, Saleem Abdullah
Abstract:
This paper presents an interval-valued intuitionistic fuzzy soft set (IVIFSS) based TOPSIS method for group decision making. The interval-valued intuitionistic fuzzy soft set is a hybrid of the interval-valued intuitionistic fuzzy set and the soft set. In group decision making problems, the IVIFSS makes the process much more algebraically elegant. We use the weighted arithmetic averaging operator for aggregating the information and define a new hybrid score function as a metric tool for comparison between interval-valued intuitionistic fuzzy values. In an illustrative example, we apply the developed method to a criminological problem: a group decision making model for integrating the imprecise and hesitant evaluations of multiple law enforcement agencies working on target killing cases in the country.
Keywords: group decision making, interval-valued intuitionistic fuzzy soft set, TOPSIS, score function, criminology
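The abstract does not reproduce the hybrid score function itself; as a stand-in, the sketch below uses the conventional score of an interval-valued intuitionistic fuzzy value (half the sum of membership minus non-membership endpoints) to rank two hypothetical alternatives.

```python
def ivif_score(mu, nu):
    # Conventional score of an interval-valued intuitionistic fuzzy
    # value ([mu_l, mu_u], [nu_l, nu_u]): higher means "more true".
    return (mu[0] - nu[0] + mu[1] - nu[1]) / 2.0

# Two hypothetical alternatives rated on a single criterion
a = ivif_score((0.5, 0.7), (0.1, 0.2))   # strongly supported
b = ivif_score((0.3, 0.4), (0.4, 0.5))   # weakly supported
```

In a TOPSIS pipeline, a score function of this kind is what lets aggregated interval-valued intuitionistic values be compared and ordered.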
Procedia PDF Downloads 604
5538 A Distribution Free Test for Censored Matched Pairs
Authors: Ayman Baklizi
Abstract:
This paper discusses the problem of testing hypotheses about the lifetime distributions of a matched pair based on censored data. A distribution-free test based on a runs statistic is proposed. Its null distribution and power function are found in a simple, convenient form. Some properties of the test statistic and its power function are studied.
Keywords: censored data, distribution free, matched pair, runs statistics
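The runs statistic at the heart of such a test can be sketched as counting maximal blocks of identical comparison outcomes; the sign sequence below is invented for illustration and ignores the censoring adjustments the paper develops.

```python
def runs_count(signs):
    # Number of runs: 1 plus the number of adjacent positions where
    # the outcome symbol changes (e.g. "+" to "-").
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Signs of hypothetical within-pair differences, ordered by pair index
signs = "++--+-++"
r = runs_count(signs)   # runs: ++, --, +, -, ++  ->  5
```

Under the null hypothesis of identical lifetime distributions, the signs are exchangeable, so unusually few or many runs signal a systematic difference between pair members.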
Procedia PDF Downloads 287
5537 Cellular Automata Using Fractional Integral Model
Authors: Yasser F. Hassan
Abstract:
In this paper, a proposed model of cellular automata is studied by means of a fractional integral function. A cellular automaton is a decentralized computing model providing an excellent platform for performing complex computation with the help of only local information. The paper discusses how a fractional integral function can be used to represent cellular automaton memory or state. The architecture of the computing and learning model is given, along with results from calibrating the approach.
Keywords: fractional integral, cellular automata, memory, learning
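The abstract does not spell out the fractional-integral representation, but the local-update mechanism it builds on can be sketched as a standard elementary cellular automaton; the rule number and initial row below are arbitrary choices, not the paper's model.

```python
def step(cells, rule=30):
    # One synchronous update of an elementary (radius-1, binary) CA
    # with periodic boundaries; `rule` is the Wolfram rule number and
    # each 3-cell neighbourhood indexes a bit of it.
    n = len(cells)
    out = []
    for i in range(n):
        neigh = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neigh) & 1)
    return out

row = [0, 0, 1, 0, 0]
nxt = step(row)   # a single 1 under rule 30 spreads to its neighbours
```

A memory-bearing variant would make each cell's effective state a weighted integral over its past states rather than the instantaneous value used here.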
Procedia PDF Downloads 413
5536 Wind Energy Potential of Southern Sindh, Pakistan for Power Generation
Authors: M. Akhlaque Ahmed, Maliha Afshan Siddiqui
Abstract:
A study has been carried out to assess the wind power potential of southern Sindh, namely Karachi, Hawksbay, Nooriabad, Hyderabad, Ketibander, and Shahbander, using local wind speed data. The monthly average wind speed for these areas ranges from 4.5 m/sec to 8.5 m/sec at 30 m height from the ground. Extractable wind power, wind energy, and the Weibull parameters for the above-mentioned areas have been examined. Furthermore, the power output using fast and slow wind machines with different blade diameters, along with 4 kW and 20 kW aero-generators, was examined to assess their possible use for deep-well pumping and electricity supply to remote villages. The analysis reveals that in this wind corridor of southern Sindh, Hawksbay, Ketibander, and Shahbander belong to wind power class 3; Hyderabad and Nooriabad belong to wind power class 5; and Karachi belongs to wind power class 2. The results show that higher wind speed values occur between June and August. Considering the locations with maximum wind speed, Hawksbay and Nooriabad are the best locations for setting up wind machines for power generation.
Keywords: wind energy generation, Southern Sindh, seasonal change, Weibull parameter, wind machines
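Fitting the two-parameter Weibull to site wind speeds can be sketched with the standard moment approximation; the twelve monthly speeds below are invented to match the 4.5-8.5 m/s range reported, not the study's measured data.

```python
import math

def weibull_params(speeds):
    # Empirical-moment fit of the two-parameter Weibull: the shape k
    # uses the common approximation k = (sigma / mean)^(-1.086), and
    # the scale c follows from c = mean / Gamma(1 + 1/k).
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / (n - 1)
    k = (math.sqrt(var) / mean) ** (-1.086)
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

def mean_power_density(speeds, rho=1.225):
    # Mean wind power density in W/m^2: P = 0.5 * rho * E[v^3].
    return 0.5 * rho * sum(v ** 3 for v in speeds) / len(speeds)

# Hypothetical monthly mean speeds (m/s) within the reported range
speeds = [4.5, 5.2, 6.1, 6.8, 7.4, 8.5, 8.1, 7.9, 6.5, 5.8, 5.0, 4.7]
k, c = weibull_params(speeds)
pd = mean_power_density(speeds)
```

Because power scales with the cube of speed, the E[v³] term (not the mean speed alone) drives the class assignment of a site.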
Procedia PDF Downloads 149
5535 Transitivity, Mood and Modality Analysis in Malaysian News Headlines on Healthy Eating
Authors: Faith Fang Xi Ooi, Kam-Fong Lee
Abstract:
Headlines are generally a summary of the content of news articles. With the added influence of hectic lifestyles, readers may rely solely on the headlines for information. In the media, what is reported concerning health issues is largely government responses and community involvement. There is a need for a call to action to curb health issues, not just reporting on what the government is doing about them; in other words, linguistic elements with a persuasive communicative function should be realized when reporting on health issues. Hence, this paper aims at identifying and analyzing the transitivity, Mood, and Modality systems in two hundred news headlines from two Malaysian online news portals, namely The Star Online and New Straits Times. The study employs the purposive sampling method to obtain news headlines on healthy eating using the search keyword ‘healthy eating’ and is based on Halliday’s Systemic Functional Linguistics (SFL) framework. The results show that the Material process dominates the process types, along with its participants of Scope and Goal. The mood type that constitutes most of the headlines in the two newspapers is the declarative mood. Moreover, for Modality, median Probability is the most frequent in the headlines on healthy eating. This study can serve as a source of reference for news writers and producers in constructing news headlines, and for health campaign strategists seeking persuasive appeals to influence the public's behaviors and attitudes towards healthy eating.
Keywords: healthy eating, modality, mood, news headlines, SFL
Procedia PDF Downloads 172
5534 Establishing a Computational Screening Framework to Identify Environmental Exposures Using Untargeted Gas-Chromatography High-Resolution Mass Spectrometry
Authors: Juni C. Kim, Anna R. Robuck, Douglas I. Walker
Abstract:
The human exposome, which includes chemical exposures over the lifetime and their effects, is now recognized as an important measure for understanding human health; however, the complexity of the data makes the identification of environmental chemicals challenging. The goal of our project was to establish a computational workflow for the improved identification of environmental pollutants containing chlorine or bromine. Using the “pattern.search” function available in the R package NonTarget, we wrote a multifunctional script that searches mass spectral clusters from untargeted gas-chromatography high-resolution mass spectrometry (GC-HRMS) for spectra consistent with chlorine- and bromine-containing organic compounds. The “pattern.search” function was incorporated into a new function that allows the evaluation of clusters containing multiple analyte fragments, has multi-core support, and provides a simplified output listing compounds containing chlorine and/or bromine. The new function was able to process 46,000 spectral clusters in under 8 seconds and identified over 150 potential halogenated spectra. We next applied our function to a deidentified dataset from patients diagnosed with primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and healthy controls. Twenty-two spectra corresponded to potential halogenated compounds in the PSC and PBC dataset, six of which differed significantly in PBC patients and four in PSC patients. We have developed an improved algorithm for detecting halogenated compounds in GC-HRMS data, providing a strategy for prioritizing exposures in the study of human disease.
Keywords: exposome, metabolome, computational metabolomics, high-resolution mass spectrometry, exposure, pollutants
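The underlying isotope-pattern test can be sketched outside R as well. The function below is a simplified stand-in for the NonTarget-based script (not its implementation), with made-up peaks: it flags M/M+2 pairs spaced ~1.997 Da apart with the intensity ratios expected for one Cl (~0.32) or one Br (~0.97).

```python
def flag_halogen(peaks, tol_mz=0.01, tol_ratio=0.15):
    # Screen a centroided spectrum (list of (m/z, intensity)) for the
    # characteristic M / M+2 spacing (37Cl-35Cl = 1.997 Da) with the
    # M+2/M intensity ratio expected for one Cl (~0.32) or one Br (~0.97).
    hits = []
    for mz, inten in peaks:
        for mz2, inten2 in peaks:
            if abs((mz2 - mz) - 1.997) <= tol_mz and inten > 0:
                ratio = inten2 / inten
                if abs(ratio - 0.320) <= tol_ratio:
                    hits.append((mz, "Cl"))
                elif abs(ratio - 0.973) <= tol_ratio:
                    hits.append((mz, "Br"))
    return hits

# Two synthetic M / M+2 pairs: one chlorinated, one brominated analyte
peaks = [(250.0, 1000.0), (251.997, 320.0), (300.0, 1000.0), (301.998, 970.0)]
hits = flag_halogen(peaks)
```

Multi-halogen compounds show richer patterns (e.g. M+4 peaks), which is why production tools match full theoretical isotope envelopes rather than single ratios.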
Procedia PDF Downloads 138
5533 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from the source node to the destination node, whereas reliability refers to the probability of a successful connection from source to destination. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measures the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc carries three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs; five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In summary, a transportation network is analyzed by an extended flow network model in which each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently, and computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
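The complete-enumeration baseline can be illustrated on a toy network; the two parallel source-to-sink arcs and their numbers below are invented, and this toy skips the paper's min-cost max-flow decomposition in favor of brute-force state enumeration.

```python
from itertools import product

# A toy two-arc parallel network from source to sink.  Each arc has
# (capacity, travel time, operation probability); the values are
# illustrative, not the paper's 11-node / 21-arc benchmark.
arcs = [
    {"cap": 5, "time": 3, "p": 0.9},
    {"cap": 4, "time": 5, "p": 0.8},
]

def qos(arcs, demand, time_limit):
    # Complete enumeration over up/down states of every arc: QoS is the
    # total probability of states whose working arcs within the travel
    # time limit can jointly carry `demand`.
    total = 0.0
    for state in product([0, 1], repeat=len(arcs)):
        prob = 1.0
        cap = 0
        for up, arc in zip(state, arcs):
            prob *= arc["p"] if up else 1.0 - arc["p"]
            if up and arc["time"] <= time_limit:
                cap += arc["cap"]
        if cap >= demand:
            total += prob
    return total

q = qos(arcs, demand=5, time_limit=4)   # only the faster arc qualifies -> 0.9
```

Enumeration costs 2^n states for n components, which is exactly why the paper's decomposition into disjoint reliable/unreliable/stochastic subsets pays off on realistic networks.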
Procedia PDF Downloads 221
5532 Drying Kinetics, Energy Requirement, Bioactive Composition, and Mathematical Modeling of Allium Cepa Slices
Authors: Felix U. Asoiro, Meshack I. Simeon, Chinenye E. Azuka, Harami Solomon, Chukwuemeka J. Ohagwu
Abstract:
The drying kinetics, specific energy consumption (SEC), effective moisture diffusivity (EMD), and flavonoid, phenolic, and vitamin C contents of onion slices dried under convective oven drying (COD) were compared with microwave drying (MD). Drying was performed with onion slice thicknesses of 2, 4, 6, and 8 mm; air drying temperatures of 60, 80, and 100°C for COD; and a microwave power of 450 W for MD. A decrease in slice thickness and an increase in drying air temperature led to a drop in drying time. As thickness increased from 2 to 8 mm, EMD rose from 1.1 to 4.35 x 10⁻⁸ m² s⁻¹ at 60°C, 1.1 to 5.6 x 10⁻⁸ m² s⁻¹ at 80°C, and 1.25 to 6.12 x 10⁻⁸ m² s⁻¹ at 100°C, with MD treatments yielding the highest mean value (6.65 x 10⁻⁸ m² s⁻¹) at 8 mm. The maximum SEC for onion slices in COD was 238.27 kWh/kg H₂O (2 mm thickness) and the minimum was 39.4 kWh/kg H₂O (8 mm thickness), whereas during MD the maximum was 25.33 kWh/kg H₂O (8 mm thickness) and the minimum was 18.7 kWh/kg H₂O (2 mm thickness). MD treatment gave a significant (p < 0.05) increase in the flavonoid (39.42-64.4%), phenolic (38.0-46.84%), and vitamin C (3.7-4.23 mg 100 g⁻¹) contents, while COD treatment at 60°C and 100°C had positive effects on only the vitamin C and phenolic contents, respectively. In comparison, the Weibull model gave the overall best fit (highest R² = 0.999; lowest SSE = 0.0002, RMSE = 0.0123, and χ² = 0.0004) when drying 2 mm onion slices at 100°C.
Keywords: allium cepa, drying kinetics, specific energy consumption, flavonoid, vitamin C, microwave oven drying
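Fitting the Weibull thin-layer model MR(t) = exp(-(t/α)^β) can be sketched via its log-log linearisation; the α, β, and time grid below are invented for illustration, not the study's fitted values.

```python
import math

def fit_weibull_drying(times, mr):
    # Weibull thin-layer model MR(t) = exp(-(t / alpha)^beta),
    # linearised as ln(-ln MR) = beta*ln(t) - beta*ln(alpha) and
    # fitted by ordinary least squares on the transformed points.
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(m)) for m in mr]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = math.exp(mx - my / beta)   # from intercept = -beta*ln(alpha)
    return alpha, beta

# Synthetic moisture-ratio data from alpha = 50 min, beta = 1.3 (made up)
true_a, true_b = 50.0, 1.3
times = [10, 20, 40, 60, 90, 120]
mr = [math.exp(-(t / true_a) ** true_b) for t in times]
alpha, beta = fit_weibull_drying(times, mr)
```

On real drying curves the transformed points are only approximately linear, and goodness-of-fit statistics such as R², SSE, RMSE, and χ² (as reported above) decide between candidate models.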
Procedia PDF Downloads 137