Search results for: coverage probability
1521 Assortative Education and Working Arrangement among Married Couples in Indonesia
Authors: Ratu Khabiba, Qisha Quarina
Abstract:
This study aims to analyse the effect of married couples' assortative educational attainments on the division of economic activities within the household. It contributes to the literature on women's employment participation, especially among married women, by examining whether traditional values about gender roles in the household continue to shape married women's employment participation in Indonesia, despite women's increasing human capital through education. The study utilizes the Indonesian National Socioeconomic Survey (SUSENAS) 2016 and estimates the results using a multinomial logit model. Our results show that, compared to high-educated homogamous couples, educationally heterogamous couples, especially hypergamous ones, have a higher probability of being a single-worker household. Moreover, high-educated homogamous couples have the highest probability of being a dual-worker household. Thus, we find evidence that traditional values of gender-role segregation still play a significant role in married women's employment decisions in Indonesia, particularly for educationally heterogamous and low-educated homogamous couples.
Keywords: assortative education, dual-worker, hypergamy, homogamy, traditional values, women labor participation
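As a sketch of how the multinomial logit model above assigns work-arrangement probabilities, the following computes softmax choice probabilities. The category names, covariate, and coefficients are illustrative assumptions, not estimates from SUSENAS 2016:

```python
import math

def mnl_probabilities(x, betas):
    """Multinomial-logit choice probabilities: a softmax over linear indices.

    x     -- covariate vector (first entry 1.0 for the intercept)
    betas -- outcome name -> coefficient list (the base outcome is
             normalized to all-zero coefficients)
    """
    scores = {k: math.exp(sum(b * xi for b, xi in zip(beta, x)))
              for k, beta in betas.items()}
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

# Hypothetical coefficients: intercept + indicator "wife at least as educated"
betas = {
    "dual-worker":   [0.0, 0.0],     # base category
    "single-worker": [0.4, -0.6],
    "no-worker":     [-1.2, -0.3],
}
probs = mnl_probabilities([1.0, 1.0], betas)
```

The estimated coefficients determine, for each covariate profile, which household type is most probable.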
Procedia PDF Downloads 119
1520 A Hybrid Based Algorithm to Solve the Multi-objective Minimum Spanning Tree Problem
Authors: Boumesbah Asma, Chergui Mohamed El-amine
Abstract:
Since the multi-objective minimum spanning tree problem (MOST) has been shown to be NP-hard even with two criteria, we propose in this study a hybrid NSGA-II algorithm with an exact mutation operator, used only with low probability, to find an approximation of the Pareto front of the problem. In a connected graph G, a spanning tree T of G is a connected, cycle-free subgraph; if k edges of G\T are added to T, we obtain a partial graph H of G inducing a multi-objective spanning tree problem of reduced size compared to the initial one. With a small probability for the mutation operator, an exact method for solving the reduced MOST problem on the graph H is used to generate several mutated solutions from a spanning tree T. The selection operator of NSGA-II is then activated to obtain the Pareto front approximation. Finally, an adaptation of the VNS metaheuristic is applied to improve this front further; it finds good individuals that balance diversification and intensification during the optimization search. Experimental comparisons with an exact method show promising results and indicate that the proposed algorithm is efficient.
Keywords: minimum spanning tree, multiple objective linear optimization, combinatorial optimization, non-dominated sorting genetic algorithm, variable neighborhood search
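A minimal illustration of the bi-objective MST Pareto front that the hybrid NSGA-II approximates, computed here by brute-force enumeration on a toy graph (feasible only for tiny instances, akin to the reduced problems the exact mutation operator solves); the graph and its two edge costs are made up:

```python
from itertools import combinations

def spanning_trees(nodes, edges):
    """Yield all spanning trees: edge subsets of size n-1 with no cycle."""
    n = len(nodes)
    for subset in combinations(edges, n - 1):
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        ok = True
        for u, v, *_ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:          # adding this edge would close a cycle
                ok = False
                break
            parent[ru] = rv
        if ok:
            yield subset

def pareto_front(nodes, edges):
    """Non-dominated (cost1, cost2) vectors over all spanning trees."""
    points = set()
    for tree in spanning_trees(nodes, edges):
        points.add((sum(e[2] for e in tree), sum(e[3] for e in tree)))
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))

# Toy instance: edges are (u, v, cost1, cost2)
edges = [("a", "b", 1, 4), ("b", "c", 2, 3), ("a", "c", 3, 1),
         ("c", "d", 1, 2), ("b", "d", 4, 1)]
front = pareto_front(["a", "b", "c", "d"], edges)
```

NSGA-II earns its keep precisely where this enumeration becomes intractable, since the number of spanning trees grows exponentially with graph size.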
Procedia PDF Downloads 91
1519 The Trumping of Science: Exploratory Study into Discrepancy between Politician and Scientist Sources in American Covid-19 News Coverage
Authors: Wafa Unus
Abstract:
Science journalism has been vanishing from America's national newspapers for decades. Reportage on scientific topics is limited to only a handful of newspapers, and of those, few employ dedicated science journalists to cover stories that require this specialized expertise. News organizations' lack of readiness to convey complex scientific concepts to a mass populace becomes particularly problematic when events like the Covid-19 pandemic occur. The lack of coverage of Covid-19 prior to its onset in the United States suggests something more troubling: that the deprioritization of reporting on hard science as an educational tool, in favor of political frames of coverage, places dangerous blinders on the American public. This research examines the disparity between the voices of health and science experts and the voices of political figures in news articles, in order to better understand how American newspapers conveyed expert opinion on Covid-19. A content analysis of 300 articles on Covid-19 published by major newspapers in the United States between January 1st, 2020 and April 30th, 2020 illuminates this investigation. The Boston Globe, the New York Times, and the Los Angeles Times are included in the content analysis. Initial findings reveal a significant disparity between the number of articles that mention Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, and the number that reference political figures. Covid-related articles in the New York Times that focused on health topics (as opposed to economic or social issues) contained the voices of 54 different politicians, who were mentioned a total of 608 times; only five members of the scientific community were mentioned, a total of 24 times (out of 674 articles). In the Boston Globe, 36 different politicians were mentioned a total of 147 times, while only two members of the scientific community, one being Anthony Fauci, were mentioned a total of nine times (out of 423 articles).
In the Los Angeles Times, 52 different politicians were mentioned a total of 600 times, while only six members of the scientific community were included, mentioned a total of 82 times, with Fauci mentioned 48 times (out of 851 articles). The results provide a better understanding of the frames in which American journalists in Covid hotspots conveyed expert analysis of Covid-19 during one of the most pressing news events of the century. Ultimately, the objective of this study is to use the exploratory data to evaluate the nature, extent, and impact of Covid-19 reporting in the context of trustworthiness and scientific expertise. Secondarily, the data illuminate the degree to which Covid-19 reporting focused on politics over science.
Keywords: science reporting, science journalism, covid, misinformation, news
Procedia PDF Downloads 218
1518 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow quickly, pressure on spectrum utilization has risen steadily. Cognitive radio is an emerging technology designed to solve today's spectrum scarcity problem. To support spectrum reuse, secondary users are required to sense the radio frequency environment, and once primary users are found to be active, the secondary users must vacate the channel within a certain amount of time. Spectrum sensing is therefore of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. A primary user follows one of two types of traffic patterns: periodic or stochastic ON-OFF. A cognitive radio can learn the patterns in different channels over time. Two classification methods are discussed in this paper: edge detection and the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in real environments, as it can tolerate some amount of sensing errors.
Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)
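A minimal simulation of the energy detector described above, estimating the probability of detection (PD) and false alarm (PF) empirically; the threshold, sample count, and sinusoidal primary-user signal are illustrative assumptions:

```python
import math
import random

def energy_statistic(samples):
    """Average energy of the sensed samples."""
    return sum(s * s for s in samples) / len(samples)

def simulate(snr_db, n=200, trials=2000, threshold=1.3, seed=1):
    """Empirical PD and PF of an energy detector with unit-variance noise.

    The primary-user signal is modeled as a sinusoid whose amplitude is set
    from the requested SNR (signal power = amp^2 / 2)."""
    rng = random.Random(seed)
    amp = math.sqrt(2 * 10 ** (snr_db / 10))
    detect = alarm = 0
    for _ in range(trials):
        noise = [rng.gauss(0, 1) for _ in range(n)]
        if energy_statistic(noise) > threshold:
            alarm += 1                            # PU absent: false alarm
        signal = [amp * math.sin(0.1 * k) + rng.gauss(0, 1) for k in range(n)]
        if energy_statistic(signal) > threshold:
            detect += 1                           # PU present: detection
    return detect / trials, alarm / trials

pd, pf = simulate(snr_db=0)
```

In practice the threshold is chosen from a target PF (constant false alarm rate), and PD then degrades as the SNR falls.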
Procedia PDF Downloads 345
1517 Occupational Diseases in the Automotive Industry in Czechia
Authors: J. Jarolímek, P. Urban, P. Pavlínek, D. Dzúrová
Abstract:
Industry constitutes a dominant economic sector in Czechia, and the automotive industry is the most important industrial sector in terms of gross value added and the number of employees. The objective of this study was to analyse the occurrence of occupational diseases (OD) in the automotive industry in Czechia during the 2001-2014 period. Whereas the occurrence of OD in other sectors has generally been decreasing, it has been increasing in the automotive industry, with growing spatial discrepancies. Data on OD cases were retrieved from the National Registry of Occupational Diseases. Further, we conducted a survey in automotive companies with a focus on occupational health services and the companies' positions in global production networks (GPNs). An analysis of OD distribution in the automotive industry was performed (by age, gender, company size and role in GPNs, regional distribution of the studied companies, and regional unemployment rate), accompanied by an assessment of the quality and range of occupational health services. Employees older than 40 years had a nearly 2.5 times higher probability of OD occurrence than employees younger than 40 years (OR 2.41; 95% CI: 2.05-2.85). The probability of OD occurrence was 3 times higher for women than for men (OR 3.01; 95% CI: 2.55-3.55). The OD incidence rate increased with company size. An association between OD incidence and the unemployment rate was not confirmed.
Keywords: occupational diseases, automotive industry, health geography, unemployment
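The odds ratios quoted above (e.g. OR 2.41; 95% CI: 2.05-2.85) follow the standard 2x2-table calculation with a Wald confidence interval, sketched below; the counts used are hypothetical, not the registry data:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio of a 2x2 table with a Wald 95% confidence interval.

    a -- exposed cases,   b -- exposed non-cases,
    c -- unexposed cases, d -- unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (older workers vs. younger workers, OD vs. no OD):
or_, lo, hi = odds_ratio(60, 940, 30, 970)
```

An interval excluding 1 indicates a statistically significant association at the chosen confidence level.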
Procedia PDF Downloads 251
1516 Racial and Ethnic Health Disparities: An Investigation of the Relationship between Race, Ethnicity, Health Care Access, and Health Status
Authors: Dorcas Matowe
Abstract:
Inequality in health care for racial and ethnic minorities continues to be a growing concern for many Americans. Barriers hindering the elimination of health disparities include lack of insurance, socioeconomic status (SES), and racism. This study focuses on the association between health care access, which includes insurance coverage and frequency of doctor visits, race, ethnicity, and health status. The purpose of this study is to address the following questions: Is having health insurance associated with increased doctor visits? Are racial and ethnic minorities with health insurance more or less likely to see a doctor? Is the association between having health insurance and doctor visits moderated by being an ethnic minority? Given the current implications of the 2010 Affordable Care Act, this study highlights the need to prioritize health care access for minorities and confront institutional racism. Critical Race Theory (CRT) will demonstrate how racism has reinforced these health disparities. This quantitative study analyzes secondary data from the 2015 Behavioral Risk Factor Surveillance System (BRFSS) questionnaire, a telephone survey conducted annually in all 50 states and three US territories by state health departments in conjunction with the Centers for Disease Control and Prevention (CDC). Non-identifying health-related data are gathered annually from over 400,000 adults aged 18 years and above about their health status and use of preventive services. Through Structural Equation Modeling (SEM), the relationships between the predictor variables of health care access, race, and ethnicity, the criterion variable of health status, and the latent variables of emotional support and life satisfaction will be examined.
It is hypothesized that there will be an interaction between certain racial and ethnic minorities who went to see a doctor, had insurance coverage, experienced racism, and the quality of their health status, emotional support, and life satisfaction.
Keywords: ethnic minorities, health disparities, health access, racism
Procedia PDF Downloads 274
1515 The 10-year Risk of Major Osteoporotic and Hip Fractures Among Indonesian People Living with HIV
Authors: Iqbal Pramukti, Mamat Lukman, Hasniatisari Harun, Kusman Ibrahim
Abstract:
Introduction: People living with HIV have a higher risk of osteoporotic fracture than the general population. The purpose of this study was to predict the 10-year risk of fracture among people living with HIV (PLWH) using FRAX™ and to identify characteristics related to fracture risk. Methodology: This study consisted of 75 subjects. The ten-year probability of major osteoporotic fractures (MOF) and hip fractures (HF) was assessed using the FRAX™ algorithm. Cross-tabulation was used to identify participant characteristics related to fracture risk. Results: The overall mean 10-year probability of fracture was 2.4% (1.7) for MOF and 0.4% (0.3) for hip fractures. For the MOF score, participants with a parental hip fracture history, smoking behavior, or glucocorticoid use showed a higher score than those without (3.1 vs. 2.5; 4.6 vs. 2.5; and 3.4 vs. 2.5, respectively). For the HF score, participants with a parental hip fracture history, smoking behavior, or glucocorticoid use likewise showed a higher score than those without (0.5 vs. 0.3; 0.8 vs. 0.3; and 0.5 vs. 0.3, respectively). Conclusions: The 10-year risk of fracture was higher among PLWH with several factors, including parental hip fracture history, smoking behavior, and glucocorticoid use. Further analysis of the determining factors using multivariate regression with a larger sample size is required to confirm the factors associated with high fracture risk.
Keywords: HIV, PLWH, osteoporotic fractures, hip fractures, 10-year risk of fracture, FRAX
Procedia PDF Downloads 49
1514 Dosimetric Comparison of Conventional Optimization Methods with Inverse Planning Simulated Annealing Technique
Authors: Shraddha Srivastava, N. K. Painuly, S. P. Mishra, Navin Singh, Muhsin Punchankandy, Kirti Srivastava, M. L. B. Bhatt
Abstract:
Various optimization methods used in interstitial brachytherapy are based on altering dwell positions and dwell weights to produce a dose distribution based on the implant geometry. Since these optimization schemes are not anatomy-based, they can lead to deviations from the desired plan. This study was therefore carried out to compare the anatomy-based Inverse Planning Simulated Annealing (IPSA) optimization technique with graphical and geometrical optimization methods in interstitial high dose rate (HDR) brachytherapy planning of cervical carcinoma. Six patients with 12 CT data sets of MUPIT implants in HDR brachytherapy of cervical cancer were prospectively studied. HR-CTV and organs at risk (OARs) were contoured in the Oncentra treatment planning system (TPS) using the GYN GEC-ESTRO guidelines on cervical carcinoma. Three sets of plans were generated for each fraction using IPSA, graphical optimization (GrOPT) and geometrical optimization (GOPT). All patients were treated to a dose of 20 Gy in 2 fractions. The main objective was to cover at least 95% of the HR-CTV with 100% of the prescribed dose (V100 ≥ 95% of HR-CTV). IPSA-, GrOPT- and GOPT-based plans were compared in terms of target coverage, OAR doses, homogeneity index (HI) and conformity index (COIN) using dose-volume histograms (DVH). Target volume coverage (mean V100) was found to be 93.98 ± 0.87%, 91.34 ± 1.02% and 85.05 ± 2.84% for IPSA, GrOPT and GOPT plans respectively. Mean D90 (minimum dose received by 90% of the HR-CTV) values for IPSA, GrOPT and GOPT plans were 10.19 ± 1.07 Gy, 10.17 ± 0.12 Gy and 7.99 ± 1.0 Gy respectively, while D100 (minimum dose received by 100% of the HR-CTV volume) for IPSA, GrOPT and GOPT plans was 6.55 ± 0.85 Gy, 6.55 ± 0.65 Gy and 4.73 ± 0.14 Gy respectively. IPSA plans resulted in lower doses to the bladder (D₂
Keywords: cervical cancer, HDR brachytherapy, IPSA, MUPIT
Procedia PDF Downloads 188
1513 The Future of Insurance: P2P Innovation versus Traditional Business Model
Authors: Ivan Sosa Gomez
Abstract:
Digitalization has impacted the entire insurance value chain, and the growing movement towards P2P platforms and the collaborative economy is also beginning to have a significant impact. P2P insurance is defined as an innovation enabling policyholders to pool their capital, self-organize, and self-manage their own insurance. In this context, new InsurTech start-ups are emerging as peer-to-peer (P2P) providers, based on a model that differs from traditional insurance. As a result, although P2P platforms do not change the fundamental basis of insurance, they do enable potentially more efficient business models in terms of ensuring the coverage of risk. It is therefore relevant to determine whether P2P innovation can have substantial effects on the future of the insurance sector. For this purpose, it is necessary to develop P2P innovation from a business perspective, as well as to build a comparison between a traditional model and a P2P model from an actuarial perspective. Objectives: The objectives are (1) to represent P2P innovation in the business model compared to the traditional insurance model and (2) to establish a comparison between a traditional model and a P2P model from an actuarial perspective. Methodology: The research design is defined as action research, in terms of understanding and solving the problems of a collectivity linked to an environment, applying theory and best practices. The study is carried out through the participatory variant, which involves the collaboration of the participants, who are considered experts in this design. Prolonged immersion in the field is the main instrument for data collection. Finally, an actuarial model for the calculation of premiums is developed that allows projections of future scenarios and the generation of conclusions between the two models.
Main Contributions: From an actuarial and business perspective, we aim to contribute by developing a comparison of the two models in the coverage of risk, in order to determine whether P2P innovation can have substantial effects on the future of the insurance sector.
Keywords: Insurtech, innovation, business model, P2P, insurance
Procedia PDF Downloads 93
1512 An Empirical Investigation into the Effect of Macroeconomic Policy on Economic Growth in Nigeria
Authors: Rakiya Abba
Abstract:
This paper investigates the effect of the money supply, exchange rate, and interest rate on economic growth in Nigeria through the application of the Augmented Dickey-Fuller technique to test the unit root property of the series, and the Granger causality test of causation between GDP, money supply (M2), exchange rate (EXR), and interest rate (IR). The unit root results suggest that all the variables in the model are stationary at the 1, 5 and 10 percent levels of significance. The causality results suggest that money supply and exchange rate Granger-cause IR, and further reveal two-way causation between M2 and EXR. IR Granger-causes GDP, so the null hypothesis is rejected, while GDP does not Granger-cause IR, as indicated by a probability value of 0.4805 and confirmed by an F-statistic of 0.75483. The results also reveal that M2 and EXR do not Granger-cause GDP; the null hypothesis is accepted at 75 percent and 18 percent respectively, as indicated by probability values of 0.7472 and 0.1830; likewise, GDP does not Granger-cause M2 and EXR. The Johansen cointegration result indicates that, although GDP does not Granger-cause M2, IR, and EXR, there exists one cointegrating equation, implying a long-run relationship between GDP, M2, IR, and EXR. A major policy implication of this result is that economic growth is a function of money supply and exchange rate; effective monetary policies should be directed at manipulating these instruments, and the justification for adopting a particular policy should be rationalized in order to increase growth in the economy.
Keywords: economic growth, money supply, interest rate, exchange rate, causality
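A bare-bones sketch of the Granger causality F-test used above, with a single lag and ordinary least squares solved via normal equations; the synthetic series (in which x drives y) is an illustrative assumption, not the Nigerian data:

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit y ~ X (small dense system)."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # Gauss-Jordan elimination
        piv = xtx[i][i]
        xtx[i] = [v / piv for v in xtx[i]]
        xty[i] /= piv
        for r in range(k):
            if r != i:
                f = xtx[r][i]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] -= f * xty[i]
    beta = xty
    return sum((yi - sum(b * xij for b, xij in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic for 'x Granger-causes y' with one lag: compare the
    restricted model y_t ~ y_{t-1} against y_t ~ y_{t-1} + x_{t-1}."""
    rows = range(1, len(y))
    Xr = [[1.0, y[t - 1]] for t in rows]
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in rows]
    yt = y[1:]
    rss_r, rss_u = ols_rss(Xr, yt), ols_rss(Xu, yt)
    n = len(yt)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Synthetic series where lagged x clearly drives y
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.gauss(0, 0.5))
f_stat = granger_f(x, y)
```

A large F-statistic (compared with the F(1, n-3) critical value) rejects the null of no Granger causality, which mirrors the probability values reported in the abstract.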
Procedia PDF Downloads 269
1511 Is More Inclusive More Effective? The 'New Style' Public Distribution System in India
Authors: Avinash Kishore, Suman Chakrabarti
Abstract:
In September 2013, the parliament of India enacted the National Food Security Act (NFSA), which entitles two-thirds of India's population to five kilograms of rice, wheat or coarse cereals per person per month at one to three rupees per kilogram. Five states in India—Andhra Pradesh, Chhattisgarh, Tamil Nadu, Odisha and West Bengal—had already implemented somewhat similar changes in the Targeted Public Distribution System (TPDS) a few years earlier using their own budgetary resources. They made rice—coincidentally, all five states are predominantly rice-eating—available in fair price shops to a majority of their population at very low prices (less than Rs.3/kg). This paper tries to account for the changes in household consumption patterns associated with the change in TPDS policy in these states, using data from household consumption surveys by the National Sample Survey Organization (NSSO). NSS data show improvement in the coverage of TPDS and in the average off-take of grains from fair price shops between 2004-05 and 2009-10 across all states of India. However, the increase in coverage and off-take was significantly higher in four of these five states than in the rest of India. An average household in these states purchased three kilograms more rice per month from fair price shops than its counterpart in non-treated states as a result of more generous TPDS policies backed by administrative reforms. The increase in consumption of PDS rice was highest in Chhattisgarh, the poster state of PDS reforms. Households in Chhattisgarh used the money saved on rice to spend more on pulses, edible oil, vegetables, sugar and other non-food items. We also find evidence that making TPDS more inclusive and more generous is not enough unless it is supported by administrative reforms to improve grain delivery and control diversion to open markets.
Keywords: public distribution system, social safety-net, national food security act, diet quality, Chhattisgarh
Procedia PDF Downloads 375
1510 Democratic Political Culture of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok
Authors: Vilasinee Jintalikhitdee, Phusit Phukamchanoad, Sakapas Saengchai
Abstract:
This research aims to study the level of democratic political culture and the factors that affect the democratic political culture of 5th and 6th graders under the authority of Dusit District Office, Bangkok. Stratified sampling was used for probability sampling and purposive sampling for non-probability sampling; data were collected through questionnaires distributed to 300 respondents, covering all of the schools under the authority of Dusit District Office. The researcher analyzed the data using descriptive statistics, including the arithmetic mean and standard deviation, and inferential statistics, namely the Independent Samples T-test and One-Way ANOVA (F-test). The researcher also collected data by interviewing the target groups and analyzed these data using descriptive analysis. The results show that 5th and 6th graders under the authority of Dusit District Office, Bangkok have been exposed to democratic political culture at a high level overall. Considering each part, the statement with the highest mean was "the constitutional democratic governmental system is suitable for Thailand", and the statement with the lowest mean was "corruption (cheating and fraud) is normal in Thai society". The factors that affect democratic political culture are grade level, mothers' occupations, and attention to news and political movements.
Keywords: democratic, political culture, political movements, democratic governmental system
Procedia PDF Downloads 266
1509 Offenders and Victims in Public Focus: Media Coverage about Crime and Its Consequences
Authors: Melanie Verhovnik
Abstract:
Media shape the image of crime and people's beliefs, attitudes, and sometimes behaviors. Media not only give the impression that crime is increasing; they also suggest that very violent crime is more common than it actually is. It is no wonder that people are more afraid of being involved in a crime committed by strangers than one committed by somebody they know, because this is the media construct. With the help of three case studies, the paper analyzes how media frame crime and criminals and gives valuable hints as to what better reporting could look like.
Keywords: court reporting, offenders in media, quantitative content analysis, victims in media
Procedia PDF Downloads 386
1508 Failure Probability Assessment of Concrete Spherical Domes Subjected to Ventilation Controlled Fires Using BIM Tools
Authors: A. T. Kassem
Abstract:
Fire is a common hazard that any building may face. Most buildings' structural elements are designed with fire-safety precautions using deterministic design approaches. Public and highly important buildings are commonly designed for a standard fire rating and, in many cases, contain large compartments with central domes. Real fire scenarios are not commonly brought into the structural design of buildings because of complexities in both the scenarios and the analysis tools. This paper presents a modern approach to the analysis of spherical domes in real fire conditions via the implementation of building information modelling (BIM), adopting a probabilistic approach. BIM has been implemented to bridge the gap between various software packages, enabling them to function interactively to model both the real fire and the corresponding structural response. Ventilation-controlled fire scenarios have been modeled using both Revit and PyroSim. Monte Carlo simulation has been adopted as the probabilistic analysis approach for dealing with the various parameters. Conclusions regarding failure probability and fire endurance, in addition to the effects of various parameters, have been extracted.
Keywords: concrete, spherical domes, ventilation controlled fires, BIM, Monte Carlo simulation, PyroSim, Revit
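The Monte Carlo step can be sketched as follows: sample uncertain capacity and fire-induced demand, and count limit-state violations. The distributions and parameters are illustrative placeholders, not values calibrated to any dome or fire scenario:

```python
import random

def failure_probability(trials=100_000, seed=42):
    """Monte Carlo estimate of P(demand > capacity) for a limit state
    g = R - S, with lognormal capacity R and normal fire-induced demand S.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        capacity = rng.lognormvariate(3.0, 0.2)   # structural resistance R
        demand = rng.gauss(12.0, 3.0)             # fire-induced effect S
        if demand > capacity:                     # limit state g < 0
            failures += 1
    return failures / trials

pf = failure_probability()
```

The estimator converges at rate O(1/sqrt(trials)), so rare failure events require many samples (or variance-reduction techniques such as importance sampling).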
Procedia PDF Downloads 96
1507 Impact of Violence against Women on Small and Medium Enterprises (SMEs) in Rural Sindh: A Case Study of Kandhkot
Authors: Mohammad Shoaib Khan, Abdul Sattar Bahalkani
Abstract:
This research investigates violence against women and its impact on SMEs in Sindh. The main objective is to examine women's empowerment through women's participation in small and medium enterprises in upper Sindh. Data were collected from 500 respondents from Kandhkot District using a simple random technique. A structured questionnaire was designed as the instrument for measuring the impact of SME business on women's empowerment in rural Sindh. It was revealed that rural women are less confident and that their husbands give them a hard time once they expose themselves outside the boundaries of the house. It was also revealed that rural women make a major contribution to social, economic, and political development. It was further revealed that, due to the non-availability of market facilities, women are paid low wages. The negative impacts of husbands' income and of having children aged 0-6 years are also significant. High income of other household members raises the reservation wage of mothers, thus lowering the probability of participation when the objective of working is to help with the family's financial needs. The impact of childcare on mothers' labor force participation is significant, but not as the theory predicted. The probability of participation in the labor force is significantly higher for women who live in urban areas, where job opportunities are greater than in rural areas.
Keywords: empowerment, violence against women, SMEs, rural
Procedia PDF Downloads 332
1506 Merging Appeal to Ignorance, Composition, and Division Argument Schemes with Bayesian Networks
Authors: Kong Ngai Pei
Abstract:
The argument scheme approach to argumentation has two components: one is to identify the recurrent patterns of inference used in everyday discourse; the second is to devise critical questions to evaluate the inferences in these patterns. Although this approach is intuitive and contains many insightful ideas, it is not free of problems. One is that, because it disavows the probability calculus, it cannot give the exact strength of an inference. To tackle this problem, thereby paving the way to a more complete normative account of argument strength, it has been proposed that the most promising way forward is to combine the scheme-based approach with Bayesian networks (BNs). This paper pursues this line of thought, attempting to combine three common schemes, Appeal to Ignorance, Composition, and Division, with BNs. In the first part, it is argued that most (if not all) formulations of the critical questions corresponding to these schemes in the current argumentation literature are incomplete and not very informative. To remedy these flaws, more thorough and precise formulations of these questions are provided. The second part shows how to use graphical idioms (e.g. measurement and synthesis idioms) to translate the schemes, as well as their corresponding critical questions, into the graphical structure of BNs, and how to define the probability tables of the nodes using functions of various sorts. In the final part, it is argued that many misuses of these schemes, traditionally called fallacies with the same names as the schemes, can be adequately accounted for by the BN models proposed in this paper.
Keywords: appeal to ignorance, argument schemes, Bayesian networks, composition, division
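A two-node Bayesian network sketch of the Appeal to Ignorance scheme: the posterior of hypothesis H after a search finds no evidence depends on the search's thoroughness, P(Found | H). The priors and likelihoods are illustrative assumptions, not values from the paper:

```python
def posterior_given_no_evidence(prior_h, p_find_if_h, p_find_if_not_h=0.0):
    """Posterior P(H | evidence not found) by Bayes' rule, i.e. exact
    inference in a two-node network H -> Found.

    p_find_if_h -- search thoroughness P(Found | H): how likely evidence
                   for H would have been found if H were true."""
    p_nf_h = (1 - p_find_if_h) * prior_h              # H true, nothing found
    p_nf_not_h = (1 - p_find_if_not_h) * (1 - prior_h)
    return p_nf_h / (p_nf_h + p_nf_not_h)

# A thorough search that finds nothing should lower belief in H sharply;
# a cursory search barely moves it.
thorough = posterior_given_no_evidence(prior_h=0.5, p_find_if_h=0.95)
cursory = posterior_given_no_evidence(prior_h=0.5, p_find_if_h=0.10)
```

This captures the scheme's normative core: "absence of evidence" carries weight only in proportion to how hard the evidence was sought, which is exactly the quantity a critical question should probe.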
Procedia PDF Downloads 288
1505 Feasibility Study of Wind Energy Potential in Turkey: Case Study of Catalca District in Istanbul
Authors: Mohammed Wadi, Bedri Kekezoglu, Mustafa Baysal, Mehmet Rida Tur, Abdulfetah Shobole
Abstract:
This paper presents a technical evaluation of the wind potential for present and future investments in Turkey, taking into account the feasibility of sites, installation, operation, and maintenance. The evaluation is based on hourly wind speed data measured at 30 m height over the three years 2008-2010 for the Çatalca district. These data, obtained from the national meteorology station in Istanbul, Turkey, are analyzed in order to evaluate the feasibility of the wind power potential and to ensure an optimal selection of wind turbines for the area of interest. Furthermore, the data are extrapolated to 60 m and 80 m, accounting for the variability of the roughness factor. The two-parameter Weibull probability function is used to approximate the monthly and annual wind potential and power density, based on three calculation methods: the approximated method, the graphical method, and the energy pattern factor method. The annual mean wind power densities were found to be 400.31, 540.08 and 611.02 W/m² at 30, 60, and 80 m heights respectively. Simulation results prove that the analyzed area is an appropriate place for constructing large-scale wind farms.
Keywords: wind potential in Turkey, Weibull bi-parameter probability function, the approximated method, the graphical method, the energy pattern factor method, capacity factor
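The energy pattern factor method mentioned above can be sketched as follows, using the common Akdag-Dinler estimator k = 1 + 3.69/EPF² and the Weibull mean power density 0.5·ρ·c³·Γ(1 + 3/k); the wind speeds below are illustrative, not the Çatalca measurements:

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def weibull_from_epf(speeds):
    """Estimate Weibull shape k and scale c by the energy pattern
    factor method (EPF = mean(v^3) / mean(v)^3; k = 1 + 3.69/EPF^2)."""
    vmean = sum(speeds) / len(speeds)
    epf = (sum(v ** 3 for v in speeds) / len(speeds)) / vmean ** 3
    k = 1 + 3.69 / epf ** 2
    c = vmean / math.gamma(1 + 1 / k)   # from E[v] = c * Gamma(1 + 1/k)
    return k, c

def mean_power_density(k, c):
    """Mean wind power density (W/m^2) of a Weibull(k, c) wind regime."""
    return 0.5 * RHO * c ** 3 * math.gamma(1 + 3 / k)

# Illustrative hourly wind speeds in m/s:
speeds = [4.2, 6.1, 5.5, 7.8, 3.9, 6.6, 8.4, 5.0, 7.1, 6.3]
k, c = weibull_from_epf(speeds)
power_density = mean_power_density(k, c)
```

The approximated (moment) and graphical (least-squares on the log-transformed CDF) methods estimate the same two parameters from different statistics of the sample.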
Procedia PDF Downloads 259
1504 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight which is the travel time for per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that source can successfully transport the demand requirement to destination while the total transmission time is under the travel time limitation. This work is pioneering, since existing literatures that evaluate travel time reliability via a single optimization path, the proposed QoS focuses the performance of the whole network system. To compute the QoS of transportation networks, we first transfer the extended network model into an equivalent min-cost max-flow network model. In the transferred network, each arc has a new travel time weight which takes value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. 
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods. Keywords: quality of service, reliability, transportation network, travel time
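The per-state feasibility check underlying the decomposition (can a fully specified state vector route the demand within the travel-time budget?) can be sketched with an off-the-shelf min-cost max-flow solver. The sketch below uses Python's networkx; the arc list, node names, and the super-source construction are illustrative assumptions, not the authors' implementation:

```python
import networkx as nx

def state_is_reliable(up_arcs, source, sink, demand, time_limit):
    """Check one fully specified state vector: can `demand` units be routed
    from source to sink over the operating components while the total
    transmission time (sum of flow * travel time) stays within the limit?

    up_arcs: list of (u, v, capacity, travel_time) for components that are
    'up' in this state; failed components are simply omitted.
    """
    G = nx.DiGraph()
    for u, v, cap, t in up_arcs:
        G.add_edge(u, v, capacity=cap, weight=t)
    # Cap the flow at the demand with a super-source arc of zero travel time.
    G.add_edge("SUPER", source, capacity=demand, weight=0)
    flow = nx.max_flow_min_cost(G, "SUPER", sink)
    shipped = sum(flow["SUPER"].values())
    if shipped < demand:
        return False  # not enough operating capacity in this state
    # Total transmission time of the fastest feasible routing.
    return nx.cost_of_flow(G, flow) <= time_limit
```

For example, with arcs s→a (capacity 5, time 1) and a→t (capacity 5, time 2), shipping 3 units takes a total transmission time of 3·1 + 3·2 = 9, so the state is reliable for a limit of 9 but not 8.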
Procedia PDF Downloads 222
1503 Statistical Analysis of Extreme Flow (Regions of Chlef)
Authors: Bouthiba Amina
Abstract:
Estimating precipitation statistics is a broad domain that poses numerous challenges to meteorologists and hydrologists. Sometimes it is necessary to approximate the value of extreme events, and their return periods, for sites with little or no data. The search for a frequency model of daily rainfall depths is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amount of precipitation in past years. The best-known and most common approach is the statistical one. It consists of looking for the probability law that best fits the observed values of the random variable "daily maximum rainfall", after comparing various probability laws and estimation methods by means of goodness-of-fit tests. A frequency analysis of the annual series of daily maximum rainfall was therefore carried out on data from 54 rain-gauge stations of the upper and middle basin. Five laws usually applied to the study and analysis of maximum daily rainfall were considered. The chosen period is 1970 to 2013, and it was used to forecast quantiles. The laws used are the three-parameter generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and Log-Pearson III laws. In Algeria, Gumbel's law has long been used to estimate the quantiles of maximum flows; here we check this practice and choose the most reliable law. Keywords: return period, extreme flow, statistical laws, Gumbel, estimation
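As a minimal illustration of the quantile-forecasting step, the two-parameter Gumbel law can be fitted to one station's annual maxima and a return-period quantile read off, e.g. with scipy; the data below are synthetic, not from the Chlef stations:

```python
import numpy as np
from scipy import stats

# Hypothetical annual maxima of daily rainfall (mm) for one station,
# drawn here from a known Gumbel law so the fit can be sanity-checked.
annual_max = stats.gumbel_r.rvs(loc=40.0, scale=12.0, size=44, random_state=42)

# Fit the two-parameter Gumbel (extreme value type I) law by maximum likelihood.
loc, scale = stats.gumbel_r.fit(annual_max)

# Quantile for a T-year return period: non-exceedance probability 1 - 1/T.
T = 100
x_T = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)

# Equivalent closed form for the Gumbel law: x_T = loc - scale*ln(-ln(1 - 1/T))
x_T_closed = loc - scale * np.log(-np.log(1 - 1 / T))
```

The same fit-then-invert recipe applies to the other candidate laws, with a goodness-of-fit test deciding between them.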
Procedia PDF Downloads 79
1502 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System that Includes Servers with Various Capacities
Authors: Yoshiaki Shikata, Nobutane Hanayama
Abstract:
We present a prioritized, limited multi-server processor sharing (PS) system in which each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests to be processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Utilizing a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is identified. Keywords: processor sharing, multi-server, various capacity, N-priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation
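The per-request rate bookkeeping described above can be sketched as follows, assuming a discriminatory processor-sharing rule in which each class-i request receives a share of the server capacity proportional to its priority ratio (a common convention; the paper's exact rule may differ):

```python
def service_rates(counts, weights, capacity=1.0):
    """Per-request service rate in a discriminatory processor-sharing server.

    counts[i]  -- number of in-service requests of class i
    weights[i] -- priority ratio of class i
    A class-i request receives capacity * w_i / sum_j(n_j * w_j), so at every
    arrival or departure the remaining sojourn times stretch or shrink
    according to the new class counts.
    """
    total = sum(n * w for n, w in zip(counts, weights))
    if total == 0:
        return [0.0 for _ in counts]
    return [capacity * w / total for w in weights]
```

For example, with two classes, counts (2, 1) and priority ratios (2, 1), class-1 requests each receive rate 0.4 and the class-2 request receives 0.2, and the rates over all three in-service requests sum to the full capacity.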
Procedia PDF Downloads 332
1501 Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations
Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam
Abstract:
When acid is pumped into damaged reservoirs for damage removal/stimulation, the inflow of acid into the formation is distorted, because acid preferentially travels into highly permeable regions over low-permeability regions or, in general, into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to carry out an effective placement of acid. Diversion is, desirably, a reversible technique of temporarily reducing the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), and foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents. Keywords: diversion, reservoir, zonal coverage, carbonate, sandstone
Procedia PDF Downloads 432
1500 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single factor of safety. Although it is known that geotechnical parameters can show great dispersion, in such analyses they are treated as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, an essential quantity in the calculation of risk (probability multiplied by the consequence of the event). Among probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates), and Monte Carlo. This paper presents a comparison between the results of deterministic and probabilistic analyses (FOSM, Monte Carlo, and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and to carry out a consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noticed that calculating the risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating uncertainty in feasibility, design, construction, operation, and closure by means of risk management. Keywords: probabilistic methods, risk assessment, risk management, slope stability
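As a toy illustration of the deterministic/probabilistic contrast, one can propagate random cohesion and friction angle through a simple infinite-slope factor-of-safety formula and estimate the probability of failure by Monte Carlo, together with an FOSM-style reliability index from the simulated moments. The formula and all parameter values below are illustrative, not the paper's hypothetical slope:

```python
import numpy as np

def fs_infinite_slope(c, phi_deg, gamma=18.0, H=5.0, beta_deg=30.0):
    """Toy factor of safety for an infinite slope (illustrative only).
    c: cohesion (kPa), phi_deg: friction angle (deg), gamma: unit weight
    (kN/m^3), H: depth (m), beta_deg: slope angle (deg)."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    return (c + gamma * H * np.cos(beta) ** 2 * np.tan(phi)) / (
        gamma * H * np.sin(beta) * np.cos(beta)
    )

# Random variables: cohesion c and friction angle phi (normal, for illustration).
rng = np.random.default_rng(0)
n = 200_000
c = rng.normal(10.0, 2.0, n)
phi = rng.normal(30.0, 3.0, n)

fs = fs_infinite_slope(c, phi)
pf_mc = float(np.mean(fs < 1.0))          # Monte Carlo probability of failure

# FOSM-style reliability index from the simulated first two moments of FS.
beta_idx = float((fs.mean() - 1.0) / fs.std())
```

A deterministic check with the mean parameters gives a single FS of about 1.26 (apparently safe), while the simulation still reports a probability of failure of a few percent, which is exactly the information the single safety factor hides.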
Procedia PDF Downloads 392
1499 Hybrid Renewable Energy System Development Towards Autonomous Operation: The Deployment Potential in Greece
Authors: Afroditi Zamanidou, Dionysios Giannakopoulos, Konstantinos Manolitsis
Abstract:
A notable share of electrical energy demand in many countries worldwide goes to the lighting of roads, squares, and other public spaces. Renewable energy can contribute significantly to covering the electrical energy demand for public lighting. This paper focuses on the sizing and design of a hybrid energy system (HES) exploiting the solar-wind energy potential to meet the electrical energy needs of lighting roads, squares, and other public spaces. Moreover, the proposed HES covers the electrical energy demand of a Wi-Fi hotspot and a charging hotspot for end-users. Alongside the sizing of the energy production system of the proposed HES, a storage system is added and sized in order to ensure a reliable supply without interruptions. Multiple energy consumption scenarios are assumed and applied in order to optimize the sizing of the energy production system and the energy storage system. A database with meteorological prediction data for 51 areas in Greece is developed in order to assess the possible deployment of the proposed HES. Since detailed meteorological prediction data are available for all 51 areas under investigation, their use is evaluated by comparing them to real meteorological data. The meteorological prediction data are exploited to form three hourly production profiles for each area for every month of the year: minimum, average, and maximum energy production. The energy production profiles are combined with the energy consumption scenarios, and the sizing results of the energy production system and the energy storage system are extracted and presented for every area.
Finally, the economic performance of the proposed HES, in terms of the levelized cost of energy, is estimated by calculating and assessing construction, operation, and maintenance costs. Keywords: energy production system sizing, Greece’s deployment potential, meteorological prediction data, wind-solar hybrid energy system, levelized cost of energy
Procedia PDF Downloads 156
1498 Timing and Probability of Presurgical Teledermatology: Survival Analysis
Authors: Felipa de Mello-Sampayo
Abstract:
The aim of this study is to examine, from the patient's perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The dynamic stochastic model's main value-added consists of its concrete application to patients waiting for dermatological surgical intervention. Patients with low health-level uncertainty must use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The results of the model were then tested empirically with the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Health-level volatility can be understood as the hazard of developing skin cancer, and the trend of health level as the bias of developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology: it depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur. Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis
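The survival-analysis machinery referred to here can be illustrated with a hand-rolled Kaplan-Meier estimator, treating "time until teledermatology use" as the duration and censoring patients who never used it within the observation window. This is a generic sketch, not the study's actual estimation procedure:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- observed durations (e.g., wait until teledermatology use)
    events -- 1 if the event was observed, 0 if the patient was censored
    Returns (distinct event times, survival probability after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_uniq:
        at_risk = np.sum(times >= t)            # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / at_risk                  # product-limit update
        surv.append(s)
    return t_uniq, np.array(surv)
```

With no censoring and one event at each of times 1..5, the curve steps down by 1/5 at each event time; covariate effects like health volatility would then be handled by a regression model (e.g., Cox) on top of such durations.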
Procedia PDF Downloads 129
1497 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction
Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl
Abstract:
Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring at the molecular level under dynamic conditions. Hence, developing new in situ characterization techniques with the potential to be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate it with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria will serve as our example. Furthermore, in situ Transmission Electron Microscopy (TEM) is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For real in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH).
The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas the Cl coverage correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase in the Cl uptake was found. After considering all previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalyzed reaction is proposed: the chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics. Keywords: CeO2, Deacon process, in situ PGAA, in situ TEM, in situ FTIR
Procedia PDF Downloads 292
1496 Quality is the Matter of All
Authors: Mohamed Hamza, Alex Ohoussou
Abstract:
At JAWDA, our primary focus is on ensuring the satisfaction of our clients worldwide. We are committed to delivering new features on our SaaS platform as quickly as possible while maintaining high quality standards. In this paper, we highlight two key aspects of testing that represent an evolution of current methods and a potential trend for the future, which have enabled us to uphold our commitment effectively. These aspects are "One Sandbox per Pull Request" (dynamic test environments instead of static ones) and "QA for All". Keywords: QA for all, dynamic sandboxes, QAOPS, CICD, continuous testing, all testers, QA matters for all, 1 sandbox per PR, utilization rate, coverage rate
Procedia PDF Downloads 34
1495 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, total late-delivered items, and total rejected items are less than or equal to pre-determined values given by the decision maker. The above stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the given solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficient of variation of a stochastic parameter, the more the value of the objective function in the stochastic single-objective programming problem deteriorates. Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm
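The simulation step inside the genetic algorithm's fitness evaluation can be sketched as follows: for a candidate order allocation, sample the stochastic unit prices and estimate the probability that the total purchasing cost stays within a budget (one of the dependent-chance objectives). The normal price distributions and all parameter names below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def chance_fitness(order_qty, price_mu, price_sd, budget, n_sim=50_000, seed=1):
    """Estimate, by simulation, the probability that the total purchasing
    cost of allocation `order_qty` stays within `budget` (a dependent-chance
    objective). Unit prices are sampled per supplier; any distribution could
    be plugged in -- normal is used here only for illustration.
    """
    rng = np.random.default_rng(seed)
    prices = rng.normal(price_mu, price_sd, size=(n_sim, len(order_qty)))
    total_cost = prices @ np.asarray(order_qty, dtype=float)
    return float(np.mean(total_cost <= budget))
```

A GA would call such a function for each chromosome (allocation vector), so the Monte Carlo estimate directly plays the role of the fitness value; the delay and rejection objectives follow the same pattern.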
Procedia PDF Downloads 313
1494 The Effect of Training and Development Practice on Employees’ Performance
Authors: Sifen Abreham
Abstract:
Employees are resources in organizations; as such, they need to be trained and developed properly to achieve an organization's goals and expectations. The initial development of the human resource management concept is based on the effective utilization of people, treating them as resources, leading to the realization of business strategies and organizational objectives. The study aimed to assess the effect of training and development practices on employee performance. The researcher used an explanatory research design, which helps to explain, understand, and predict the relationship between variables. To collect the data from the respondents, the study used probability sampling, specifically stratified random sampling, which divides the entire population into homogeneous groups. The results were analyzed and presented using the Statistical Package for the Social Sciences (SPSS) version 26. The major finding of the study was that training has an impact on employees' job performance in achieving organizational objectives. The district has a training and development policy and procedure, but it is not actively applied and is not suitable; the district is advised to reform this policy and procedure and apply it actively. The district provides training for the majority of its employees, but most of the time the training is theoretical; the district is advised to use practical training methods to see positive change. The district evaluates employees after they take training and development, but the results are not encouraging; the district is advised to assess employees' skill gaps and fill those gaps. The district has a budget, but it is not adequate; the district is advised to strengthen its financial ground. Keywords: training, development, employees, performance, policy
Procedia PDF Downloads 61
1493 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis
Authors: Serdal Pamuk, Irem Cay
Abstract:
Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension over the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via some boundary conditions to move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question, we estimate the thresholds that activate the transport equations in the capillary. We do this by using a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte, and macrophage cells begin to move into the ECM for the initiation of angiogenesis. We believe that our results play an important role in understanding the mechanisms of cell migration, which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time tendency of these three cell types and find that they tend to the transition probability functions as time evolves. We provide our numerical solutions, which are in good agreement with our theoretical results. Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function
Procedia PDF Downloads 157
1492 Dosimetric Dependence on the Collimator Angle in Prostate Volumetric Modulated Arc Therapy
Authors: Muhammad Isa Khan, Jalil Ur Rehman, Muhammad Afzal Khan Rao, James Chow
Abstract:
Purpose: This study investigates the dose-volume variations in the planning target volume (PTV) and organs-at-risk (OARs) using different collimator angles for SmartArc prostate volumetric modulated arc therapy (VMAT). Awareness of the collimator angle for PTV coverage and OAR sparing is essential for the planner, because the optimization involves numerous treatment constraints, producing a complex, unstable, and computationally challenging search for an optimal plan in a reasonable time. Materials and Methods: Single-arc VMAT plans with the collimator angle varied systematically (0°-90°) were performed on a Harold phantom, and a new treatment plan was optimized for each collimator angle. We analyzed the conformity index (CI), homogeneity index (HI), gradient index (GI), monitor units (MUs), dose-volume histogram, and mean and maximum doses to the PTV. We also explored the OARs (e.g., bladder, rectum, and femoral heads) via the dose-volume criteria in the treatment plan (e.g., D30%, D50%, V30Gy, and V38Gy of bladder and rectum; D5%, V14Gy, and V22Gy of femoral heads), the dose-volume histogram, and mean and maximum doses for SmartArc VMAT at different collimator angles. Results: No significant difference was found in the VMAT optimization at any of the studied collimator angles. However, if 0.5% accuracy is of concern, then a collimator angle of 45° provides a higher CI and lower HI; a collimator angle of 15° also provides lower HI values, like 45°. A collimator angle of 75° was found to be good for rectum and right femur sparing, while angles of 90° and 30° were found to be good for rectum and left femur sparing, respectively. The PTV dose coverage statistics for each plan are comparatively independent of the collimator angle.
Conclusion: It is concluded that this study gives the planner the freedom to choose any collimator angle from 0°-90° for PTV coverage and to select a suitable collimator angle to spare OARs. Keywords: VMAT, dose-volume histogram, collimator angle, organs-at-risk
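The plan-quality indices compared here can be computed directly from dose-volume data. Below is a minimal sketch, assuming the ICRU 83 homogeneity index HI = (D2% − D98%)/D50% and a simple RTOG-style conformity index; definitions of both indices vary between studies, so these particular formulas are an assumption:

```python
import numpy as np

def dose_at_volume(dose, pct):
    # D_pct%: minimum dose received by the hottest pct% of the voxels
    return np.percentile(dose, 100 - pct)

def homogeneity_index(ptv_dose):
    # ICRU 83 definition: HI = (D2% - D98%) / D50%; 0 is perfectly homogeneous
    return (dose_at_volume(ptv_dose, 2) - dose_at_volume(ptv_dose, 98)) / \
        dose_at_volume(ptv_dose, 50)

def conformity_index(ptv_dose, body_dose, d_ref):
    # RTOG-style CI: volume receiving the reference dose / PTV volume
    return np.count_nonzero(body_dose >= d_ref) / ptv_dose.size
```

For a perfectly uniform PTV dose, HI is 0, and a CI of 1 means the reference isodose volume equals the PTV volume; comparing these values across collimator angles is what the plan ranking above amounts to.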
Procedia PDF Downloads 512