Search results for: probability to breach the SCR
1004 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
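When the gold standard is available for every subject, the nonparametric VUS estimator reduces to the fraction of score triples, one from each ordered disease class, that are correctly ordered. A minimal sketch with hypothetical simulated scores (not the authors' data or their bias-corrected estimators):

```python
import numpy as np

def vus(x1, x2, x3):
    """Nonparametric VUS estimate: fraction of triples (a, b, c),
    one score from each class, correctly ordered a < b < c."""
    a = np.asarray(x1, dtype=float)[:, None, None]
    b = np.asarray(x2, dtype=float)[None, :, None]
    c = np.asarray(x3, dtype=float)[None, None, :]
    return float(np.mean((a < b) & (b < c)))

print(vus([0], [1], [2]))  # → 1.0  (a single perfectly ordered triple)

# Hypothetical test scores for three ordered disease classes
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 50)
intermediate = rng.normal(1.0, 1.0, 50)
diseased = rng.normal(2.0, 1.0, 50)
print(round(vus(healthy, intermediate, diseased), 2))
```

A VUS of 1/6 corresponds to chance-level ordering of three classes, so values well above 0.17 indicate discriminative ability.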
Procedia PDF Downloads 416
1003 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in road traffic accidents, delays in travel due to traffic congestion, and various indirect costs. Much research has been done to identify the various factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the probability of vehicle crashes in the United States due to natural and structural causes using machine learning, excluding spontaneous causes such as overspeeding. These factors range from weather conditions, such as precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to road structure features, such as bumps, roundabouts, no-exit sections, turning loops, and give-way junctions. Probabilities are divided into ten different classes, and all predictions are based on supervised multiclass classification techniques. This study considers crashes that happened in all states, as collected by the US government. To calculate the probability, the multinomial expected value was used, and a classification label was assigned as the crash probability. We applied three different classification models: multiclass Logistic Regression, Random Forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the role played by natural and structural factors in crashes. The paper also provides in-depth insights through exploratory data analysis.
Keywords: road safety, crash prediction, exploratory analysis, machine learning
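The probability-labelling step described above (probabilities dissected into ten classes) can be sketched as follows; the counts, exposure totals, and equal-width binning rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def crash_class(crashes, exposure, n_classes=10):
    """Bin empirical crash probabilities into n_classes equal-width
    labels 0..n_classes-1, usable as multiclass targets."""
    prob = np.asarray(crashes, dtype=float) / np.asarray(exposure, dtype=float)
    return np.minimum((prob * n_classes).astype(int), n_classes - 1)

# Hypothetical crash counts and observation totals per road-segment group
crashes = [3, 12, 45, 7, 0, 190]
exposure = [100, 150, 300, 90, 50, 200]
print(crash_class(crashes, exposure))  # → [0 0 1 0 0 9]
```

The resulting labels would then feed a multiclass classifier such as logistic regression, random forest, or XGBoost, as in the study.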
Procedia PDF Downloads 113
1002 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting low solar energy production periods. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the forms of curves, with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
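The loss-of-energy probability and expected unserved energy indices described above can be computed with a simple minute-by-minute battery energy balance. The function below is a hedged sketch with made-up inputs, not the authors' site data or exact accounting:

```python
def reliability(consumption_kw, production_kw, battery_kwh):
    """Loss-of-energy probability and expected unserved energy (kWh)
    from one-minute consumption/production curves and a battery size."""
    soc = 0.0                        # battery state of charge, kWh
    unserved = 0.0
    short_minutes = 0
    for c, p in zip(consumption_kw, production_kw):
        soc += (p - c) / 60.0        # kWh exchanged in one minute
        if soc > battery_kwh:
            soc = battery_kwh        # surplus beyond storage is spilled
        elif soc < 0.0:
            unserved += -soc         # demand that could not be met
            short_minutes += 1
            soc = 0.0
    return short_minutes / len(consumption_kw), unserved

# Hypothetical flat 1 kW load with no sun and no battery: all demand unserved
loep, eue = reliability([1.0] * 60, [0.0] * 60, battery_kwh=0.0)
print(loep, round(eue, 2))  # → 1.0 1.0
```

Re-running the function over a measured week for a range of battery sizes yields the incremental benefit of each additional unit of storage, which is the cost-benefit curve the paper proposes.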
Procedia PDF Downloads 234
1001 Do Clinical Guidelines Affect Healthcare Quality and Population Health: Quebec Colorectal Cancer Screening Program
Authors: Nizar Ghali, Bernard Fortin, Guy Lacroix
Abstract:
In Quebec, colonoscopy volumes have continued to rise in recent years in the absence of an effective mechanism for monitoring the appropriateness and quality of these exams. In November 2010, the Quebec Government introduced the colorectal cancer screening program with the objective of controlling volume and cost imperfections. This program is based on clinical standards and was initiated for a first group of institutions. One year later, the Government added financial incentives for participating institutions. In this analysis, we assess the causal effect of the two components of this program: clinical pathways and financial incentives. In particular, we assess the reform's effect on healthcare quality and population health in a context where medical remuneration does not directly depend on the additional funding offered by the program. We have data on admission episodes and deaths for 8 years. We use a multistate model, analogous to a difference-in-differences approach, to estimate the reform's effect on the transition probabilities between different states for each patient. Our results show that the reform reduced length of stay without deterioration in hospital mortality or readmission rates. On the other hand, the program contributed to a decrease in the hospitalization rate and a less invasive treatment approach for colorectal surgeries. This is a sign of healthcare quality and population health improvement. We demonstrate in this analysis that physicians' behavior can be affected by both clinical standards and financial incentives, even when the incentives are offered to facilities.
Keywords: multi-state and multi-episode transition model, healthcare quality, length of stay, transition probability, difference in difference
Procedia PDF Downloads 214
1000 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok
Authors: Pratima Pokharel
Abstract:
When sewers overflow due to rainfall in urban areas, public health risks arise when individuals are exposed to the contaminated floodwater. Nevertheless, the extent to which such infections pose a risk to public health remains unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. The results showed that more cases are reported in the wet season than in the dry season. It was also found that in Bangkok, the probability of infection with diarrheal diseases in the wet season is higher for the age group between 15 and 44. The probability of infection is highest for children under 5 years, but they are not influenced by wet weather. Further, this study examined the vulnerability factors that lead to health risks from urban flooding. For the vulnerability analysis, the study chose two variables that contribute to health risk: economic status and age. Assuming that people's economic status depends on the type of house they live in, the study shows the spatial distribution of economic status in vulnerability maps. The vulnerability map results show that people living in Sukhumvit have low vulnerability to health risks with respect to the types of houses they live in. In addition, the probability of diarrhea infection was analyzed by age. Moreover, a field survey was carried out to validate the vulnerability of people. It showed that health vulnerability depends on economic status, income level, and education; people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with a 2-year rainfall event to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow.
The 1D results show higher concentrations for dry weather flows and a large dilution of concentration at the commencement of a rainfall event, with the concentration dropping due to runoff generated after rainfall. The model produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. In addition, the study simulated 5-year and 10-year rainfall events to show the variation in health hazards and risks. It was found that even though hazard coverage is highest with the 10-year rainfall event among the three rainfall events, the risk was observed to be the same for the 5-year and 10-year rainfall events.
Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework
Procedia PDF Downloads 76
999 Predictors of Pelvic Vascular Injuries in Patients with Pelvic Fractures from Major Blunt Trauma
Authors: Osama Zayed
Abstract:
Aim of the work: The aim of this study is to assess the predictors of pelvic vascular injuries in patients with pelvic fractures from major blunt trauma. Methods: This study was conducted as a tool-assessment study. Forty-six patients with pelvic fractures from major blunt trauma arriving at the emergency department of Suez Canal University Hospital were recruited. Data were collected from a questionnaire covering personal data of the studied patients and full medical history, clinical examinations, outcome measures (the Physiological and Operative Severity Score for the enumeration of Mortality and morbidity, POSSUM), and laboratory and imaging studies. Patients underwent surgical interventions or further investigations based on the conventional standards for intervention. All patients were followed up during the conservative, operative, and post-operative periods in the hospital to interpret the predictive scores of vascular injuries. Results: Significant predictors of vascular injuries according to computed tomography (CT) scan include age, male gender, lower Glasgow Coma Scale (GCS) scores, occurrence of hypotension, mortality rate, higher physiological POSSUM scores, presence of ultrasound-detected collection, type of management, higher systolic blood pressure (SBP) and diastolic blood pressure (DBP) POSSUM scores, presence of abdominal injuries, and poor outcome. Conclusions: There was a higher frequency of males than females among the studied patients. There was a high probability of morbidity and a low probability of mortality among patients. Our study demonstrates that the POSSUM score can be used as a predictor of vascular injury in pelvic fracture patients.
Keywords: predictors, pelvic vascular injuries, pelvic fractures, major blunt trauma, POSSUM
Procedia PDF Downloads 342
998 Future Projection of Glacial Lake Outburst Floods Hazard: A Hydrodynamic Study of the Highest Lake in the Dhauliganga Basin, Uttarakhand
Authors: Ashim Sattar, Ajanta Goswami, Anil V. Kulkarni
Abstract:
Glacial lake outburst floods (GLOFs) contribute substantially to mountain hazards in the Himalaya. Over the past decade, high-altitude lakes in the Himalaya have shown notable growth in size and number. The key reason is the rapid retreat of their glacier fronts. Hydrodynamic modeling of a GLOF using the shallow water equations (SWE) helps in understanding its impact on the downstream region. The present study incorporates remote sensing based ice thickness modeling to determine the future extent of the Dhauliganga Lake and to map the overdeepening extent around the highest lake in the Dhauliganga basin. The maximum future volume of the lake, calculated using area-volume scaling, is used to model a GLOF event. The GLOF hydrograph is routed along the channel using one-dimensional and two-dimensional models to understand the flood wave propagation until it reaches the first hydropower station, located 72 km downstream of the lake. The present extent of the lake, calculated using SENTINEL 2 images, is 0.13 km². The maximum future extent of the lake, mapped by investigating the glacier bed, has a calculated scaled volume of 3.48 x 10⁶ m³. The GLOF modeling, releasing the future volume of the lake, resulted in a breach hydrograph with a peak flood of 4995 m³/s just downstream of the lake.
Keywords: GLOF, glacial lake outburst floods, mountain hazard, Central Himalaya, future projection
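The area-volume scaling mentioned above is commonly of the form V = k * A**gamma. The coefficients below follow one widely cited empirical fit for glacial lakes and are an assumption, not necessarily the calibration used in this study:

```python
def lake_volume_m3(area_m2, k=0.104, gamma=1.42):
    """Empirical glacial-lake area-volume scaling V = k * A**gamma,
    with A in m^2 and V in m^3 (coefficients are an assumed fit)."""
    return k * area_m2 ** gamma

# Present-day extent of 0.13 km^2 = 1.3e5 m^2
print(f"{lake_volume_m3(0.13e6):.2e}")
```

For the 0.13 km² present extent this fit gives a volume on the order of 10⁶ m³, consistent in magnitude with the 3.48 x 10⁶ m³ reported for the larger future extent.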
Procedia PDF Downloads 162
997 Effect of Dimensional Reinforcement Probability on Discrimination of Visual Compound Stimuli by Pigeons
Authors: O. V. Vyazovska
Abstract:
Behavioral efficiency is one of the main principles of success in nature. Accuracy of visual discrimination is determined by attention, learning experience, and memory. In the experimental setting, pigeons' responses to visual stimuli presented on a monitor screen are behaviorally manifested by pecking or not pecking the stimulus, the number of pecks, reaction time, etc. The higher the probability of reward, the more likely pigeons are to respond to the stimulus. We trained 8 pigeons (Columba livia) on a stagewise go/no-go visual discrimination task. Sixteen visual stimuli were created from all possible combinations of four binary dimensions: brightness (dark/bright), size (large/small), line orientation (vertical/horizontal), and shape (circle/square). In the first stage, we presented S+ and 4 S- stimuli: the first differed from S+ in all four dimension values, the second shared the brightness dimension with S+, the third shared brightness and orientation with S+, and the fourth shared brightness, orientation, and size. Then all 16 stimuli were added. Pigeons correctly rejected 6-8 of the 11 newly added S- stimuli at the beginning of the second stage. The results revealed that pigeons' behavior at the beginning of the second stage was controlled by the probabilities of reward for the 4 dimensions learned in the first stage. The number of mistakes in dimension discrimination at the beginning of the second stage depended on the number of S- stimuli sharing the dimension with S+ in the first stage. A significant inverse correlation was found between the number of S- stimuli sharing dimension values with S+ in the first stage and the dimensional learning rate at the beginning of the second stage. Pigeons were more confident in discriminating the shape and size dimensions: the mistakes they made at the beginning of the second stage were not associated with these dimensions.
Thus, the results help elucidate the principles of dimensional stimulus control during the learning of compound multidimensional visual stimuli.
Keywords: visual go/no go discrimination, selective attention, dimensional stimulus control, pigeon
Procedia PDF Downloads 142
996 Assessing Children's Probabilistic and Creative Thinking in a Non-formal Learning Context
Authors: Ana Breda, Catarina Cruz
Abstract:
Daily, we face unpredictable events, often attributed to chance, as there is no justification for their occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to handle doubly stochastic processes efficiently (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This adjustment ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curriculum programs from the third year of primary education onwards. In the first years of schooling, children learn to use specific vocabulary, such as never, always, rarely, perhaps, likely, and unlikely, to help them perceive and understand the probability of events; these keywords are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts comes from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experiences. As part of a junior summer school program that took place at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. The experiment was designed as a dynamic, serious ice-breaking game to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and also to gain insight into how the non-formal learning context influenced their achievements.
The criteria used to evaluate probabilistic thinking included the creative ability to conceive events classified in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for future research with children at these levels of education, from different regions, attending public or private schools, to validate and expand our findings.
Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game
Procedia PDF Downloads 28
995 Evaluation of the Actual Nutrition of Patients with Osteoporosis
Authors: Aigul Abduldayeva, Gulnar Tuleshova
Abstract:
Osteoporosis (OP) is a major socio-economic problem and a major cause of disability, reduced quality of life, and premature death among elderly people. In Astana, the study involved 93 respondents, of whom 17 were men (18.3%) and 76 were women (81.7%). The age distribution of the respondents is as follows: 40-59 years (66.7%), 60-75 years (29.0%), 75-90 years (4.3%). In the city of Astana, a general loss of bone mass (CCM) was determined in 83.8% of the patients (nationwide figure, RRP: 79.0%), and normal ultrasound densitometry levels were detected in 16.1% (RRP 21.0%). OP was diagnosed in 20.4% of people over 40 (RRP for citizens: 19.0%), in 25.4% of the group older than 50 (RRP 23.4%), in 22.6% of the group older than 60 (RRP 32.6%), and in 25.0% of the group older than 70 (RRP 47.6%). OPN was detected in 63.4% (RRP 59.6%) of the surveyed population. These data indicate that there is no sharp difference between Astana and other cities in the country regarding the incidence of OP; that is, the situation with OP is not aggravated by any regional characteristics. In the distribution of respondents by clusters, it was found that 80.0% of the respondents with CCM were in the "best urban cluster", 93.8% in the "average urban cluster", and 77.4% in the "poor urban cluster". There is a high rate of construction of new buildings in Astana; presumably, the new settlers inhabit the outskirts of the city, and it is very difficult to trace socio-economic differences there. Based on these data, the following conclusions can be made: 1. According to ultrasound densitometry of the calcaneus, the prevalence rate of CCM among the residents of Astana is 83.3%, and of OP 20.4%, which generally coincides with data elsewhere in the country. 2. The urban population of Astana is at a high degree of risk for low-energy fracture: 46.2% of the population had medium or high risk of fracture, while the nationwide index is 26.7%. 3.
In the development of CCM among residents of the Akmola region, gender, age, and ethnic factors play a significant role. According to ultrasound densitometry, women in Astana are more prone to OP (22.4% of respondents) than men (11.8% of respondents).
Keywords: nutrition, osteoporosis, elderly, urban population
Procedia PDF Downloads 473
994 Decision Making in Medicine and Treatment Strategies
Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi
Abstract:
Three reasons justify the use of decision theory in medicine: 1. The growth and complexity of medical knowledge make it difficult to process treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, these variations raise doubts about the relevance of the choices made by physicians. These differences are generally attributed to differences in the estimated probabilities of treatment success and to differing assessments of the consequences of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making; for this, the decision process should be explained and broken down. A decision problem consists of selecting the best option among a set of choices. The question is what is meant by "best option", that is, what criteria should guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. In this regard, there are three types of situations: certain, risky, and uncertain. 1. In certain situations, the consequences of each decision are known with certainty. 2. In risky situations, every decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences whose probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians.
Decision theory can make decisions more transparent: first, by systematically clarifying the data considered in the problem and, secondly, by asking which basic principles should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus assist the patient and doctor in their choices.
Keywords: decision making, medicine, treatment strategies, patient
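The risky case above, where each option's outcome probabilities are known, is the classic setting for choosing by maximum expected utility. A toy sketch in which the treatments, probabilities, and utilities are entirely hypothetical:

```python
# Each option maps to (probability, utility) pairs over its possible outcomes.
treatments = {
    "surgery":    [(0.70, 0.9), (0.30, 0.2)],   # hypothetical values
    "medication": [(0.95, 0.6), (0.05, 0.4)],
}

def expected_utility(outcomes):
    """Expected utility of an option with known outcome probabilities."""
    return sum(p * u for p, u in outcomes)

best = max(treatments, key=lambda t: expected_utility(treatments[t]))
print(best, round(expected_utility(treatments[best]), 3))  # → surgery 0.69
```

In the uncertain case, where probabilities are unknown, such a table cannot be filled in, and other criteria (e.g. worst-case reasoning) must replace the expectation.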
Procedia PDF Downloads 579
993 Psychological Contract and Job Embeddedness Perspectives to Understand Cynicism as a Behavioural Response to Pressures in the Workplace
Authors: Merkouche Wassila, Marchand Alain, Renaud Stéphane
Abstract:
Organizations face competitive pressures constraining them to modify their practices and change the initial working conditions of employees; however, these modifications have to sustain the initial quality of work and the engagements made toward the workforce. We focus on the importance of promises from the perspective of the psychological contract. According to this perspective, employees perceiving a breach of the expected obligations from the employer may become dissatisfied at work and develop organizational withdrawal behaviors. These are negative counterproductive behaviours aiming to damage the organisation according to the principles of reciprocity and social exchange. We present an integrative model of the determinants and manifestations of organizational withdrawal (OW), a set of behaviors allowing the employee to leave his job or avoid his assigned work. OW contains two main components often studied in silos: work withdrawal (delays, absenteeism, and other adverse behaviors) and job withdrawal (turnover). We use a systemic micro, meso, and macro sociological approach placing the individual at the heart of a system containing individual, organizational, and environmental determinants. Under the influence of these different factors, the individual assesses the type of behavior to adopt. We shed light on OW using both the psychological contract approach, through the perception of its respect by the organization, and the job embeddedness approach, which explains why the employee does not leave the organization and instead remains in his post while practicing negative and counterproductive behaviors such as OW. We specifically study cynicism as a type of OW, as it is a dimension of burnout, and focus on the antecedents of cynicism to try to prevent it in the workplace.
Keywords: burnout, cynicism, job embeddedness, organizational withdrawal, psychological contract
Procedia PDF Downloads 252
992 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients
Authors: Ainura Tursunalieva, Irene Hudson
Abstract:
Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with a wide variety and severity of illnesses. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency. Thus, it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used for assessing the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and then render the assessment outcomes of the individual risk factors into a single numerical value. A higher score indicates a more severe patient condition. Furthermore, the Mortality Probability Model II (MPM II) uses logistic regression based on independent risk factors to predict a patient's probability of mortality. An important overlooked limitation of SAPS II and MPM II is that they do not, to date, include interaction terms between a patient's vital signs. This is a prominent oversight, as there is likely an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when these conditions exist independently. One barrier to including such interaction terms in predictive models is dimensionality, which makes variable selection difficult. We propose an innovative scoring system which takes into account the dependence structure among a patient's vital signs, such as systolic and diastolic blood pressures, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among normally distributed and skewed variables, as some of the vital sign distributions are skewed.
The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient's probability of mortality. The new copula-based approach will accommodate not only a patient's trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time-efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) the agitation-sedation profiles of 37 ICU patients collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate discriminative ability (the area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high-dimensional regions of risk interrelating two or three vital signs in so-called higher-dimensional ROCs.
Keywords: copula, intensive care unit scoring system, ROC curves, vital sign dependence
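A margin-free dependence parameter of the kind described above can be sketched with a Gaussian copula: estimate Kendall's tau from the paired observations, then convert it to the copula parameter via rho = sin(pi * tau / 2). The vital-sign margins, the correlation value, and the simulated data below are all illustrative assumptions, not the study's data:

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau from all pairwise concordances/discordances."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    s = 0.0
    for i in range(len(x)):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (len(x) * (len(x) - 1))

rng = np.random.default_rng(7)
rho_true = 0.6
# Hypothetical paired vital signs with Gaussian dependence but different
# margins: roughly normal systolic BP and a skewed (log-normal) heart rate.
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=2000)
sbp = 120 + 15 * z[:, 0]
hr = np.exp(4.2 + 0.2 * z[:, 1])

tau = kendall_tau(sbp, hr)
rho_hat = np.sin(np.pi * tau / 2)   # Gaussian-copula parameter estimate
print(round(rho_hat, 2))
```

Because tau depends only on ranks, the skewed heart-rate margin does not bias the estimate, which is the point of separating margins from dependence.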
Procedia PDF Downloads 153
991 Aging and Falls Profile from Hospital Databases
Authors: Nino Chikhladze, Tamar Dochviri, Nato Pitskhelauri, Maia Bitskhinashvili
Abstract:
Population aging is a key social and demographic trend of the 21st century. Falls represent a prevalent geriatric syndrome that poses significant risks to the health and independence of older adults. The World Health Organization notes a lack of comprehensive data on falls in low- and middle-income countries, complicating the creation of effective prevention programs. To the authors’ best knowledge, no such studies have been conducted in Georgia. The aim of the study is to explore the epidemiology of falls in the elderly population. The hospitalization database of the National Center for Disease Control and Public Health of Georgia was used for this retrospective study. Fall-related injuries were identified from ICD-10 classifications using class XIX (S and T codes) and class XX for the type of injury (V-Y codes). Statistical data analyses were done using SPSS software version 23.0. The total number of fall-related hospitalizations for individuals aged 65 and older from 2015 to 2021 was 29,697. The study revealed that falls accounted for an average of 63% (ranging from 59% to 66%) of all hospitalizations and 68% (ranging from 65% to 70%) of injury-related hospitalizations during this period. Of all patients, 69% were women and 31% men (Chi2=4482.1, p<0.001). The highest rates of hospitalization were in the age groups 80-84 and 75-79. The probability of fall-related hospitalization was significantly higher in women (p<0.001) than in men in all age groups except 65-69 years. In the target age group of 65 years and older, the probability of hospitalization increased significantly with age (p<0.001). The study's results can be leveraged to create evidence-based awareness programs, design targeted multi-domain interventions addressing specific risk factors, and enhance the quality of geriatric healthcare services in Georgia.
Keywords: elderly population, falls, geriatric patients, hospitalization, injuries
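The reported sex difference can be checked with a back-of-envelope chi-square goodness-of-fit test. The counts below are reconstructed from the rounded percentages in the abstract (an assumption), so the statistic lands near, but not exactly at, the reported Chi2=4482.1:

```python
# Chi-square goodness-of-fit for the reported sex split of fall-related
# hospitalisations (69% women vs. 31% men of 29,697 admissions), tested
# against an equal 50/50 expectation. Exact counts are not given in the
# abstract, so they are reconstructed from the rounded percentages.
total = 29_697
observed = {"women": round(0.69 * total), "men": round(0.31 * total)}
expected = total / 2  # equal split under the null hypothesis

chi2 = sum((obs - expected) ** 2 / expected for obs in observed.values())
# One degree of freedom; a chi2 in the thousands is far beyond the
# p < 0.001 critical value of 10.83, matching the reported significance.
```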
Procedia PDF Downloads 31
990 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling
Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao
Abstract:
In recent years, the application of BIM (Building Information Modelling) to construction schedule planning has been the focus of more and more researchers. In order to assess the reasonableness of a BIM-based construction schedule plan, that is, whether the schedule can be completed on time, some researchers have introduced reliability theory for the evaluation. In this evaluation, the uncertain factors affecting the construction schedule plan are regarded as random variables, and the probability distributions of the random variables are assumed to be normal, determined by two parameters estimated from the mean and standard deviation of statistical data. However, in practical engineering, most of the uncertain influencing factors are not normal random variables, so the evaluation results will be unreasonable under the assumption that the random variables follow the normal distribution. Therefore, in order to get a more reasonable evaluation result, it is necessary to describe the distributions of the random variables more comprehensively. For this purpose, the cubic normal distribution is introduced in this paper to describe the distribution of an arbitrary random variable; it is determined by the first four moments (mean, standard deviation, skewness and kurtosis). In this paper, the BIM model is first built according to the design information of the structure and the construction schedule plan is made based on BIM; the cubic normal distribution is then used to describe the distributions of the random variables, based on the collected statistical data of the random factors influencing the construction schedule plan. Next, the reliability analysis of the BIM-based construction schedule plan can be carried out more reasonably. Finally, more accurate evaluation results can be given, providing a reference for the implementation of the actual construction schedule plan.
In the last part of this paper, the efficiency and accuracy of the proposed methodology for the reliability analysis of the BIM-based construction schedule plan are demonstrated through a practical engineering case.
Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis
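The first four moments that parameterise the cubic normal distribution are estimated as sketched below, here on a simulated right-skewed sample standing in for the collected duration data (the log-normal shape and day units are illustrative assumptions):

```python
import numpy as np

# Estimate the first four moments used to fit a cubic normal
# distribution. The activity durations are simulated (right-skewed, as
# schedule uncertainties often are); real input would be the collected
# statistics on factors influencing the construction schedule plan.
rng = np.random.default_rng(1)
durations = rng.lognormal(mean=2.0, sigma=0.4, size=1000)  # days, assumed

mean = durations.mean()
std = durations.std(ddof=1)
centered = durations - mean
skewness = np.mean(centered ** 3) / std ** 3
kurtosis = np.mean(centered ** 4) / std ** 4  # raw kurtosis; normal = 3

# These four numbers parameterise the cubic normal transform
# X = a + b*U + c*U^2 + d*U^3 with U ~ N(0, 1), whose coefficients are
# chosen so that the first four moments of X match those above.
```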
Procedia PDF Downloads 148
989 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects to keep control of resources. In order to conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers’ pursuit of unrewarding investments is likely to lead to low long-term profitability, managers may use negative accruals to reduce the current year’s earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry- and situation-specific factors.
Keywords: corporate governance, boards of directors, agency theory, earnings management
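The quartile classification rule that defines the binary outcome for the logistic regressions can be sketched as follows; the accrual values are simulated, while the 51-firm by 11-year panel size is taken from the abstract:

```python
import numpy as np

# Quartile rule used to flag earnings management: firm-years with
# discretionary accruals in the bottom quartile (income-decreasing) or
# top quartile (income-increasing) of the sample distribution are
# classified as managing earnings. The accrual values are simulated;
# the study's values come from its 51-firm, 2005-2015 balanced panel.
rng = np.random.default_rng(2)
discretionary_accruals = rng.normal(0.0, 0.05, size=561)  # 51 firms x 11 years

q1, q3 = np.quantile(discretionary_accruals, [0.25, 0.75])
managed_down = discretionary_accruals <= q1   # smoothing / income-decreasing
managed_up = discretionary_accruals >= q3     # income-increasing
is_earnings_manager = managed_down | managed_up  # binary outcome for the logit

share_flagged = is_earnings_manager.mean()    # ~50% by construction of the rule
```

By construction this rule flags roughly half the sample; the regressions then ask which covariates (tenure, free cash flow, controls) predict membership in the flagged half.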
Procedia PDF Downloads 236
988 Robust ANOVA: An Illustrative Study in Horticultural Crop Research
Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini
Abstract:
An attempt has been made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results obtained fortify the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.
Keywords: outliers, robust ANOVA, horticulture, Cook's distance, type I error
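One common outlier screen in this kind of workflow, and one of the abstract's keywords, is Cook's distance. A minimal sketch on simulated plot data with one planted outlier (the yields, group structure, and 4/n cutoff are illustrative assumptions, not the study's data):

```python
import numpy as np

# Cook's distance screen before ANOVA: fit the one-way layout as a
# regression on group indicators, then flag observations whose Cook's
# distance exceeds the common 4/n rule of thumb. Yields are simulated
# with one planted gross outlier.
rng = np.random.default_rng(3)
groups = np.repeat([0, 1, 2], 10)              # 3 treatments, 10 plots each
y = 20 + 2.0 * groups + rng.normal(0, 1, 30)
y[5] += 12.0                                   # planted gross outlier

X = np.eye(3)[groups]                          # one-hot design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat (projection) matrix
resid = y - H @ y
p = X.shape[1]
mse = resid @ resid / (len(y) - p)
h = np.diag(H)
cooks_d = (resid ** 2 / (p * mse)) * h / (1 - h) ** 2

outliers = np.where(cooks_d > 4 / len(y))[0]   # planted point should appear
```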
Procedia PDF Downloads 391
987 The Test of Memory Malingering and Offence Severity
Authors: Kenji Gwee
Abstract:
In Singapore, the death penalty remains in active use for murder and for trafficking of controlled drugs such as heroin. As such, the psychological assessment of defendants can often be high-stakes. The Test of Memory Malingering (TOMM) is employed by government psychologists to determine the degree of effort invested by defendants, which in turn informs the veracity of the overall psychological findings that can determine the life or death of defendants. The purpose of this study was to find out whether defendants facing the death penalty were more likely to invest less effort during psychological assessment (to fake bad in the hope of escaping the death sentence) compared to defendants facing lesser penalties. An archival search of all forensic cases assessed in 2012-2013 by Singapore’s designated forensic psychiatric facility yielded 186 defendants’ TOMM scores. Offence severity, coded into 6 rank-ordered categories, was analyzed in a one-way ANOVA with TOMM score as the dependent variable. There was a statistically significant difference (F(5,87) = 2.473, p = 0.038). A Tukey post-hoc test with Bonferroni correction revealed that defendants facing lower charges (theft, shoplifting, criminal breach of trust) invested less test-taking effort (TOMM = 37.4±12.3, p = 0.033) compared to those facing the death penalty (TOMM = 46.2±8.1). The surprising finding that those facing the death penalty actually invested more test-taking effort than those facing relatively minor charges could be due to higher levels of cooperation when faced with death. Alternatively, other legal avenues to escape the death sentence may have been preferred over the mitigatory chance of a psychiatric defence.
Keywords: capital sentencing, offence severity, Singapore, Test of Memory Malingering
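The reported pairwise contrast can be rebuilt into an indicative F statistic from the published summary statistics alone. Group sizes are not reported in the abstract, so n = 20 per group is an assumption for illustration; the result is only indicative of the reported effect, not a reproduction of it:

```python
# Two-group one-way ANOVA rebuilt from the abstract's summary statistics
# (minor offences: 37.4 +/- 12.3; capital charges: 46.2 +/- 8.1).
n1 = n2 = 20                      # assumed group sizes (not reported)
m1, s1 = 37.4, 12.3               # minor offences (mean, sd)
m2, s2 = 46.2, 8.1                # capital charges (mean, sd)

grand_mean = (n1 * m1 + n2 * m2) / (n1 + n2)
ss_between = n1 * (m1 - grand_mean) ** 2 + n2 * (m2 - grand_mean) ** 2
ss_within = (n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2
df_between, df_within = 1, n1 + n2 - 2
F = (ss_between / df_between) / (ss_within / df_within)  # ~7.1 under these ns
```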
Procedia PDF Downloads 436
986 Detection of Resistive Faults in Medium Voltage Overhead Feeders
Authors: Mubarak Suliman, Mohamed Hassan
Abstract:
Detection of downed conductors occurring with high fault resistance (reaching kilo-ohms) has always been a challenge, especially in countries like Saudi Arabia, where earth resistivity is generally very high (reaching more than 1000 Ω-meter). Newer approaches for the detection of resistive and high-impedance faults are based on the analysis of the fault current waveform. These methods are still under research and development, and they currently lack security and dependability. The other approach is communication-based solutions, which depend on voltage measurement at the ends of overhead line branches and communicate the measured signals to the substation feeder relay or a central control center. However, such a detection method is costly and depends on the availability of a communication medium and infrastructure. The main objective of this research is to utilize the available standard protection schemes to increase the probability of detecting downed conductors occurring with a low magnitude of fault current, while at the same time avoiding unwanted tripping of healthy feeders. By specifying the operating region of the faulty feeder, using the tripping curve for discrimination between faulty and healthy feeders, and properly selecting the core balance current transformer (CBCT) and voltage transformers with smaller measurement errors, it is possible to set the pick-up of the sensitive earth fault current to minimum values of a few amps (i.e., Pick-up Settings = 3 A or 4 A, …) for the detection of earth faults with fault resistance of more than 1-2 kΩ in a 13.8 kV overhead network and more than 3-4 kΩ in a 33 kV overhead network.
By implementing the outcomes of this study, the probability of detecting downed conductors is increased through the utilization of existing schemes (i.e., directional sensitive earth fault protection).
Keywords: sensitive earth fault, zero sequence current, grounded system, resistive fault detection, healthy feeder
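The quoted pickup settings are consistent with a back-of-envelope estimate in which the residual earth-fault current is approximated by the phase-to-earth voltage divided by the fault resistance (source and earthing impedances neglected, which makes this an upper-bound sketch rather than a network study):

```python
import math

# Rough residual current seen by a sensitive earth-fault element for a
# downed conductor through fault resistance Rf. Phase-to-earth voltage
# is the line voltage divided by sqrt(3); series impedances are ignored.
def earth_fault_current(system_kv: float, rf_ohms: float) -> float:
    phase_voltage = system_kv * 1e3 / math.sqrt(3)
    return phase_voltage / rf_ohms

i_138 = earth_fault_current(13.8, 2_000)   # ~4 A at 2 kOhm on 13.8 kV
i_33 = earth_fault_current(33.0, 4_000)    # ~4.8 A at 4 kOhm on 33 kV
# Both sit near the 3-4 A sensitive pickup settings quoted above.
```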
Procedia PDF Downloads 116
985 Harvesting Energy from Lightning Strikes
Authors: Vaishakh Medikeri
Abstract:
Lightning, one of nature’s most marvelous and spectacular phenomena, is among the greatest energy sources left unharnessed. A single bolt of lightning contains about 15 billion joules of energy. This huge amount of energy cannot be harnessed completely, but it can be partially. This paper proposes to harness the energy from lightning strikes. Across the globe, the frequency of lightning is 40-50 flashes per second, or about 1.4 billion flashes per year, each carrying an average energy of about 15 billion joules. When a lightning bolt strikes the ground, a tremendous amount of energy is transferred to the earth, propagating in the form of concentric circular energy waves with a frequency of about 7.83 Hz. Harvesting the lightning bolt directly seems impossible, but harvesting the energy waves produced by the lightning is considerably easier. This can be done using a tricoil energy harnesser, a new device I have invented. A lightning bolt seeks the path of minimum resistance down to the earth; for this, we can erect a lightning rod about 100 meters high and attach it to the tricoil energy harnesser. The tricoil energy harnesser contains three coils whose centers are collinear, all of them parallel to the ground. The first coil has one end connected to the lightning rod and the other end grounded. A secondary coil is wound on the first coil with one end grounded and the other end pointing towards the ground, left unconnected and placed slightly above the ground, so that this end of the coil produces more intense currents and hence more intense energy waves. The first coil produces very high magnetic fields and induces them in the second and third coils. Along with the magnetic fields induced by the first coil, the energy waves, which are currents, also flow through the second and the third coils.
The second and the third coils are connected to a generator, which in turn is connected to a capacitor that stores the electrical energy. The first coil is placed in the middle of the second and the third coils. The stored energy can be used for the transmission of electricity. This new technique of harnessing lightning strikes would be most efficient in places with a higher probability of lightning strikes. Since we are using a sufficiently long lightning rod, the probability of cloud-to-ground strikes is increased. If the proposed apparatus is implemented, it would be a great source of pure and clean energy.
Keywords: generator, lightning rod, tricoil energy harnesser, harvesting energy
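The headline figures above are mutually consistent, as a quick order-of-magnitude check shows (this is arithmetic on the abstract's own numbers, not a harvesting-efficiency claim):

```python
# Consistency check of the quoted lightning statistics.
FLASHES_PER_YEAR = 1.4e9
ENERGY_PER_FLASH_J = 15e9            # ~15 billion joules per bolt
SECONDS_PER_YEAR = 365.25 * 24 * 3600

annual_energy_j = FLASHES_PER_YEAR * ENERGY_PER_FLASH_J    # ~2.1e19 J
average_power_w = annual_energy_j / SECONDS_PER_YEAR       # ~6.7e11 W
flashes_per_second = FLASHES_PER_YEAR / SECONDS_PER_YEAR   # ~44, i.e. 40-50
```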
Procedia PDF Downloads 382
984 Satellite Solutions for Koshi Floods
Authors: Sujan Tyata, Alison Shilpakar, Nayan Bakhadyo, Kushal K. C., Abhas Maskey
Abstract:
The Koshi River, acknowledged as the "Sorrow of Bihar," poses intricate challenges characterized by recurrent flooding. Within the Koshi Basin, floods have historically inflicted damage on infrastructure, agriculture, and settlements. The Koshi River exhibits a highly braided pattern across a 48 km stretch south of Chatara. The devastating flood of the Koshi River, which began in Nepal's Sunsari District in 2008, led to significant casualties and the destruction of agricultural areas. The catastrophe was exacerbated by a levee breach, underscoring the vulnerability of the region's flood defenses. A comprehensive understanding of environmental changes in the area is unveiled through satellite imagery analysis. This analysis facilitates the identification of high-risk zones and their contributing factors. Employing remote sensing, the analysis specifically pinpoints locations vulnerable to levee breaches. Topographical features of the area, along with longitudinal and cross-sectional profiles of the river and levee obtained from a digital elevation model, are used in the hydrological analysis for flood assessment. To mitigate the impact of floods, the strategy involves the establishment of reservoirs upstream. Leveraging satellite data, optimal locations for water storage are identified. This approach presents a dual opportunity not only to alleviate flood risks but also to catalyze the implementation of pumped-storage hydropower initiatives. This holistic approach addresses environmental challenges while championing sustainable energy solutions.
Keywords: flood mitigation, levee, remote sensing, satellite imagery analysis, sustainable energy solutions
Procedia PDF Downloads 64
983 Treating On-Demand Bonds as Cash-In-Hand: Analyzing the Use of “Unconscionability” as a Ground for Challenging Claims for Payment under On-Demand Bonds
Authors: Asanga Gunawansa, Shenella Fonseka
Abstract:
On-demand bonds, also known as unconditional bonds, are commonplace in the construction industry as a means of safeguarding the employer from any potential non-performance by a contractor. On-demand bonds may be obtained from commercial banks, and they serve as an undertaking by the issuing bank to honour payment on demand without questioning and/or considering any dispute between the employer and the contractor in relation to the underlying contract. Thus, whether or not a breach has occurred under the underlying contract, triggering the demand for encashment by the employer, is not a question the bank needs to be concerned with. As a result, an unconditional bond allows the beneficiary to claim the money almost without any condition; an unconditional bond is thus as good as cash-in-hand. In the past, establishing fraud on the part of the employer, of which the bank had knowledge, was the only ground on which a bank could dishonour a claim made under an on-demand bond. However, recent jurisprudence in common law countries shows that courts are beginning to consider unconscionable conduct on the part of the employer in claiming under an on-demand bond as a ground that contractors could rely on to prevent banks from honouring such claims. This has created uncertainty in connection with on-demand bonds and their liquidity. This paper analyzes recent judicial decisions in four common law jurisdictions, namely, England, Singapore, Hong Kong, and Sri Lanka, to identify the scope of using the concept of “unconscionability” as a ground for preventing unreasonable claims for encashment of on-demand bonds.
The objective of this paper is to argue that on-demand bonds have lost their effectiveness as “cash-in-hand” and that this is, in fact, an advantage rather than an impediment to international commerce, as the purpose of such bonds should not be to provide cover for illegal and unconscionable conduct by beneficiaries.
Keywords: fraud, performance guarantees, on-demand bonds, unconscionability
Procedia PDF Downloads 105
982 Implementing of Indoor Air Quality Index in Hong Kong
Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang
Abstract:
Many Hong Kong people nowadays spend most of their time working indoors. Since poor indoor air quality (IAQ) potentially leads to discomfort, ill health, low productivity and even absenteeism in workplaces, establishing statutory IAQ control to safeguard the well-being of residents is urgently required. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed by remedial works, some of those workplaces or buildings are at a relatively late stage of their IAQ problems by the time the investigation or remedial work starts. Screening for IAQ problems should be initiated, as it will provide information on a minimum IAQ baseline requisite to the resolution of the problems. It is not practical to sample all air pollutants that exist. Nevertheless, as a statutory control, reliable, rapid screening is essential, in accordance with a compromise strategy which balances costs against detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter to identify unsatisfactory workplace IAQ and will highlight where fully effective IAQ monitoring and assessment are needed for an intensive diagnosis. A number of representative common indoor pollutants have already been identified through extensive IAQ assessments. The selection of pollutants is a surrogate for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ Index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the research team’s IAQ Index as the pre-assessment probability, and the unsatisfactory-IAQ prevalence from this study as the post-assessment probability, thresholds for maintaining the current measures or performing a further IAQ test or IAQ remedial measures will be proposed.
With justified resources, the proposed IAQ Index and assessment protocol might be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.
Keywords: assessment, index, indoor air quality, surveillance programme
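The pre-assessment/post-assessment logic described above is, in effect, Bayes' rule on the odds scale: a prior probability of unsatisfactory IAQ is updated by the outcome of the rapid screen. The sensitivity and specificity below are illustrative assumptions, not figures from the Hong Kong database:

```python
# Post-assessment probability of unsatisfactory IAQ after a positive
# rapid screen, via the likelihood-ratio form of Bayes' rule.
def post_test_probability(pre_p: float, sensitivity: float, specificity: float) -> float:
    likelihood_ratio = sensitivity / (1 - specificity)  # for a positive screen
    pre_odds = pre_p / (1 - pre_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A workplace with a 20% pre-assessment probability, flagged by a screen
# with assumed 85% sensitivity and 90% specificity:
p = post_test_probability(0.20, 0.85, 0.90)   # ~0.68
```

Thresholds of the kind the abstract proposes would then be cutoffs on this post-assessment probability, deciding between maintaining current measures and commissioning a full IAQ test.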
Procedia PDF Downloads 268
981 Socio-Demographic Factors and Testing Practices Are Associated with Spatial Patterns of Clostridium difficile Infection in the Australian Capital Territory, 2004-2014
Authors: Aparna Lal, Ashwin Swaminathan, Teisa Holani
Abstract:
Background: Clostridium difficile infections (CDIs) have been on the rise globally. In Australia, rates of CDI in all States and Territories have increased significantly since mid-2011. Identifying risk factors for CDI in the community can help inform targeted interventions to reduce infection. Methods: We examine the role of neighbourhood socio-economic status, demography, testing practices and the number of residential aged care facilities on spatial patterns in CDI incidence in the Australian Capital Territory. Data on all tests conducted for CDI were obtained from ACT Pathology by postcode for the period 1 January 2004 through 31 December 2014. The distribution of age groups and the neighbourhood Index of Relative Socio-economic Advantage Disadvantage (IRSAD) were obtained from the Australian Bureau of Statistics 2011 National Census data. A Bayesian spatial conditional autoregressive model was fitted at the postcode level to quantify the relationship between CDI and socio-demographic factors. To identify CDI hotspots, exceedance probabilities were computed at a threshold of twice the estimated relative risk. Results: CDI showed a positive spatial association with the number of tests (RR=1.01, 95% CI 1.00, 1.02) and the resident population over 65 years (RR=1.00, 95% CI 1.00, 1.01). The standardized IRSAD was significantly negatively associated with CDI (RR=0.74, 95% CI 0.56, 0.94). We identified three postcodes with a high probability (0.8-1.0) of excess risk. Conclusions: We demonstrate geographic variations in CDI in the ACT, with a positive association between CDI and socioeconomic disadvantage, and identify areas with a high probability of elevated risk compared with surrounding communities. These findings highlight community-based risk factors for CDI.
Keywords: spatial, socio-demographic, infection, Clostridium difficile
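Exceedance probabilities of the kind used above to flag hotspots are computed directly from posterior draws of each area's relative risk. The draws below are simulated stand-ins for the fitted CAR model's MCMC output (the log-normal posteriors and their spreads are assumptions for illustration):

```python
import numpy as np

# Posterior exceedance probability: the share of posterior draws in
# which an area's relative risk exceeds a threshold (here twice the
# baseline, matching the study's hotspot rule).
rng = np.random.default_rng(5)
rr_hotspot = rng.lognormal(np.log(2.5), 0.2, 4000)   # draws for a hotspot-like postcode
rr_typical = rng.lognormal(np.log(1.0), 0.2, 4000)   # draws for a typical postcode

threshold = 2.0
exceed_hot = np.mean(rr_hotspot > threshold)   # high (in the 0.8-1.0 band)
exceed_typ = np.mean(rr_typical > threshold)   # near zero
```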
Procedia PDF Downloads 322
980 A Study on Net Profit Associated with Queueing System Subject to Catastrophical Events
Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan
Abstract:
In this paper, we study a queueing system in which catastrophic events arrive independently at the service facility according to a Poisson process with rate λ. The nature of a catastrophic event is that, upon its arrival at a service station, it destroys all the customers there, both waiting and in service. We derive the net profit associated with the queueing system and obtain the probability of its busy period.
Keywords: queueing system, net profit, busy period, catastrophic events
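A minimal simulation of the kind of model just described, with assumed rates: Poisson arrivals, exponential service, and an independent Poisson stream of catastrophes that clears everyone waiting and in service. The long-run fraction of time the server is busy estimates the busy-period probability; for this birth-death chain it should approach the smaller root r of mu*r^2 - (lam + mu + gamma)*r + lam = 0, about 0.52 with the rates below:

```python
import random

# Continuous-time simulation of an M/M/1 queue with catastrophes.
def busy_fraction(lam: float, mu: float, gamma: float,
                  horizon: float = 100_000.0, seed: int = 7) -> float:
    rng = random.Random(seed)
    t, n, busy_time = 0.0, 0, 0.0
    while t < horizon:
        rate = lam + gamma + (mu if n > 0 else 0.0)   # total event rate
        dt = rng.expovariate(rate)
        if n > 0:
            busy_time += dt                           # server busy during dt
        t += dt
        u = rng.random() * rate
        if u < lam:
            n += 1                                    # customer arrival
        elif u < lam + gamma:
            n = 0                                     # catastrophe clears system
        elif n > 0:
            n -= 1                                    # service completion
    return busy_time / t

p_busy = busy_fraction(lam=1.0, mu=1.5, gamma=0.2)    # illustrative rates
```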
Procedia PDF Downloads 364
979 Adaptation of Requirement Engineering Practices in Pakistan
Authors: Waqas Ali, Nadeem Majeed
Abstract:
Requirement engineering is at the essence of the software development life cycle: the more time spent on requirement engineering, the higher the probability of success. Effective requirement engineering ensures and predicts a successful software product. This paper presents the adaptation of requirement engineering practices in small and medium-sized companies of Pakistan. The study was conducted by questionnaire to show the extent to which requirement engineering models and practices are followed in Pakistan.
Keywords: requirement engineering, Pakistan, models, practices, organizations
Procedia PDF Downloads 719
978 Transverse Momentum Dependent Factorization and Evolution for Spin Physics
Authors: Bipin Popat Sonawane
Abstract:
After the 1988 European Muon Collaboration (EMC) announcement of the measurement of the spin-dependent structure function, it has become necessary to understand the spin structure of the hadron. In the study of the three-dimensional spin structure of the proton, we need to understand the foundations of quantum field theory in terms of the electroweak and strong theories, using rigorous mathematical theories and models. In the process of understanding the inner dynamical structure of the proton, we need to understand the mathematical formalism of perturbative quantum chromodynamics (pQCD). In QCD processes like proton-proton collisions at high energy, we calculate cross sections using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide information about the probability density of finding quarks and gluons (partons) inside the proton and the probability density of obtaining a final hadronic state from the initial partons. Transverse momentum dependent PDFs and FFs, collectively called TMDs, take into account the intrinsic transverse motion of the partons. TMD factorization in the calculation of cross sections provides a scheme of hadronic and partonic states in a given QCD process. In this study, we review the transverse momentum dependent (TMD) factorization scheme using the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism, the cross section is written as a Fourier transform over a transverse position variable, which has a physical interpretation as an impact parameter. Along with this, we compare this formalism with the improved CSS formalism. We also study the TMD evolution schemes and their comparison with other schemes. This would provide a description of the process of measurement of the transverse single spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC, and ILC energy scales.
This would surely help us to understand the J/psi production mechanism, which is an appropriate test of QCD.
Procedia PDF Downloads 70
977 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains valuable information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes fitted a Gaussian distribution, while in the frequency domain, it fitted a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, shorter-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide represent significantly different properties, both in the time and frequency domains.
Hence, by fitting the PDF of the IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of their PDFs (e.g., the standard deviation), whereas this is difficult from the waveform of the IEGMs itself.
Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
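The two fits described in the conclusion have closed-form maximum-likelihood estimates, sketched below on synthetic stand-ins for the recorded electrograms (the amplitudes, spectral model, and scales are assumptions; real input would be the paired pre/post-Ibutilide IEGMs):

```python
import numpy as np

rng = np.random.default_rng(6)

# Time domain: amplitude samples mimicking an IEGM, fitted to a Gaussian.
amplitude = rng.normal(0.0, 0.12, 2000)            # mV, synthetic

# Frequency domain: Rayleigh-distributed magnitudes arise as the modulus
# of a complex spectrum with independent Gaussian real/imaginary parts.
spectrum = rng.normal(0, 0.05, 2000) + 1j * rng.normal(0, 0.05, 2000)
magnitude = np.abs(spectrum)

# Closed-form ML estimates: Gaussian (mean, sigma) and Rayleigh scale.
mu_hat, sigma_hat = amplitude.mean(), amplitude.std()
rayleigh_scale = np.sqrt(np.mean(magnitude ** 2) / 2)

# Tracking the drug effect then reduces to comparing sigma_hat and
# rayleigh_scale before vs. after Ibutilide: the study reports narrower,
# shorter-tailed PDFs (smaller spread parameters) after administration.
```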
Procedia PDF Downloads 160
976 Reducing Ambulance Offload Delay: A Quality Improvement Project at Princess Royal University Hospital
Authors: Fergus Wade, Jasmine Makker, Matthew Jankinson, Aminah Qamar, Gemma Morrelli, Shayan Shah
Abstract:
Background: Ambulance offload delays (AODs) affect patient outcomes. At baseline, the average AOD at Princess Royal University Hospital (PRUH) was 41 minutes, in breach of the 15-minute target. Aims: By February 2023, we aimed to reduce the average AOD to 30 minutes, the percentage of AODs over 30 minutes (PA30) to 25%, and the percentage over 60 minutes (PA60) to 10%. Methods: Following a root-cause analysis, we implemented two Plan, Do, Study, Act (PDSA) cycles. PDSA-1, ‘drop-and-run’: ambulances waiting >15 minutes for a handover left the patients in the Emergency Department (ED) and returned to the community. PDSA-2: booking in the patients before the handover, allowing direct updates to online records and eliminating the need for handwritten notes. Outcome measures (AOD, PA30, and PA60) and process measures (total ambulances and patients in the ED) were recorded for 16 weeks. Results: In PDSA-1, all parameters increased slightly despite unvarying ED crowding. In PDSA-2, two shifts in the data were seen: initially, a sharp increase in the outcome measures consistent with increased ED crowding, followed by a downward shift when crowding returned to baseline (p<0.01). Within this interval, the average AOD fell to 29.9 minutes, and PA30 and PA60 were 31.2% and 9.2%, respectively. Discussion/conclusion: PDSA-1 did not result in any significant changes; lack of compliance was a key cause. The initial upward shift in PDSA-2 is likely associated with NHS staff strikes. However, during the second interval, the AOD and the PA60 met our targets of 30 minutes and 10%, respectively, improving patient flow in the ED. This was sustained without further input and, if maintained, saves two paramedic shifts every three days.
Keywords: ambulance offload, district general hospital, handover, quality improvement
Procedia PDF Downloads 106
975 Implementation of Statistical Parameters to Form an Entropic Mathematical Models
Authors: Gurcharan Singh Buttar
Abstract:
It has been discovered that although the two areas of statistics and information theory are independent in nature, they can be combined to create applications in multidisciplinary mathematics. This is because, in the field of statistics, statistical parameters (measures) play an essential role with reference to the population (distribution) under investigation, while an information measure is crucial in the study of the ambiguity, assortment, and unpredictability present in an array of phenomena. The following communication is a link between the two: it is demonstrated that the well-known conventional statistical measures can be used as measures of information.
Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency
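The link the abstract draws, a statistical quantity doubling as an information measure, appears in its simplest form in Shannon entropy, which, like variance, is largest for spread-out distributions and smallest for concentrated ones:

```python
import math

# Shannon entropy of a discrete distribution, in bits. Like variance,
# it is maximised by spread (the uniform distribution) and minimised by
# concentration on a single outcome; the example distributions are
# illustrative, not taken from the communication.
def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]     # maximal uncertainty
peaked = [0.97, 0.01, 0.01, 0.01]      # nearly deterministic

h_uniform = shannon_entropy(uniform)   # 2 bits, the maximum for 4 outcomes
h_peaked = shannon_entropy(peaked)     # well under 1 bit
```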
Procedia PDF Downloads 156