Search results for: piecewise linear inputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3937

2827 Assessment of Work-Related Stress and Its Predictors in Ethiopian Federal Bureau of Investigation in Addis Ababa

Authors: Zelalem Markos Borko

Abstract:

Work-related stress is a reaction that occurs when work demands become excessive. Unless properly managed, stress leads to high employee turnover, decreased performance, illness, and absenteeism. Yet little has been addressed regarding work-related stress and its predictors in the study area. Therefore, the objective of this study was to assess stress prevalence and its predictors in the study area. To that effect, a cross-sectional study was conducted on 281 employees of the Ethiopian Federal Bureau of Investigation selected by stratified random sampling. Survey questionnaire scales were employed to collect data. Data were analyzed using percentages, Pearson correlation coefficients, simple linear regression, multiple linear regression, independent t-tests and one-way ANOVA. In the present study, 13.9% of participants experienced high stress, 13.5% experienced low stress, and the remaining 72.6% of officers experienced moderate stress. There were no significant group differences among workers by age, gender, marital status, educational level, years of service or police rank. This study concludes that factors such as role conflict, performance over-utilization, role ambiguity, and qualitative and quantitative role overload together predict 39.6% of work-related stress; the remaining 60.4% of the variation in stress is explained by other factors, so further research should be done to identify additional predictors of stress. To prevent occupational stress among police, the Ethiopian Federal Bureau of Investigation should develop stress-reduction strategies based on these factors.

Keywords: work-related stress, Ethiopian federal bureau of investigation, predictors, Addis Ababa

Procedia PDF Downloads 65
2826 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to raw score parameters, and form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
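
As an illustration of the two score-transformation steps described above, the sketch below shows a mean-sigma rescaling of IRT difficulty estimates and a mean/SD-matching linear equating of raw scores. All numbers are invented for the example and are not the study's data.

```python
import numpy as np

# Hypothetical anchor-item difficulty (b) estimates on the two forms.
b_new = np.array([-0.8, -0.2, 0.3, 0.9, 1.4])   # Form R (new)
b_ref = np.array([-0.6, -0.1, 0.5, 1.1, 1.7])   # Form Q (reference)

# Mean-sigma transformation: slope/intercept that put Form R onto the Form Q scale.
A = b_ref.std(ddof=1) / b_new.std(ddof=1)
B = b_ref.mean() - A * b_new.mean()
theta_R = np.array([-1.0, 0.0, 0.7])            # abilities estimated on Form R
theta_on_Q_scale = A * theta_R + B

# Linear equating of raw scores: match the mean and SD of the two score distributions.
scores_R = np.array([12, 15, 18, 20, 23, 25], float)
scores_Q = np.array([11, 14, 17, 21, 24, 26], float)

def linear_equate(x, from_scores, to_scores):
    return to_scores.std(ddof=1) / from_scores.std(ddof=1) * (x - from_scores.mean()) + to_scores.mean()

print(theta_on_Q_scale, linear_equate(19, scores_R, scores_Q))
```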

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 190
2825 Motivation and Criteria as Determinant Factors in Accepting New Talents on User-Generated Content (UGC): Youtube as a Platform

Authors: Shereen Nadira Binti Jasney, Mohd Syuhaidi Bin Abu Bakar, Hafizah Binti Rosli

Abstract:

This quantitative study explored the factors that motivate the public to use YouTube and the criteria the public looks for when accepting new talents on User-Generated Content (UGC). There is a mass of inputs on the net, but the public remains very selective in accepting new talents; it is therefore important to identify the determinant factors that contribute to the acceptance of new talents on UGC. A total of 236 respondents participated in this study, selected through simple random sampling, and the responses were analyzed with descriptive analysis. The findings of this paper advocate that the tremendous expansion and diversification of the music YouTube offers are the main factors that motivate public viewers to use YouTube when accepting new talents. It was also found that being relatable while providing interesting content, having the artist name and song title in the YouTube talent's video title, and the number of views and likes of the video are some of the criteria the public looks for in accepting new talents on UGC. This paper introduces YouTube as a means of discovering new talents in the music industry for the public, especially the younger generations, who are actively engaged with the current digital landscape.

Keywords: motivation, criteria, new talents, UGC, YouTube

Procedia PDF Downloads 281
2824 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm

Authors: Muhammad Umar Kiani, Muhammad Shahbaz

Abstract:

The performance of any vehicle can be predicted through its design/modeling and optimization, and design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle of attack profile. This angle of attack profile is the basic element for completing a specified mission. The flight program of the vehicle is optimized under constraints on the maximum allowed angle of attack and the lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and its capacity to efficiently explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that can work with continuous, discontinuous, linear, and non-linear performance metrics, and it eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system. For launch vehicles, maximum performance and efficiency under different constraints are of great interest. In a launch vehicle, the flight program means the prescribed variation of the vehicle pitching angle during flight, which has a substantial influence on reachable altitude, accuracy of orbit insertion, and aerodynamic loading. Results reveal that the angle of attack profile significantly affects the performance of the vehicle.
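
The sketch below illustrates the kind of evolutionary search described above on a deliberately simplified problem: a piecewise-linear angle-of-attack profile is mutated and selected to maximize a toy altitude score with an angle-of-attack penalty. The objective function is a stand-in, not the authors' 3-DOF MATLAB model.

```python
import numpy as np

rng = np.random.default_rng(0)

def altitude(aoa_profile):
    """Toy stand-in for the 3-DOF ascent simulation: returns a penalized
    'altitude' score for a piecewise-linear angle-of-attack profile (deg)."""
    alt = 100.0 * np.sum(np.cos(np.radians(aoa_profile)))                     # fictitious gain
    penalty = 50.0 * np.sum(np.maximum(np.abs(aoa_profile) - 10.0, 0.0))      # AoA limit
    return alt - penalty

def evolve(n_nodes=6, pop=40, gens=200, sigma=2.0):
    population = rng.uniform(-10, 10, size=(pop, n_nodes))
    for _ in range(gens):
        fitness = np.array([altitude(ind) for ind in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]       # selection
        children = parents + rng.normal(0, sigma, parents.shape)    # mutation
        population = np.vstack([parents, children])
    return population[np.argmax([altitude(ind) for ind in population])]

print(evolve())   # best angle-of-attack profile found (deg at each node)
```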

Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process

Procedia PDF Downloads 401
2823 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same

Authors: Melanie Tang

Abstract:

According to Nietzsche, the eternal recurrence is his most important idea. However, it is perhaps his most cryptic and difficult to interpret. Early readings considered it as a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimensions, and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought becoming scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra — as opposed to The Birth of Tragedy — be taken as the primary source for a study of Nietzsche’s Aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self. In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.

Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time

Procedia PDF Downloads 145
2822 Onmanee Prajuabjinda, Pakakrong Thondeeying, Jipisute Chunthorng-Orn, Bhanuz Dechayont, Arunporn Itharat

Authors: Ekrem Erdem, Can Tansel Tugcu

Abstract:

Improved resource efficiency of production is a key requirement for sustainable growth worldwide. In this regard, by considering energy and tourism as extra inputs to the classical Cobb-Douglas production function, this study aims at investigating the efficiency changes in the North African countries. To this end, the study uses panel data for the period 1995-2010 and adopts the Malmquist index based on data envelopment analysis. Results show that tourism increases technical and scale efficiencies, while it decreases technological change and total factor productivity change. On the other hand, when the production function is augmented by the energy input, technical efficiency change decreases, while technological change, scale efficiency change and total factor productivity change increase. Thus, in order to satisfy the needs for sustainable growth, North African governments should take measures to increase the contribution that tourism makes to economic growth, and others for the efficient use of resources in the energy sector.
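
A minimal sketch of the augmented production-function idea, assuming a synthetic panel and an ordinary least-squares fit of a Cobb-Douglas form in logs with energy and tourism as extra inputs; the Malmquist/DEA computation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                                     # synthetic country-year observations
K, L, E, T = (rng.lognormal(size=n) for _ in range(4))     # capital, labour, energy, tourism
Y = K**0.3 * L**0.5 * E**0.1 * T**0.05 * rng.lognormal(sigma=0.05, size=n)

# Augmented Cobb-Douglas in logs: ln Y = b0 + b1 lnK + b2 lnL + b3 lnE + b4 lnT
X = np.column_stack([np.ones(n), np.log(K), np.log(L), np.log(E), np.log(T)])
beta, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(dict(zip(["const", "lnK", "lnL", "lnE", "lnT"], beta.round(3))))
```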

Keywords: data envelopment analysis, economic efficiency, North African countries, sustainable growth

Procedia PDF Downloads 336
2821 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for considering this database is the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
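
A minimal sketch of the kind of machine-learning ground-motion model described above, assuming synthetic records and scikit-learn's random forest; the real models also include random effects for event and site terms, which are omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
mag = rng.uniform(3.0, 5.8, n)        # magnitude
rhyp = rng.uniform(4.0, 500.0, n)     # hypocentral distance, km
vs30 = rng.uniform(200.0, 760.0, n)   # site-condition proxy
# Synthetic ln(PGA): magnitude scaling, geometric attenuation, site term, noise.
ln_pga = 1.2 * mag - 1.6 * np.log(rhyp) - 0.002 * vs30 + rng.normal(0, 0.5, n)

X = np.column_stack([mag, np.log(rhyp), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```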

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 120
2820 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method

Authors: K. Meera Saheb, K. Krishna Bhaskar

Abstract:

Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points and, when subjected to severe dynamic environments, tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia are to be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half compared to the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear flexible beams with a central concentrated mass were developed, and the results are presented in digital form.

Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates

Procedia PDF Downloads 222
2819 Exploring Dynamics of Regional Creative Economy

Authors: Ari Lindeman, Melina Maunula, Jani Kiviranta, Ronja Pölkki

Abstract:

The aim of this paper is to build a vision of the utilization of creative industry competences in industrial and service firms connected to the smart specialization focus areas of the Kymenlaakso region, Finland. Research indicates that creativity and the use of creative industry inputs can enhance innovation and competitiveness. Currently, creative methods and services are underutilized in regional businesses, and the added value they provide is not well grasped. Methodologically, the research adopts a qualitative exploratory approach. Data is collected in multiple ways, including a survey, focus groups, and interviews. Theoretically, the paper contributes to the discussion about the use of creative industry competences in regional development, and argues for building regional creative economy ecosystems in close co-operation with regional strategies and traditional industries rather than treating regional creative industry ecosystem initiatives as separate from them. The practical contribution of the paper is the creative vision for use by regional authorities in updating the smart specialization strategy as well as boosting the competitiveness of the industrial and creative & cultural sectors. The paper also illustrates a research-based model of vision building.

Keywords: business, cooperation, creative economy, regional development, vision

Procedia PDF Downloads 126
2818 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation focuses on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways, and how many neighbouring trees or what neighbourhood distance needed to be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for the E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effect modelling approaches, with separate models for thinned and unthinned stands. The developed models were validated using two approaches: in the first, models were validated using a subset of data that was not used in model fitting; in the second, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth, and this prediction was improved when competition was included in the model. The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number) or competitors within ~10 m (by distance) of the base of the subject tree are included in the model, which can reduce the time and cost of collecting information about competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots and with a larger number of plots than those used in the present study are recommended.
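
As a sketch of how a spatial competition index can enter such a growth model, the code below computes a distance-weighted (Hegyi-type) index restricted to neighbours within about 10 m and plugs it into a linear growth equation with invented coefficients; the paper's fitted coefficients and the specific best-performing index are not reproduced here.

```python
import numpy as np

def hegyi_ci(dbh_subject, dbh_neighbours, distances, radius=10.0):
    """Distance-weighted size-ratio (Hegyi-type) competition index, counting
    only neighbours within `radius` metres of the subject tree."""
    d = np.asarray(distances, float)
    dbh_n = np.asarray(dbh_neighbours, float)
    mask = d <= radius
    return np.sum(dbh_n[mask] / (dbh_subject * d[mask]))

# Hypothetical plot: one subject tree (dbh 32 cm, height 22 m) and four mapped neighbours.
ci = hegyi_ci(32.0, dbh_neighbours=[28, 41, 22, 35], distances=[3.2, 6.5, 8.1, 12.4])

# Growth model of the form: dbh increment ~ initial dbh + height + CI
# (illustrative coefficients only, not fitted to the Inglehope data).
b0, b_dbh, b_h, b_ci = 0.9, 0.02, 0.05, -0.35
predicted_increment = b0 + b_dbh * 32.0 + b_h * 22.0 + b_ci * ci
print(round(ci, 3), round(predicted_increment, 3))
```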

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 121
2817 Inclusive Cities Decision Matrix Based on a Multidimensional Approach for Sustainable Smart Cities

Authors: Madhurima S. Waghmare, Shaleen Singhal

Abstract:

The concepts of smartness, inclusion, and sustainability are multidisciplinary and fuzzy, rooted in economic and social development theories and policies that get reflected in the spatial development of cities. It is a challenge to convert these concepts from aspirations into transforming actions. There is a dearth of assessment and planning tools to support city planners and administrators in developing smart, inclusive, and sustainable cities. To address this gap, this study develops an inclusive cities decision matrix based on an exploratory approach and using mixed methods. The matrix is soundly based on a review of multidisciplinary urban sector literature and is refined and finalized based on inputs from experts and insights from case studies. The application of the decision matrix to the case study cities in India suggests that contemporary planning tools for cities need to be multidisciplinary and flexible to respond to the unique needs of diverse contexts. The paper suggests that a multidimensional and inclusive approach to city planning can play an important role in building sustainable smart cities.

Keywords: inclusive-cities decision matrix, smart cities in India, city planning tools, sustainable cities

Procedia PDF Downloads 152
2816 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data

Authors: Arjun G. Koppad

Abstract:

The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess the land use land cover (LULC) and forest above-ground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point center quadrate (PCQ) method was used to select trees, and the tree growth parameters, viz., tree height, diameter at breast height (DBH), and diameter at the tree base, were collected. The tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, the LULC classification was done using the Freeman-Durden, Yamaguchi and Pauli polarimetric decompositions; the Freeman-Durden decomposition showed better LULC classification, with an accuracy of 88 percent. An attempt was made to estimate the above-ground biomass using SAR backscatter. The ALOS-2 PALSAR-2 L-band (HH, HV, VV & VH) fully polarimetric quad-pol SAR data were used, and a SAR backscatter-based regression model was implemented to retrieve the forest above-ground biomass of the study area. Cross-polarization (HV) showed a good correlation with forest above-ground biomass. Multiple linear regression analysis was done to estimate the above-ground biomass of the natural forest areas of Joida taluk. Among the different polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV polarization shows a good correlation with field and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB for dense, moderately dense, and sparse forests.
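
A minimal sketch of the backscatter-based regression described above, assuming synthetic plot data: above-ground biomass is regressed on HH and HV backscatter, and the RMSE and R² are reported, mirroring the HH & HV model form.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30                                      # field plots (synthetic)
sigma_hh = rng.uniform(-12, -5, n)          # HH backscatter, dB
sigma_hv = rng.uniform(-20, -12, n)         # HV backscatter, dB
agb = 700 + 15 * sigma_hh + 20 * sigma_hv + rng.normal(0, 60, n)   # t/ha, synthetic

X = np.column_stack([np.ones(n), sigma_hh, sigma_hv])   # AGB ~ b0 + b1*HH + b2*HV
beta, *_ = np.linalg.lstsq(X, agb, rcond=None)
pred = X @ beta
rmse = np.sqrt(np.mean((agb - pred) ** 2))
r2 = 1 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
print(beta.round(2), round(rmse, 1), round(r2, 3))
```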

Keywords: forest, biomass, LULC, back scatter, SAR, regression

Procedia PDF Downloads 22
2815 On Fourier Type Integral Transform for a Class of Generalized Quotients

Authors: A. S. Issa, S. K. Q. AL-Omari

Abstract:

In this paper, we investigate certain spaces of generalized functions for the Fourier and Fourier type integral transforms. We discuss convolution theorems and establish certain spaces of distributions for the considered integrals. The new Fourier type integral is well-defined, linear, one-to-one and continuous with respect to certain types of convergence. Many properties and an inverse problem are also discussed in some detail.

Keywords: Boehmian, Fourier integral, Fourier type integral, generalized quotient

Procedia PDF Downloads 362
2814 Perceived Stigma, Perception of Burden and Psychological Distress among Parents of Intellectually Disable Children: Role of Perceived Social Support

Authors: Saima Shafiq, Najma Iqbal Malik

Abstract:

This study aimed to explore the relationship of perceived stigma, perception of burden and psychological distress among parents of intellectually disabled children, as well as the moderating role of perceived social support on all the study variables. The sample comprised (N = 250) parents of intellectually disabled children. The present study utilized a correlational research design and consisted of two phases. Phase I consisted of two steps covering the translation of the two scales used in the present study and their tryout on a sample of parents (N = 70); the Affiliated Stigma Scale and the Caregiver Burden Inventory were translated into Urdu for the present study. Phase I revealed that the translated scales had satisfactory psychometric properties. Phase II of the study was carried out to test the hypotheses. Correlation, linear regression analysis, and t-tests were computed for hypothesis testing, and hierarchical regression analysis was applied to study the moderating effect of perceived social support. Findings revealed a positive relationship between perceived stigma and psychological distress and between perception of burden and psychological distress. Linear regression analysis showed that perceived stigma and perception of burden were positive predictors of psychological distress. The study did not show a moderating role of perceived social support among the study variables. The major limitation of the study is the sample size, and the major implication is awareness regarding the problems of parents of intellectually disabled children.
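
A minimal sketch of the moderation test described above, assuming synthetic scores and the statsmodels formula interface: perceived social support moderates the stigma-distress link only if the interaction term adds explanatory power in the second step of the hierarchical regression.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 250
df = pd.DataFrame({
    "stigma":  rng.normal(0, 1, n),
    "support": rng.normal(0, 1, n),
})
# Synthetic outcome with main effects only (i.e., no true moderation).
df["distress"] = 0.5 * df.stigma - 0.3 * df.support + rng.normal(0, 1, n)

# Step 1: main effects; Step 2: add the stigma x support interaction term.
step1 = smf.ols("distress ~ stigma + support", df).fit()
step2 = smf.ols("distress ~ stigma * support", df).fit()
print(round(step2.rsquared - step1.rsquared, 4),      # R-squared change
      round(step2.pvalues["stigma:support"], 3))      # interaction p-value
```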

Keywords: perceived stigma, perception of burden, psychological distress, perceived social support

Procedia PDF Downloads 209
2813 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used ways of transforming acoustic energy into electric energy are the linear generator and the conventional generator with a crank mechanism; in both cases a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems and working fluid pollution, which decrease engine power and ecological efficiency. Using a bi-directional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type, chosen here, is an axial impulse turbine, which has a simpler design than the radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, pressure constantly changes according to a certain law due to acoustic wave generation; the peak pressure values are the amplitude, which determines the acoustic power. Gas flowing in the thermoacoustic system periodically changes direction, and its mean velocity is zero, but its peak values can be used for bi-directional turbine rotation. In contrast to a conventional turbine operating on steady flow, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influence the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and numerical research of the impulse turbine were carried out, and the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable: as a converter it is comparable in its characteristics with linear electric generators, but its lifetime will be longer and the engine itself will be smaller due to the rotary motion of the turbine.

Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 376
2812 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality

Authors: Peregrine James Dalziel, Philip Vu Tran

Abstract:

Objective / Introduction: Demand for radiology services from Emergency Departments (ED) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability of process time combined with the random nature of request arrival increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater “cognitive workload” of greater volume may lead to reduced productivity and increased errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia, and to assess the potential productivity gains, quality improvement and cost-effectiveness of increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single resident coverage on weekend days 8:30 am-10:30 pm to a limited period of 2-resident coverage 1 pm-6 pm on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volume at the time of each CT scan was calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the 2 periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy. Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R2=0.29). Increasing daily volume and hourly volume were associated with increased TTA-RR (1.5 min, p<0.01, and 4.8 min, p<0.01, respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents the increased decision-making time available to ED physicians and potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single resident period, but this was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective compared with the potential improved productivity for ED cases requiring CT imaging.

Keywords: workflow, quality, administration, CT, staffing

Procedia PDF Downloads 108
2811 Trend Analysis of Annual Total Precipitation Data in Konya

Authors: Naci Büyükkaracığan

Abstract:

Hydroclimatic observation values are used in the planning of water resources projects, and climate variables are the first of the values used in planning projects. At the same time, the climate system is a complex and interactive system involving the atmosphere, land surfaces, snow and ice, the oceans and other water bodies. The amount and distribution of precipitation, an important climate parameter, is a limiting environmental factor for dispersed living things. Trend analysis is applied to detect the presence of a pattern or trend in a data set, and many trend studies in different parts of the world are made to determine climate change. The detection and attribution of past trends and variability in climatic variables is essential for explaining potential future alterations resulting from anthropogenic activities. Parametric and non-parametric tests are used for determining the trends in climatic variables. In this study, trend tests were applied to the annual total precipitation data obtained in the period 1972-2012 in the Konya Basin. Non-parametric trend tests (Sen's T, Spearman's Rho, Mann-Kendall, Sen's T trend, Wald-Wolfowitz) and a parametric test (mean square) were applied to the annual total precipitation of 15 stations for trend analysis. The linear slopes (change per unit time) of the trends are calculated using a non-parametric estimator developed by Sen, and the beginning of trends is determined using the Mann-Kendall rank correlation test. In addition, the homogeneity of precipitation trends is tested using a method developed by Van Belle and Hughes. As a result of the tests, negative linear slopes were found in the annual total precipitation in Konya.
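
For readers unfamiliar with the non-parametric tests named above, the sketch below implements a basic Mann-Kendall test (without tie correction) and Sen's slope estimator on an invented annual precipitation series; the station data and the Van Belle-Hughes homogeneity test are not reproduced.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall S statistic, normal-approximation Z and two-sided p-value
    (no tie correction; adequate as a first pass for annual totals)."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - stats.norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's estimator: median of all pairwise slopes (change per year)."""
    x = np.asarray(x, float)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

precip = np.array([410, 395, 388, 402, 380, 371, 365, 377, 360, 355], float)  # synthetic mm
print(mann_kendall(precip), sens_slope(precip))
```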

Keywords: trend analysis, precipitation, hydroclimatology, Konya

Procedia PDF Downloads 211
2810 Oxidosqualene Cyclase: A Novel Inhibitor

Authors: Devadrita Dey Sarkar

Abstract:

Oxidosqualene cyclase (OSC) is a membrane-bound enzyme which helps in the formation of the steroid scaffold in higher organisms. In a highly selective cyclization reaction, oxidosqualene cyclase forms lanosterol, with seven chiral centres, starting from the linear substrate 2,3-oxidosqualene. In human cholesterol biosynthesis, OSC represents a target for the discovery of novel anticholesteraemic drugs that could complement the widely used statins. The enzyme oxidosqualene:lanosterol cyclase (OSC) thus represents a novel target for the treatment of hypercholesterolemia. OSC catalyzes the cyclization of the linear 2,3-monoepoxysqualene to lanosterol, the initial four-ringed sterol intermediate in the cholesterol biosynthetic pathway. OSC also catalyzes the formation of 24(S),25-epoxycholesterol, a ligand activator of the liver X receptor. Inhibition of OSC reduces cholesterol biosynthesis and selectively enhances 24(S),25-epoxycholesterol synthesis. Through this dual mechanism, OSC inhibition decreases plasma levels of low-density lipoprotein (LDL) cholesterol and prevents cholesterol deposition within macrophages. The recent crystallization of OSC identifies the mechanism of action of this complex enzyme, setting the stage for the design of OSC inhibitors with improved pharmacological properties for cholesterol lowering and the treatment of atherosclerosis. While studying and designing an inhibitor of oxidosqualene cyclase, I worked on the PDB entry 1W6K, the most widely studied structure for this enzyme, and used several methods, techniques and software tools to identify and validate the top molecules that could act as inhibitors of oxidosqualene cyclase. Thus, by partial blockage of this enzyme, both an inhibition of lanosterol and subsequently cholesterol formation as well as a concomitant effect on HMG-CoA reductase can be achieved. Both effects complement each other and lead to effective control of cholesterol biosynthesis. It is therefore concluded that 2,3-oxidosqualene cyclase plays a crucial role in the regulation of intracellular cholesterol homeostasis, and 2,3-oxidosqualene cyclase inhibitors offer an attractive approach for novel lipid-lowering agents.

Keywords: anticholesteraemic, crystallization, statins, homeostasis

Procedia PDF Downloads 346
2809 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter

Abstract:

The properties of rocks affect the efficiency of surfactants. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at the laboratory scale; the other is to evaluate existing upscaling methods and establish a modified upscaling method. A core is placed in a container full of surfactant solution, and it is assumed there is no space between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine and oil properties are set to those of the Ekofisk Field. The simulation results on matrix permeability show that the oil recovery rate has a strong positive linear relationship with matrix permeability: higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study of matrix porosity shows that the relationship between oil recovery rate and matrix porosity is a negative power function, whereas the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has negative linear relationships with ultimate oil recovery and enhanced oil recovery; however, the relationship between oil recovery and initial water saturation becomes more complicated over imbibition time because of the transition of the dominating force from capillary force to gravity force. Modified upscaling methods are established. This work could be used as a reference for surfactant application in fractured reservoirs, and the description of the relationships between matrix properties and the oil recovery rate and ultimate oil recovery helps to improve upscaling methods.

Keywords: initial water saturation, permeability, porosity, surfactant EOR

Procedia PDF Downloads 156
2808 Effect of Thermal Radiation and Chemical Reaction on MHD Flow of Blood in Stretching Permeable Vessel

Authors: Binyam Teferi

Abstract:

In this paper, a theoretical analysis of blood flow in the presence of thermal radiation and chemical reaction under the influence of a time-dependent magnetic field intensity has been studied. The unsteady non-linear partial differential equations of blood flow consider a time-dependent stretching velocity, the energy equation accounts for a time-dependent temperature of the vessel wall, and the concentration equation includes a time-dependent blood concentration. The governing non-linear partial differential equations of motion, energy, and concentration are converted into ordinary differential equations using similarity transformations and solved numerically by applying ode45; MATLAB code is used to analyze the theoretical results. The effects of physical parameters, viz., permeability parameter, unsteadiness parameter, Prandtl number, Hartmann number, thermal radiation parameter, chemical reaction parameter, and Schmidt number, on the flow variables, viz., velocity of blood flow in the vessel, temperature and concentration of blood, have been analyzed and discussed graphically. From the simulation study, the following important results are obtained: the velocity of blood flow increases with increments in both the permeability and unsteadiness parameters; the temperature of the blood at the vessel wall increases as the Prandtl number and Hartmann number increase; and the concentration of the blood decreases as the time-dependent chemical reaction parameter and Schmidt number increase.
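
As a schematic of the similarity-transform-plus-ODE-solver workflow described above (the paper uses MATLAB's ode45), the sketch below integrates a generic Blasius-type similarity equation with SciPy and a simple shooting loop; the actual blood-flow model couples momentum, energy and concentration equations with the listed parameters, which are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic Blasius-type similarity ODE, f''' + 0.5*f*f'' = 0, as a stand-in for a
# reduced momentum equation written as a first-order system y = [f, f', f''].
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

# Shooting on the unknown initial curvature f''(0) so that f'(eta -> large) -> 1.
def residual(fpp0):
    sol = solve_ivp(rhs, [0, 10], [0.0, 0.0, fpp0])
    return sol.y[1, -1] - 1.0

lo, hi = 0.1, 1.0
for _ in range(60):                 # simple bisection on the shooting residual
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) < 0 else (lo, mid)
print("f''(0) ≈", round(0.5 * (lo + hi), 4))   # classical Blasius value ~0.332
```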

Keywords: stretching velocity, similarity transformations, time dependent magnetic field intensity, thermal radiation, chemical reaction

Procedia PDF Downloads 86
2807 Viability of Zoning Reform in Tackling Urban Inequality in Louisville

Authors: Mojeed A. Oladele

Abstract:

The original zoning system in Louisville promoted social segregation among groups and remained a tool for social exclusion that strengthened preexisting inequalities. The current residential zoning in Louisville is predominantly single-family housing: of the 75% of total land allocated for residential purposes, 55% comprises single-family housing, constituting one form of development and persistent problems of social segregation within the city. The zoning reform initiative prompted the spatial improvement and development of additional middle housing as a more generic and inclusive housing form. The paper investigates the basis of the zoning reform relative to the interconnectedness amongst the discursive objects of analysis and its extensiveness as a strategic tool of structural adjustment. A qualitative methodological assessment generated by planning professionals reflects the effectiveness of the new zoning design in strengthening socio-spatial interactions within the city. The zoning reform is currently at an early stage of implementation and requires more professional/public input and constant iterative processes for a more promising urban planning outcome.

Keywords: zoning reform, viability, urban inequality, housing affordability, Louisville

Procedia PDF Downloads 191
2806 Linearization of Y-Force Equation of Rigid Body Equation of Motion and Behavior of Fighter Aircraft under Imbalance Weight on Wings during Combat

Authors: Jawad Zakir, Syed Irtiza Ali Shah, Rana Shaharyar, Sidra Mahmood

Abstract:

The Y-force equation comprises the aerodynamic forces, drag and side force, with side slip angle β, and the weight component, along with the coupled roll (φ) and pitch (θ) angles. This research deals with the linearization of the Y-force equation using small disturbance theory, assuming equilibrium flight conditions for the different state variables of the aircraft. By using the assumptions of small disturbance theory in the non-linear Y-force equation, we finally arrive at the linearized lateral rigid body equation of motion, which says that the lateral acceleration depends on the other aerodynamic and propulsive forces: the vertical tail, the change in roll rate (Δp) from equilibrium, the change in yaw rate (Δr) from equilibrium, the change in lateral velocity due to side force, and the drag and side force components due to side slip; the lateral equation is also transformed from the coupled rotating frame to the decoupled rotating frame. This paper describes the implementation of this linearized lateral equation for aircraft control systems. Another significant parameter on which the Y-force equation depends is ‘c’, which shows that any change brought about in the weight of an aircraft's wing will cause Δφ and produce a lateral force, Y_c. This simplification also leads to lateral static and dynamic stability. Linearization of the equations is required because much of the mathematics of control system design for aircraft is based on linear equations. This technique is simple and eases the linearization of the rigid body equations of motion without using any high-speed computers.
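
A small symbolic sketch of the small-disturbance idea, assuming an illustrative (not the paper's exact) nonlinear Y-force expression: the Jacobian evaluated at the wings-level equilibrium gives the coefficients of the linearized lateral equation.

```python
import sympy as sp

# Symbols: sideslip beta, roll phi, pitch theta, roll/yaw rates p, r; stability
# derivatives Y_beta, Y_p, Y_r and mass/gravity m, g are left symbolic.
beta, phi, theta, p, r = sp.symbols("beta phi theta p r")
Y_beta, Y_p, Y_r, m, g = sp.symbols("Y_beta Y_p Y_r m g", positive=True)

# Schematic nonlinear Y-force balance (illustrative form only).
Y = Y_beta * beta + Y_p * p + Y_r * r + m * g * sp.sin(phi) * sp.cos(theta)

# Small-disturbance linearization about wings-level equilibrium (phi0 = theta0 = 0).
state = [beta, p, r, phi, theta]
jac = sp.Matrix([Y]).jacobian(state).subs({phi: 0, theta: 0})
sp.pprint(jac)   # the row gives the coefficients of the linearized Y-force equation
```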

Keywords: Y-force linearization, small disturbance theory, side slip, aerodynamic force drag, lateral rigid body equation of motion

Procedia PDF Downloads 492
2805 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System

Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu

Abstract:

In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable; however, running different impairment compensation algorithms has increased transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on a DNN is a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of a 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system: the models optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for a 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN, showing that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and effectively compensate multiple impairments in fiber transmission.

Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission

Procedia PDF Downloads 139
2804 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing

Authors: John Eric C. Bargas, Maria Cecilia M. Marcos

Abstract:

One of the most common ground improvement techniques is deep soil mixing (DSM). As the technique has progressed, there is still a lack of development when it comes to depth control; this was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed and rotational speed in determining the Standard Penetration Test (SPT) N-value of a specific soil. Hydraulic pressure and drilling speed, with a constant rotational speed of 30 rpm, have a positive correlation with SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil: the correlation of SPT N-value with hydraulic pressure yielded R²=0.5377, while the correlation of SPT N-value with drilling speed yielded R²=0.6355. The best-fitted model for sand is a polynomial trend: the correlation of SPT N-value with hydraulic pressure yielded R²=0.7088, while the correlation of SPT N-value with drilling speed yielded R²=0.4354. The low correlation may be attributed to the behavior of sand when the auger penetrates: sand tends to follow the rotation of the auger rather than resisting it, which was observed for very loose to medium dense sand. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² with a positive correlation, with a linear trend observed for cohesive soil and a polynomial trend for sand. Cohesive soil yielded R²=0.7320, a strong relationship; sand also yielded a strong relationship, with a coefficient of determination of R²=0.7203. It is feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of the soil, and the product of hydraulic pressure and drilling speed can substitute for specific energy when estimating the SPT N-value. However, additional considerations are necessary to account for other influencing factors like groundwater and the physical and mechanical properties of the soil.
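
A minimal sketch of the curve-fitting step described above, assuming invented drilling-record/SPT pairs: a linear fit for cohesive soil and a second-order polynomial fit for sand, with R² computed for each.

```python
import numpy as np

# Hypothetical paired observations (not the study's data): drilling records
# logged at the same depths as SPT N-values.
hyd_pressure = np.array([3.1, 4.0, 5.2, 6.3, 7.5, 8.8])   # MPa, cohesive soil
spt_n_clay   = np.array([4, 6, 9, 12, 16, 19])

drill_speed  = np.array([0.9, 1.4, 2.0, 2.7, 3.5, 4.2])   # m/min, sand
spt_n_sand   = np.array([8, 12, 13, 15, 22, 35])

lin = np.polyfit(hyd_pressure, spt_n_clay, 1)    # linear trend for cohesive soil
poly = np.polyfit(drill_speed, spt_n_sand, 2)    # polynomial trend for sand

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(round(r2(spt_n_clay, np.polyval(lin, hyd_pressure)), 3),
      round(r2(spt_n_sand, np.polyval(poly, drill_speed)), 3))
```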

Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing

Procedia PDF Downloads 33
2803 An Effective Route to Control of the Safety of Accessing and Storing Data in the Cloud-Based Data Base

Authors: Omid Khodabakhshi, Amir Rozdel

Abstract:

The subject of cloud computing security research presents a number of challenges because the data center comprises complex private information and always faces various risks of information disclosure by hacker attacks or internal adversaries. Accordingly, the security of virtual machines in the cloud computing infrastructure layer is very important. So far, there are many software solutions for developing security in virtual machines, but using software alone is not enough to solve security problems. The purpose of this article is to examine the challenges and security requirements for accessing and storing data in an insecure cloud environment. In other words, this article proposes a structure for the implementation of highly isolated security-sensitive code using secure computing hardware in virtual environments; it also allows remote code validation with inputs and outputs. We provide these security features even in situations where the BIOS, the operating system, and even the hypervisor are infected. To achieve these goals, we use the hardware support provided by the new Intel and AMD processors, as well as the TPM security chip. In conclusion, the use of these technologies ultimately creates a dynamic root of trust and reduces the TCB to the security-sensitive code.

Keywords: code, cloud computing, security, virtual machines

Procedia PDF Downloads 185
2802 Management and Conservation of Crop Biodiversity in Karnali Mountains of Nepal

Authors: Chhabi Paudel

Abstract:

The food and nutrition security of the people of the mountains of Karnali province of Nepal depends on traditional crop biodiversity. The altitude of the study area ranges from 1800 to 2700 meters above sea level, and the climate is temperate to alpine. Farmers have adopted subsistence-oriented diversified farming systems and have selected crop species, cultivars, and local production systems through their own long adaptation process. The major crop species are finger millet, proso millet, foxtail millet, potato, barley, wheat, mountain rice, buckwheat, amaranth, medicinal plants, and many vegetable species. The genetic and varietal diversity of these underutilized indigenous crops is also very high, which has sustained farming even under uneven climatic events. Biodiversity provides production synergy, inputs, and other agro-ecological services for self-sustainability, but the increase in human population and urban accessibility are seen as threats to biodiversity conservation. Integrated conservation measures are therefore suggested, including agro-tourism and other monetary benefits for the farmers who conserve the local biodiversity.

Keywords: crop biodiversity, climate change, in-situ conservation, resilience, sustainability, agrotourism

Procedia PDF Downloads 93
2801 Optimal Seismic Design of Reinforced Concrete Shear Wall-Frame Structure

Authors: H. Nikzad, S. Yoshitomi

Abstract:

In this paper, the optimal seismic design of reinforced concrete shear wall-frame building structures was carried out using structural optimization. The optimal section sizes were generated through structural optimization based on linear static analysis conforming to the American Concrete Institute building design code (ACI 318-14). An analytical procedure was followed to validate the accuracy of the proposed method by comparing stresses on structural members through the output files of MATLAB and ETABS. In order to account for the difference in stresses in structural elements between ETABS and MATLAB, and to avoid over-stressed members in ETABS, a stress constraint ratio of MATLAB to ETABS was modified and introduced for the most critical load combinations and structural members. Moreover, the seismic design of the structure was done following the International Building Code (IBC 2012), the American Concrete Institute Building Code (ACI 318-14) and the American Society of Civil Engineers (ASCE 7-10) standards. Typical reinforcement requirements for the structural wall, beam and column are discussed and presented using the ETABS structural analysis software, and the placement and detailing of reinforcement of structural members are also explained and discussed. The outcomes of this study show that the modification of section sizes plays a vital role in finding an optimal combination of practical section sizes. In contrast, the optimization problem with size constraints has a higher cost than that without size constraints. Moreover, the comparison of the optimization results with those of the ETABS program was shown to be satisfactory and governed by the ACI 318-14 building design code criteria.

Keywords: structural optimization, seismic design, linear static analysis, etabs, matlab, rc shear wall-frame structures

Procedia PDF Downloads 171
2800 Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Rheumatoid arthritis is a chronic inflammatory disorder which affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first is the image processing stage, in which the images are processed using techniques such as RGB to grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs for the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm that involves training the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.

Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropoagation

Procedia PDF Downloads 529
2799 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Cross-Linked Redox Enzyme/Nanomaterials

Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff

Abstract:

In this work, we describe a new 3-dimensional (3D) network of cross-linked horseradish peroxidase/carbon nanotubes (HRP/CNT) on a thiol-modified Au surface in order to build up effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized with cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, ks, and the electron transfer coefficient were found to be 0.57 s-1 and 0.42, respectively. Based on the electrocatalytic process by direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2. The proposed biosensor modified with the HRP/CNT 3D network displays a broad linear range and a low detection limit for H2O2 determination: the linear range is from 1.0×10−7 to 1.2×10−4 M with a detection limit of 2.2×10−8 M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility and long-term stability. In summary, ease of fabrication, low cost, fast response and high sensitivity are the main advantages of the new biosensor proposed in this study; these advantages would really help the real analytical applicability of the proposed biosensor.
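
As a sketch of how the reported sensitivity, linear range and 3σ detection limit are typically derived from amperometric data, the code below fits a linear calibration and computes the detection limit from an assumed blank standard deviation; the concentrations and currents are illustrative, not the measured values.

```python
import numpy as np

# Hypothetical amperometric calibration points within the reported linear range.
conc = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5, 5e-5, 1.2e-4])                # H2O2, M
current = np.array([2.1e-9, 1.0e-8, 2.0e-8, 1.0e-7, 2.0e-7, 1.0e-6, 2.4e-6]) # A

slope, intercept = np.polyfit(conc, current, 1)   # sensitivity = slope (A/M)
blank_sd = 1.5e-10                                # assumed SD of the blank current, A
lod = 3 * blank_sd / slope                        # 3-sigma detection limit, M
print(f"sensitivity = {slope:.3e} A/M, LOD = {lod:.2e} M")
```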

Keywords: redox enzyme, nanomaterials, biosensors, electrical communication

Procedia PDF Downloads 452
2798 Searchable Encryption in Cloud Storage

Authors: Ren Junn Hwang, Chung-Chien Lu, Jain-Shing Wu

Abstract:

Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, retrieving the target file exactly from the encrypted files is difficult for the cloud server. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce the cost of file transfer communication, the cloud server returns encrypted files in order of relevance. Moreover, when a cloud user inputs an incorrect keyword and the number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies the security requirements for outsourced data storage.

Keywords: fault-tolerance search, multi-keywords search, outsource storage, ranked search, searchable encryption

Procedia PDF Downloads 376