Search results for: mixed variables
1324 A Theoretical Analysis for Modeling and Prediction of the Jet Engine Emissions
Authors: Jamal S. Yassin
Abstract:
This paper formulates a mathematical model to predict the amounts of emissions produced by the combustion process of the gas turbine unit of a jet engine. When these emissions exceed standards, they harm the environment and pose real threats to all forms of life on Earth. The amounts of emissions from a gas turbine engine are functions of many operational and design factors. During landing-takeoff (LTO) these amounts differ from those during taxi or cruise, because of the difference in activity period between these operating modes. The emissions are affected by several physical and chemical variables, such as fuel type, fuel-to-air (equivalence) ratio, flame temperature and combustion pressure, in addition to inlet conditions such as ambient temperature and air humidity. To study the influence of these variables on the amounts of emissions during combustion in the gas turbine unit, a computer program has been developed using Visual Basic 6. The combustion process is analyzed as a chemical reaction with shifting equilibrium to find the products of combustion of octane fuel at different equivalence ratios, compressor pressure ratios (CPR) and combustion temperatures. The results show a noticeable influence of the equivalence ratio, CPR and combustion temperature on the amounts of the main pollutant emissions, namely CO, CO2 and NO.
Keywords: Mathematical model, gas turbine unit, equivalence ratio, emissions, shifting equilibrium.
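For orientation, the fuel-air bookkeeping behind such a model can be sketched for lean complete combustion of octane, where the equivalence ratio fixes how much excess air appears in the products. This is only a simplified baseline (an assumption for illustration), not the shifting-equilibrium calculation implemented in the paper's Visual Basic program.

```python
# Sketch: complete lean combustion of octane, C8H18 + (12.5/phi)(O2 + 3.76 N2).
# Products per mole of fuel; no dissociation, so no CO or NO appear here --
# the paper's shifting-equilibrium model is needed for those species.
def lean_octane_products(phi: float) -> dict:
    assert 0 < phi <= 1.0, "sketch valid for lean or stoichiometric mixtures only"
    o2_supplied = 12.5 / phi          # stoichiometric O2 demand is 12.5 mol
    return {
        "CO2": 8.0,
        "H2O": 9.0,
        "O2": o2_supplied - 12.5,     # excess oxygen
        "N2": 3.76 * o2_supplied,     # atmospheric nitrogen carried with the air
    }

for phi in (0.6, 0.8, 1.0):
    print(phi, {k: round(v, 2) for k, v in lean_octane_products(phi).items()})
```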
1323 Measuring Relative Efficiency of Korean Construction Company using DEA/Window
Authors: Jung-Lo Park, Sung-Sik Kim, Sun-Young Choi, Ju-Hyung Kim, Jae-Jun Kim
Abstract:
The sub-prime mortgage crisis that began in the US is regarded as the most severe economic crisis since the Great Depression of the early 20th century. Hidden problems in the efficient operation of businesses were exposed all at once, and many financial institutions went bankrupt or filed for court receivership. The collapse of the physical market led to the bankruptcy of manufacturing and construction businesses. This study analyzes the dynamic efficiency of construction businesses over the five years surrounding the global financial crisis. By uncovering the trend and stability of the efficiency of a construction business, this study's objective is to improve management efficiency in the ever-changing construction market. Variables were selected by analyzing corporate information on the top 20 construction businesses in Korea, and static efficiency in 2008 and dynamic efficiency between 2006 and 2010 were analyzed. Unlike earlier studies, this study deduces the efficiency trend and stability of a construction business over five years by using the DEA/Window model. From the results, efficient and inefficient companies could be identified. In addition, relative efficiency among DMUs was measured by comparing the relationship between the input and output variables of the construction businesses. This study can serve as a reference for improving the management efficiency of companies with low efficiency, based on the efficiency analysis of construction businesses.
Keywords: Construction Company, DEA, DEA/Window, Efficiency Analysis.
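A compact way to see how DEA scores are produced is to write the input-oriented CCR model as a linear program. The sketch below, in Python with SciPy, uses invented toy inputs and outputs (assumptions, not the study's corporate variables) and is only a baseline; the paper's DEA/Window analysis repeats such a model over overlapping windows of years.

```python
# Minimal sketch of an input-oriented CCR DEA model solved as a linear program.
# Toy data: rows are DMUs (companies), columns are inputs/outputs (assumed values).
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 5.0]])   # inputs, one row per DMU
Y = np.array([[3.0], [5.0], [4.0]])                   # outputs, one row per DMU

def ccr_efficiency(o, X, Y):
    """Score of DMU o: min theta s.t. sum_j lam_j*x_j <= theta*x_o, sum_j lam_j*y_j >= y_o."""
    n, m = X.shape            # n DMUs, m inputs
    s = Y.shape[1]            # s outputs
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    A_in = np.c_[-X[o], X.T]                          # sum_j lam_j x_ij - theta*x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]                  # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n         # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```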
1322 Determination of an Efficient Differentiation Pathway of Stem Cells Employing Predictory Neural Network Model
Authors: Mughal Yar M, Israr Ul Haq, Bushra Noman
Abstract:
Stem cells have the ability to differentiate, through mitotic cell division, into a wide range of specialized cell types. Cellular differentiation is the process by which a less specialized cell develops into a more specialized one. This paper studies the fundamental problem of a computational schema for an artificial neural network based on chemical, physical and biological state variables. With this type of study, the system could be modeled for the viable propagation of various economically important stem cell differentiations. The paper proposes various differentiation outcomes of an artificial neural network into a variety of potential specialized cells, implemented in MATLAB 2009. A feed-forward back-propagation network was created with an input vector of five elements, a single hidden layer and one output unit in the output layer. The efficiency of the neural network was evaluated by comparing the results achieved in this study with the experimental input data and the chosen target data. The proposed solution for the efficiency of the artificial neural network was assessed by a comparative analysis of the mean square error at zero epochs. Different data variables were used to test the targeted results.
Keywords: Computational schema, meiosis, mitosis, neural network, stem cell, SOM.
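The network described (five inputs, one hidden layer, one output, trained by back-propagation and judged by mean square error) can be sketched in a few lines. The snippet below uses Python/scikit-learn rather than the MATLAB toolbox actually used, and random placeholder data stands in for the chemical, physical and biological state variables (an assumption).

```python
# Minimal sketch of a 5-input, single-hidden-layer, 1-output feed-forward
# network trained by back-propagation, assessed by mean square error.
# Random data stands in for the study's state variables (an assumption).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 5))                 # five input elements per sample
y = X @ np.array([0.3, -0.1, 0.5, 0.2, -0.4]) + 0.05 * rng.standard_normal(200)

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="adam", max_iter=2000, random_state=0)
net.fit(X, y)                            # back-propagation training
mse = mean_squared_error(y, net.predict(X))
print(f"training MSE: {mse:.4f}")
```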
1321 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions
Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren
Abstract:
Based on multivariate statistical analysis theory, this paper uses principal component analysis, Mahalanobis distance analysis and curve fitting to establish a photovoltaic health model for evaluating the health of photovoltaic panels. First, according to weather conditions, the photovoltaic panel data are classified into five categories: sunny, cloudy, rainy, foggy and overcast, and the health of the panels under these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by the panels has a significant nonlinear relationship with time, and the fitting method was used to fit this relationship and obtain a nonlinear equation. Then, principal component analysis was applied to the independent variables under the five weather conditions. According to the Kaiser-Meyer-Olkin test, three types of weather (overcast, foggy and sunny) meet the conditions for factor analysis, while cloudy and rainy weather do not. Through principal component analysis, the main components of overcast weather are temperature, AQI and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity and time. Finally, taking the variable values in sunny weather as the observed values and the main components of cloudy, foggy, overcast and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values were obtained. A comparative analysis of the degree of deviation of the Mahalanobis distance was carried out to determine the health of the photovoltaic panels under different weather conditions. The weather conditions ranked from small to large Mahalanobis distance fluctuation were: foggy, cloudy, overcast and rainy.
Keywords: Fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB.
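The core computation, projecting the weather variables with PCA and measuring how far new observations drift from a reference sample with the Mahalanobis distance, can be sketched as below. The data here are random placeholders (an assumption), not the study's panel measurements.

```python
# Sketch: PCA on reference-weather samples, then Mahalanobis distance of
# new observations from that reference cloud. Placeholder random data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
reference = rng.normal(size=(300, 5))     # e.g. temperature, AQI, PM2.5, radiation, time
observed = rng.normal(loc=0.5, size=(10, 5))

pca = PCA(n_components=3)                 # keep the leading principal components
ref_pc = pca.fit_transform(reference)
obs_pc = pca.transform(observed)

mean = ref_pc.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref_pc, rowvar=False))
diff = obs_pc - mean
d_mahal = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
print(np.round(d_mahal, 2))               # larger distance = larger deviation from reference
```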
1320 The Impact of Gender Differences on the Expressions of Refusal in Jordanian Arabic
Authors: Hanan Yousef, Nisreen Naji Al-Khawaldeh
Abstract:
The present study investigates the use of expressions of refusal by native speakers of Jordanian Arabic (NSsJA) in different social situations (i.e., invitations, suggestions, and offers). It also investigates the influence of gender on refusal realization patterns within the Jordanian culture, to provide better insight into the relation between situations, strategies and gender. To that end, 70 participants (35 male and 35 female students from different departments at the Hashemite University (HU)) took part in this study, which used mixed methods (a Discourse Completion Test (DCT), interviews and naturally occurring data). Data were analyzed in light of a developed coding scheme. The results showed that NSsJA preferred indirect strategies that mitigate the interaction, such as the "excuse, reason and explanation" strategy, over strategies that aggravate the interaction, such as the "face-threatening" strategy. Moreover, the analysis revealed a considerable impact of gender on the use of linguistic forms expressing refusal among NSsJA. Significant differences in the results of the chi-square test relating to the effect of participants' gender indicate that both males and females were conscious of the gender of their interlocutors. The findings provide worthwhile insights into the relation between types of communicative acts and the rapport between people in social interaction. They assert that refusal should not be labeled a face-threatening act, since it does not always pose a threat, especially where refusal is expressed among friends, relatives and family members. They also highlight some distinctive culture-specific features of the communicative act of refusal.
Keywords: Speech act, refusals, semantic formulas, politeness, Jordanian Arabic, mixed methodology, gender.
1319 Optimal Manufacturing Scheduling for Dependent Details Processing
Authors: Ivan C. Mustakerov, Daniela I. Borissova
Abstract:
The increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimization approach to manufacturing scheduling for dependent details processing with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of details processing, it is possible to use straightforward variable restrictions to satisfy different technological requirements and to formulate easy-to-understand and easy-to-solve optimization tasks for various numbers of details and machines. A case study example is solved for seven base moldings for CNC metalworking machines processed on five different machines with a given processing order among details and machines and known processing time durations. The solution of the linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The manufacturing schedule defines the moments of molding delivery, thus minimizing storage costs and ensuring mounting due-time satisfaction. The proposed optimization approach is based on a real manufacturing plant problem. Different processing schedule variants for different technological restrictions were defined and implemented in the practice of the Bulgarian company RAIS Ltd. The proposed approach could be generalized to other job shop scheduling problems in different applications.
Keywords: Optimal manufacturing scheduling, linear programming, metalworking machines production, dependent details processing.
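When the processing order is fixed, such a model reduces to a linear program over start times: precedence constraints chain the operations and the makespan is minimized. The sketch below, in Python/SciPy with invented durations and a tiny precedence list (assumptions, not the RAIS Ltd. case data), illustrates that formulation.

```python
# Sketch of a fixed-sequence scheduling LP: variables are operation start
# times plus the makespan Cmax; precedence constraints enforce s_j >= s_i + p_i.
# Durations and precedence pairs below are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

p = np.array([3.0, 2.0, 4.0, 1.0])            # processing times of 4 operations
prec = [(0, 1), (1, 2), (0, 3)]                # (i, j): j may start only after i ends

n = len(p)
# variable vector z = [s_0 .. s_{n-1}, Cmax]; minimize Cmax
c = np.r_[np.zeros(n), 1.0]
rows, rhs = [], []
for i, j in prec:                              # s_i - s_j <= -p_i
    row = np.zeros(n + 1); row[i] = 1.0; row[j] = -1.0
    rows.append(row); rhs.append(-p[i])
for i in range(n):                             # s_i - Cmax <= -p_i
    row = np.zeros(n + 1); row[i] = 1.0; row[-1] = -1.0
    rows.append(row); rhs.append(-p[i])
res = linprog(c, A_ub=np.vstack(rows), b_ub=np.array(rhs),
              bounds=[(0, None)] * (n + 1), method="highs")
starts, makespan = res.x[:n], res.x[-1]
print("start times:", np.round(starts, 2), "makespan:", round(makespan, 2))
```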
1318 Power System Damping Using Hierarchical Fuzzy Multi-Input Power System Stabilizer and Static VAR Compensator
Authors: Mohammad Hasan Raouf, Ebrahim Rasooli Anarmarzi, Hamid Lesani, Javad Olamaei
Abstract:
This paper proposes the application of a hierarchical fuzzy system (HFS) to a multi-input power system stabilizer (MPSS) together with a Static VAR Compensator (SVC) in a multi-machine environment. In a conventional fuzzy logic system, the number of rules grows exponentially with the number of variables; the proposed HFS method is developed to solve this problem. To reduce the number of rules, the HFS consists of a number of low-dimensional fuzzy systems in a hierarchical structure, so that the total number of rules increases only linearly with the number of input variables. In the MPSS, to improve efficiency, an auxiliary signal of reactive power deviation (ΔQ) is added to the ΔP + Δω input-type power system stabilizer (PSS). A phasor model of the SVC is described and used in this paper. The performances of the MPSS, the conventional power system stabilizer (CPSS), the hierarchical fuzzy multi-input power system stabilizer (HFMPSS) and the proposed method in damping the inter-area mode of oscillation are examined in response to disturbances. The comparative study is illustrated by digital simulations. It can be seen that the proposed PSS performs satisfactorily over the whole range of disturbances.
Keywords: Power system stabilizer (PSS), hierarchical fuzzy system (HFS), Static VAR compensator (SVC).
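The linear-versus-exponential rule-count claim is easy to verify numerically. The small sketch below assumes a common hierarchical layout of (n-1) chained two-input fuzzy units, each input covered by m membership functions; this layout is an assumption for illustration, not the exact structure of the proposed HFMPSS.

```python
# Rule-count comparison: conventional flat fuzzy system vs. a hierarchical
# chain of two-input subsystems (assumed layout, m fuzzy sets per input).
def flat_rules(n_inputs: int, m: int) -> int:
    return m ** n_inputs                    # grows exponentially with n_inputs

def hierarchical_rules(n_inputs: int, m: int) -> int:
    return (n_inputs - 1) * m ** 2          # grows linearly with n_inputs

for n in (2, 3, 4, 5):
    print(n, "inputs:", flat_rules(n, m=5), "rules flat vs",
          hierarchical_rules(n, m=5), "rules hierarchical")
```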
1317 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients
Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi
Abstract:
Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random-effect model for time-to-event data, in which the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of the gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were used from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the effective factors for cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was considered for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, seven-year survival was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull models with the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the effective factors for the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.
Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.
1316 Studying the Effects of Economic and Financial Development as well as Institutional Quality on Environmental Destruction in the Upper-Middle Income Countries
Authors: Morteza Raei Dehaghi, Seyed Mohammad Mirhashemi
Abstract:
The current study explored the effect of economic development, financial development and institutional quality on environmental destruction in upper-middle income countries during the period 1999-2011. The dependent variable is the logarithm of carbon dioxide emissions, which can be considered an index of environmental destruction or quality given its effects on the environment. Financial development and institutional development variables, as well as some control variables, were considered. In order to study cross-sectional correlation among the countries under study, the Pesaran and Friz tests were used. Since the results of both tests show cross-sectional correlation among the countries, the seemingly unrelated regression method was utilized for model estimation. The results show that the environmental Kuznets curve hypothesis is confirmed in upper-middle income countries and that financial development and institutional quality have a significant effect on environmental quality. The results of this study can be considered by policy makers in countries of different income groups seeking growth accompanied by improved environmental quality.
Keywords: Economic Development, Environmental Destruction, Financial Development, Institutional Development, Seemingly Unrelated Regression.
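The environmental Kuznets curve test is, at its core, a regression of log emissions on income and squared income, with an inverted U confirmed when the linear term is positive and the quadratic term negative. The sketch below shows that specification as a pooled least-squares fit on synthetic data (an assumption; the paper estimates a seemingly unrelated regression system with additional financial and institutional regressors).

```python
# Sketch of the EKC specification: ln(CO2) = b0 + b1*ln(GDP) + b2*ln(GDP)^2 + e.
# Synthetic data; the inverted-U turning point is at ln(GDP) = -b1 / (2*b2).
import numpy as np

rng = np.random.default_rng(2)
ln_gdp = rng.uniform(7, 11, size=400)
ln_co2 = 0.8 * ln_gdp - 0.045 * ln_gdp**2 + rng.normal(scale=0.05, size=400)

X = np.column_stack([np.ones_like(ln_gdp), ln_gdp, ln_gdp**2])
b0, b1, b2 = np.linalg.lstsq(X, ln_co2, rcond=None)[0]
print(f"b1 = {b1:.3f}, b2 = {b2:.3f}")
if b1 > 0 and b2 < 0:
    print("inverted U; turning point at ln(GDP) =", round(-b1 / (2 * b2), 2))
```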
1315 Exploring Additional Intention Predictors within Dietary Behavior among Type 2 Diabetes
Authors: D. O. Omondi, M. K. Walingo, G. M. Mbagaya
Abstract:
Objective: This study explored the possibility of integrating Health Belief concepts as additional predictors of the intention to adopt a recommended diet category within the Theory of Planned Behavior (TPB). Methods: The study adopted a sequential exploratory mixed-methods approach. Qualitative data were generated on attitude, subjective norm, perceived behavioral control and perceptions of predetermined diet categories, including perceived susceptibility, perceived benefits, perceived severity and cues to action. Synthesis of the qualitative data was done using a constant comparative approach during phase 1. A survey tool developed from the qualitative results was used to collect information on the same concepts from 237 eligible Type 2 diabetics. Data analysis included the use of Structural Equation Modeling in Analysis of Moment Structures (AMOS) to explore the possibility of including perceived susceptibility, perceived benefits, perceived severity and cues to action as additional intention predictors in a single nested model. Results: Two models — one based on the traditional TPB {χ2 = 223.3, df = 77, p = .02, χ2/df = 2.9; TLI = .93; CFI = .91; RMSEA (90% CI) = .090 (.039, .146)} and the newly proposed Planned Behavior Health Belief (PBHB) model {χ2 = 743.47, df = 301, p = .019; TLI = .90; CFI = .91; RMSEA (90% CI) = .079 (.031, .14)} — passed the goodness-of-fit tests based on the common fit indicators used. Conclusion: The newly developed PBHB model ranked higher than the traditional TPB model with reference to the chi-square ratios (PBHB: χ2/df = 2.47, p = .019 against TPB: χ2/df = 2.9, p = .02). The integrated model can be used to motivate Type 2 diabetics towards healthy eating.
Keywords: Theory, intention, predictors, mixed methods design.
1314 A Renovated Cook's Distance Based on the Buckley-James Estimate in Censored Regression
Authors: Nazrina Aziz, Dong Q. Wang
Abstract:
Various methods based on regression ideas have been created to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for it thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance (RD*_i) and has been developed based on Cook's idea. The renovated Cook's distance (RD*_i) has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does); and (ii) the change in the estimate of the coefficients when the i-th case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a diagnostic measure such as RD*_i, in which information from the p variables can be considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.
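For orientation, the classical (uncensored) Cook's distance that the renovated version builds on can be computed directly from the hat matrix and residuals, as in the sketch below on synthetic data; the censored Buckley-James variant of the paper is not reproduced here.

```python
# Sketch: classical Cook's distance D_i = (e_i^2 / (p*s^2)) * h_ii / (1-h_ii)^2
# for an ordinary least-squares fit on synthetic (uncensored) data.
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
e = y - H @ y                                 # residuals
s2 = e @ e / (n - p)                          # residual variance estimate
h = np.diag(H)
cooks_d = (e**2 / (p * s2)) * h / (1 - h) ** 2
print("most influential case:", int(np.argmax(cooks_d)), "D =", round(cooks_d.max(), 3))
```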
1313 A Generalization of Planar Pascal's Triangle to Polynomial Expansion and Connection with Sierpinski Patterns
Authors: Wajdi Mohamed Ratemi
Abstract:
The very well-known stacked sets of numbers referred to as Pascal's triangle present the coefficients of the binomial expansion of the form (x+y)^n. This paper presents an approach (the Staircase Horizontal Vertical, SHV-method) to the generalization of the planar Pascal's triangle for polynomial expansions of the form (x+y+z+w+r+⋯)^n. The presented generalization of Pascal's triangle is different from other generalizations given in the literature. The coefficients of the generalized Pascal's triangles presented in this work are generated by inspection, using embedded Pascal's triangles. The coefficients of the I-variable expansion are generated by horizontally laying out the Pascal's elements of the (I-1)-variable expansion, in a staircase manner, and multiplying them with the relevant columns of vertically laid out classical Pascal's elements, hence avoiding factorial calculations for generating the coefficients of the polynomial expansion. Furthermore, the classical Pascal's triangle has a pattern built into it regarding its odd and even numbers, known as the Sierpinski triangle. In this study, a presentation of Sierpinski-like patterns of the generalized Pascal's triangles is given. Applications related to these coefficients of the binomial expansion (Pascal's triangle) or polynomial expansion (generalized Pascal's triangles) can be found in the areas of combinatorics and probability.
Keywords: Generalized Pascal's triangle, Pascal's triangle, polynomial expansion, Sierpinski's triangle, staircase horizontal vertical method.
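The same multinomial coefficients can also be built additively from lower-order coefficients, with no factorials involved, by repeated convolution. The sketch below is a generic illustration of that idea (an assumption for illustration), not the SHV layout itself.

```python
# Sketch: coefficients of (x1 + ... + xk)^n built by repeated convolution,
# i.e. additively from lower-order coefficients, with no factorials involved.
from collections import defaultdict

def multinomial_coefficients(k: int, n: int) -> dict:
    coeffs = {(0,) * k: 1}                       # (x1+...+xk)^0 = 1
    for _ in range(n):
        nxt = defaultdict(int)
        for exps, c in coeffs.items():           # multiply once more by (x1+...+xk)
            for i in range(k):
                e = list(exps); e[i] += 1
                nxt[tuple(e)] += c
        coeffs = dict(nxt)
    return coeffs

# Example: (x+y+z)^3 -- the coefficient of x*y*z is 3!/(1!1!1!) = 6.
print(multinomial_coefficients(3, 3)[(1, 1, 1)])  # -> 6
```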
1312 Catchment Yield Prediction in an Ungauged Basin Using PyTOPKAPI
Authors: B. S. Fatoyinbo, D. Stretch, O. T. Amoo, D. Allopi
Abstract:
This study extends the use of the Drainage Area Regionalization (DAR) method to generating synthetic data and calibrating PyTOPKAPI stream yield for an ungauged basin at a daily time scale. The generation of runoff in determining a river yield depends on various topographic and spatial meteorological variables, which together form the Catchment Characteristics Model (CCM). Many of the conventional CCM models adopted in Africa have been challenged by a paucity of adequate, relevant and accurate data with which to parameterize and validate them. The purpose of generating synthetic flow is to test a hydrological model in a way that does not suffer from the impact of very low or very high flows, thus allowing one to check whether the model is structurally sound. The physically based, watershed-scale hydrologic model employed (PyTOPKAPI) was parameterized with GIS pre-processing parameters and remote-sensing hydro-meteorological variables. Validation with the mean annual runoff ratio shows a decent graphical agreement between the observed and simulated discharge. The Nash-Sutcliffe efficiency and coefficient of determination (R²) values of 0.704 and 0.739 indicate strong model efficiency. Given the current impact of climate variability, water planners now have a tool for flow quantification and sustainable planning purposes.
Keywords: Ungauged Basin, Catchment Characteristics Model, Synthetic data, GIS.
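The two reported skill scores are simple to compute from paired observed and simulated discharge series, as in the sketch below with placeholder arrays (assumptions, not the study's daily flows).

```python
# Sketch: Nash-Sutcliffe efficiency and coefficient of determination (R^2)
# between observed and simulated discharge. Placeholder arrays shown.
import numpy as np

obs = np.array([12.0, 15.0, 9.0, 20.0, 18.0, 11.0, 14.0])
sim = np.array([11.5, 14.0, 10.0, 19.0, 17.5, 12.0, 13.0])

nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
r = np.corrcoef(obs, sim)[0, 1]
print(f"NSE = {nse:.3f}, R^2 = {r**2:.3f}")
```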
1311 Influence of Loudness Compression on Hearing with Bone Anchored Hearing Implants
Authors: Anja Kurz, Marc Flynn, Tobias Good, Marco Caversaccio, Martin Kompis
Abstract:
Bone Anchored Hearing Implants (BAHI) are routinely used in patients with conductive or mixed hearing loss, e.g. if conventional air conduction hearing aids cannot be used. New sound processors and new fitting software now allow parameters such as loudness compression ratios or maximum power output to be adjusted separately. Today it is unclear how the choice of these parameters influences aided speech understanding in BAHI users. In this prospective experimental study, the effects of varying the compression ratio and lowering the maximum power output in a BAHI were investigated. Twelve experienced adult subjects with a mixed hearing loss participated in this study. Four different compression ratios (1.0; 1.3; 1.6; 2.0) were tested along with two different maximum power output settings, resulting in a total of eight different programs. Each participant tested each program for two weeks. A blinded Latin square design was used to minimize bias. For each of the eight programs, speech understanding in quiet and in noise was assessed. For speech in quiet, the Freiburg number test and the Freiburg monosyllabic word test at 50, 65, and 80 dB SPL were used. For speech in noise, the Oldenburg sentence test was administered. Speech understanding in quiet and in noise improved significantly in the aided condition in every program when compared to the unaided condition. However, no significant differences were found between any of the eight programs. In contrast, on a subjective level there was a significant preference for medium compression ratios of 1.3 to 1.6 and higher maximum power output.
Keywords: Bone Anchored Hearing Implant, Compression, Maximum Power Output, Speech understanding.
1310 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow
Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary
Abstract:
An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex OPF problem that has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function; the latter presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The optimization variables are the generator real power outputs and voltage magnitudes, discrete transformer tap settings, and discrete reactive power injections due to capacitor banks. The equality constraints taken into account are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate the consistency of detecting an optimal or near-optimal solution for each objective. Results are compared to solutions obtained using sequential quadratic programming and genetic algorithms.
Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.
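The PSO core, velocity and position updates driven by personal and global bests, can be sketched as below on a toy quadratic fuel-cost function with generation limits. The coefficients are placeholders, and the Newton-Raphson power-flow coupling and discrete variables of the paper are omitted; this is a sketch of plain PSO, not the enhanced algorithm itself.

```python
# Minimal PSO sketch minimizing a quadratic generator fuel-cost function
# sum_i (a_i + b_i*P_i + c_i*P_i^2) subject only to box limits on P_i and a
# soft power-balance penalty. Coefficients and limits are illustrative.
import numpy as np

a, b, c = np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.75, 3.0]), np.array([0.02, 0.0175, 0.0625])
p_min, p_max = np.array([10.0, 10.0, 10.0]), np.array([80.0, 60.0, 50.0])
demand = 150.0

def cost(P):
    penalty = 1e3 * (P.sum() - demand) ** 2          # soft power-balance constraint
    return np.sum(a + b * P + c * P**2) + penalty

rng = np.random.default_rng(4)
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(p_min, p_max, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, p_min, p_max)           # respect generation limits
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("dispatch:", np.round(gbest, 2), "cost:", round(cost(gbest), 2))
```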
1309 The Effect of the Side-Weir Crest Height to Scour in Clay-Sand Mixed Sediments
Authors: F. Ayça Varol Saraçoğlu, Hayrullah Ağaçcıoğlu
Abstract:
Experimental studies to investigate the depth of scour were conducted at a side-weir intersection located in the 180° curved flume in the Hydraulic Laboratory of Yıldız Technical University, Istanbul, Turkey. The side weirs were located at the middle of the straight part of the main channel. Three different lengths (25, 40 and 50 cm) and three different weir crest heights (7, 10 and 12 cm) of the side weir were placed at the side-weir station. There is no scour when the material is only kaolin; therefore, the cohesive bed was prepared by properly mixing the clay material (kaolin) with 31% sand in all experiments. Following a 24 h consolidation time, in order to observe the effect of flow intensity on the scour depth, experiments were carried out for five different upstream Froude numbers in the range of 0.33-0.81. As a result of this study, the relation between scour depth and upstream flow intensity as a function of time has been established. The longitudinal velocities decreased along the side weir towards the downstream, due to the overflow over the side weirs. At the beginning, the scour depth increases rapidly with time and then asymptotically approaches constant values in all experiments for all side-weir dimensions, as in non-cohesive sediment; thus, the scour depth reaches equilibrium conditions. The time to equilibrium depends on the approach flow intensity and the dimensions of the side weirs. For the different weir crest heights, dimensionless scour depths increased with increasing upstream Froude number. Equilibrium scour depths formed with the 7 cm side-weir crest height were higher than those with the 12 cm side-weir crest height; this means that as the side-weir crest height increased, the equilibrium scour depth decreased. Although the upstream side of the scour hole is almost vertical, the downstream side of the hole is inclined.
Keywords: Clay-sand mixed sediments, scour, side weir.
1308 Job in Modern Arabic Poetry: A Semantic and Comparative Approach to Two Poems Referring to the Poet Al-Sayyab
Authors: Jeries Khoury
Abstract:
The use of legendary, folkloric and religious symbols is one of the most important phenomena in modern Arabic poetry. Interestingly enough, most of the pioneers of modern Arabic poetry were so fascinated by biblical symbols that they managed to use many modern techniques to make these symbols adequate to their personal lives on the one hand, and fit their Islamic beliefs on the other. One of the most famous poets to do so was al-Sayya:b. The way he employed one of these symbols, 'Job', the new features he added to this character, and the link between this character and his personal life are discussed in this study. Besides, the study examines the influence of al-Sayya:b on another modern poet, Saadi Yusuf, who, following al-Sayya:b, used the character of Job in a special way, by mixing its features with al-Sayya:b's personal features and thereby creating a new mixed character. A semantic, cultural and comparative analysis of the poems written by al-Sayya:b himself and the other poets who evoked the mixed image of al-Sayya:b-Job can reveal the changes Arab poets made to the original biblical figure of Job to bring it closer to Islamic culture. The paper makes intensive use of intertextuality idioms in order to shed light on the network of relations between three kinds of texts (indeed three 'palimpsests': 1- biblical, the primary text; 2- poetic, al-Sayya:b's secondary version; 3- re-poetic, Sa'di Yusuf's tertiary version). The bottom line in this paper is that al-Sayya:b was directly influenced by the dramatic biblical story of Job more than by the brief Quranic version of the story. In fact, the 'new' character of Job designed by al-Sayya:b himself differs from the original one in so many aspects that we can safely say it is the Sayyabian Job, which cannot be found in the poems of any other poet unless they are evoking the tragedy of al-Sayya:b himself, as Saadi Yusuf did.
Keywords: Arabic poetry, intertextuality, job, meter, modernism, symbolism.
1307 The Impact of Size of the Regional Economic Blocs to the Country’s Flows of Trade: Evidence from COMESA, EAC and Tanzania
Authors: Mosses E. Lufuke, Lorna M. Kamau
Abstract:
This paper attempted to assess whether the size of a regional economic bloc has an impact on the flow of trade of a particular country. Two blocs of different sizes (COMESA and EAC) and one country (Tanzania) were used as points of reference. Using the results of the analyses, the paper also sought to establish whether it was rational for Tanzania to withdraw its membership from COMESA (the larger bloc) to join the EAC (the smaller one). A gravity model was used to estimate the relationship between the variables, employing the bilateral trade flows between Tanzania and the eighteen member countries of the two blocs (COMESA and EAC) for the period 2000-2013. A dummy variable for the regional bloc (bloc) to which each of Tanzania's trade partner countries belongs was also added to the model, to determine which trade bloc exhibits higher trade flows with Tanzania. The findings show that over the period of study (2000-2013) Tanzania recorded more than 257% higher trade volume in the EAC than in COMESA. In conclusion, the flow of trade is explained by many variables other than the size of the regional bloc, and size by itself offers insufficient evidence of a causal relationship. The paper therefore remains neutral on such a staggered switching decision, since more analyses are required to establish the country's trade flows, especially had it held multiple membership of both COMESA and the EAC.
Keywords: Economic Bloc, Flow of Trade, Size of Bloc, Switching.
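The workhorse here is the log-linear gravity equation, in which bilateral trade is regressed on the partners' economic mass, distance, and a bloc dummy. The sketch below fits that specification by least squares on synthetic data (placeholder values, not the Tanzanian trade records).

```python
# Sketch of the gravity model: ln(T_ij) = b0 + b1*ln(GDP_i*GDP_j)
#                                        + b2*ln(dist_ij) + b3*bloc_ij + e.
# Synthetic placeholder data; a positive b3 means the bloc raises trade flows.
import numpy as np

rng = np.random.default_rng(5)
n = 120
ln_gdp_prod = rng.uniform(20, 26, n)
ln_dist = rng.uniform(5, 8, n)
bloc = rng.integers(0, 2, n).astype(float)      # 1 if the partner shares the bloc
ln_trade = 0.9 * ln_gdp_prod - 1.1 * ln_dist + 0.6 * bloc + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), ln_gdp_prod, ln_dist, bloc])
b = np.linalg.lstsq(X, ln_trade, rcond=None)[0]
print("estimated coefficients [b0, b1, b2, b3]:", np.round(b, 3))
```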
1306 Robust Iterative PID Controller Based on Linear Matrix Inequality for a Sample Power System
Authors: Ahmed Bensenouci
Abstract:
This paper provides the design steps of a robust Linear Matrix Inequality (LMI) based iterative multivariable PID controller whose duty is to drive a sample power system comprising a synchronous generator connected to a large network via a step-up transformer and a transmission line. The generator is equipped with two control loops, namely the speed/power (governor) loop and the voltage (exciter) loop. Both loops are lumped into one, where the errors in the terminal voltage and output active power represent the controller inputs, and the generator exciter voltage and governor valve position represent its outputs. A multivariable PID is considered here because of its wide use in industry, simple structure and easy implementation; it is also preferred in plants of higher order that cannot be reduced to lower ones. To improve its robustness to variation in the controlled variables, the H∞ norm of the system transfer function is used. To show the effectiveness of the controller, diverse tests, namely step/tracking changes in the controlled variables and variation in plant parameters, are applied. A comparative study between the proposed controller and a robust H∞ LMI-based output feedback controller is given in terms of robustness to disturbance rejection. The simulation results show the superiority of the iterative multivariable PID.
Keywords: Linear matrix inequality, power system, robust iterative PID, robust output feedback control.
1305 The Household-Based Socio-Economic Index for Every District in Peninsular Malaysia
Authors: Nuzlinda Abdul Rahman, Syerrina Zakaria
Abstract:
Deprivation indices are widely used in public health studies. These indices are also referred to as indices of inequality or disadvantage. Even though many indices have been built before, it is believed to be less appropriate to apply existing indices to other countries or areas that have different socio-economic conditions and different geographical characteristics. The objective of this study is to construct an index based on the geographical and socio-economic factors of Peninsular Malaysia, defined as the weighted household-based deprivation index. The study employed variables based on household items, household facilities, school attendance and education level obtained from the Malaysia 2000 census report. Factor analysis is used to extract the latent variables from the indicators, i.e., to reduce the observable variables to a smaller number of components or factors. Based on the factor analysis, two extracted factors were selected, known as the Basic Household Amenities factor and the Middle-Class Household Item factor. It is observed that the districts with lower index values are located in less developed states such as Kelantan, Terengganu and Kedah, while the areas with high index values are located in developed states such as Pulau Pinang, W.P. Kuala Lumpur and Selangor.
Keywords: Factor Analysis, Basic Household Amenities, Middle-Class Household Item, Socio-economic Index.
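A factor-analysis-based index can be sketched in a few lines: extract the latent factors from household indicators and combine the factor scores into a single weighted score. The snippet below uses scikit-learn on random placeholder indicator data and a simple loadings-based weighting (assumptions; the paper's exact weighting scheme is not reproduced).

```python
# Sketch: extract two latent factors from household indicators and form a
# weighted composite index. Random placeholder data; weights assumed to be
# proportional to each factor's sum of squared loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(6)
indicators = rng.random((100, 8))                # districts x household indicators

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(indicators)            # factor scores per district

explained = (fa.components_ ** 2).sum(axis=1)    # sum of squared loadings per factor
weights = explained / explained.sum()
index = scores @ weights                          # weighted household-based index
print("lowest-index districts:", np.argsort(index)[:5])
```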
1304 Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results
Authors: C. Villegas-Quezada, J. Climent
Abstract:
Several works on facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper, a new technique is introduced that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other pixel characteristics of the image, generating a set of p variables. The multivariate data set is approximated with different polynomials minimizing the data-fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA), the method is able to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of a set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and using an AdaBoost classifier to compare F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Keywords: AdaBoost Classifier, Holistic Face Recognition, Minimax Multivariate Approximation, Genetic Algorithm.
1303 On a New Nonlinear Sum-difference Inequality with Application
Authors: Kelong Zheng, Shouming Zhong
Abstract:
A new nonlinear sum-difference inequality in two variables, which generalizes some existing results and can be used as a handy tool in the analysis of certain partial difference equations, is discussed. An example showing the boundedness of solutions of a difference value problem is also given.
Keywords: Sum-difference inequality, nonlinear, boundedness.
1302 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes
Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi
Abstract:
One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a heavy water research reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages compared to uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2-90% ThO2 (enrichment 20%); b) 15% UO2-85% ThO2 (enrichment 10%); c) 30% UO2-70% ThO2 (enrichment 5%); d) 35% UO2-65% ThO2 (enrichment 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for the uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data obtained showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements of a heavy water research reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.
Keywords: Burn-up, heavy water reactor, minor actinides, Monte Carlo, proliferation resistance.
1301 Morphology of Indian Female Athletes of Different Track and Field Events
Authors: Anju Luthra, Rajender Lal, Dhananjoy Shaw
Abstract:
Participation in games and sports in contemporary times has become more competitive with the development of scientific knowledge, skills and methods, along with equipment and applied research in the field. In spite of India being a large country with vast resources and potential, its performance in the world of sports on the whole needs sincere attention for better achievements. Besides the numerous factors responsible for the dismal performance of a sportsperson, physique and body composition, including size, shape and form, are known to play a significant role. The present investigation was undertaken to study the specific morphological characteristics of Indian female track and field athletes. A total of 300 athletes were randomly selected as the sample for the study from six events, with 50 athletes in each event: 100 m, 400 m, shot put, discus throw, long jump and high jump. The study included body weight, body fat percentage, lean body weight, endomorphy, mesomorphy and ectomorphy as variables. The data were analyzed statistically using the mean, standard deviation and analysis of variance. Post-hoc analysis was conducted where the F-ratio was found to be significant at the .05 level. The study concluded that there is a significant difference with regard to the selected variables among Indian female athletes of different track and field events.
Keywords: Indian female athletes, body composition, morphology, somatotypes, track and field.
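The statistical treatment (one-way ANOVA across the six event groups, with post-hoc comparisons where F is significant) can be sketched as below, with synthetic values standing in for a single variable such as body fat percentage (an assumption).

```python
# Sketch: one-way ANOVA across event groups for a single morphological
# variable, using synthetic data in place of the measured athletes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
groups = {ev: rng.normal(loc=mu, scale=2.0, size=50)
          for ev, mu in [("100m", 18.0), ("400m", 18.5), ("shot put", 24.0),
                         ("discus", 23.5), ("long jump", 19.0), ("high jump", 18.8)]}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant at the .05 level -> follow up with post-hoc comparisons")
```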
1300 Accurate Visualization of Graphs of Functions of Two Real Variables
Authors: Zeitoun D. G., Thierry Dana-Picard
Abstract:
The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for the automatic generation of a mesh that adapts to the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
Keywords: Function singularities, mesh generation, point allocation, visualization, collocation least squares method, Augmented Lagrangian method, Uzawa's algorithm, preconditioned conjugate gradient.
1299 Stability Analysis of Linear Switched Systems with Mixed Delays
Authors: Xiuyong Ding, Lan Shu
Abstract:
This paper addresses the stability of switched systems with discrete and distributed time delays. By applying the Lyapunov function and functional method, we show that, if the norm of the system matrices Bi is small enough, asymptotic stability is always achieved. Finally, an example is provided to verify the technical feasibility and operability of the developed results.
Keywords: Switched system, stability, Lyapunov function, Lyapunov functional, delays.
1298 Temperature Distribution in Friction Stir Welding Using Finite Element Method
Authors: Armansyah, I. P. Almanar, M. Saiful Bahari Shaari, M. Shamil Jaffarullah, Nur’amirah Busu, M. Arif Fadzleen Zainal Abidin, M. Amlie A. Kasim
Abstract:
During welding, the amount of heat present in the weld zones determines the quality of the weldment produced. Thus, the heat distribution characteristics and magnitudes in the weld zones, with respect to process variables such as tool pin-shoulder rotational speed and traveling speed, are analyzed using the thermal finite element method. For this purpose, transient thermal finite element analyses are performed to model the temperature distribution and its magnitude in the weld zones with respect to rotational speed and traveling speed during welding. The commercially available software Altair HyperWorks is used to model the three-dimensional tool pin-shoulder and workpieces and to simulate the friction stir process. The results show that increasing the tool rotational speed, at a constant traveling speed, increases the amount of heat generated in the weld zones. On the contrary, increasing the traveling speed, at a constant tool pin-shoulder rotational speed, reduces the amount of heat generated in the weld zones.
Keywords: Friction Stir Welding, Temperature Distribution, Finite Element Method, Altair HyperWorks.
1297 Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads
Authors: Kayijuka Idrissa
Abstract:
This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, using data collected from the National Police of Rwanda in 2012. Several mathematical models were used in order to analyze and compare traffic variables. The work was carried out on Kigali roads, specifically at the roundabouts from the Kigali Business Center (KBC) to Prince House, as the study sites. Mathematical tools were used to analyze the collected data and to understand the relationship between traffic variables. The Poisson distribution method was applied to analyze the number of accidents that occurred on this section of road, from KBC to Prince House. The results show that accidents occurred in 2012 at very high rates because this section has a very narrow single lane on each side, which leads to high congestion of vehicles and, consequently, frequent accidents. Using the speed and density data collected from this section of road, we found that an increase in density results in a decrease in vehicle speed; at the point where the density equals the jam density, the speed becomes zero. The approach is promising in capturing sudden changes in flow patterns and is open to being utilized in a series of intelligent management strategies, especially in non-recurrent congestion detection and control.
Keywords: Statistical methods, Poisson distribution, car moving techniques, traffic flow.
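Two of the ingredients mentioned, a Poisson model for accident counts and a speed-density relation that reaches zero speed at the jam density, can be sketched as follows with placeholder parameter values (not the Kigali police figures).

```python
# Sketch: (1) Poisson probabilities for monthly accident counts, and
# (2) a linear (Greenshields-type) speed-density relation v = v_f*(1 - k/k_jam),
# where speed drops to zero at the jam density. Placeholder parameter values.
import numpy as np
from scipy import stats

lam = 4.2                                   # assumed mean accidents per month
k = np.arange(0, 11)
print("P(X = k):", np.round(stats.poisson.pmf(k, lam), 3))

v_free, k_jam = 60.0, 120.0                 # free-flow speed (km/h), jam density (veh/km)
density = np.linspace(0, k_jam, 7)
speed = v_free * (1 - density / k_jam)      # zero speed at the jam density
flow = density * speed                      # fundamental relation q = k*v
for d, v, q in zip(density, speed, flow):
    print(f"k = {d:5.1f}  v = {v:5.1f}  q = {q:7.1f}")
```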
1296 Efficacy of Selected Mobility Exercises and Participation in Special Games on Psychomotor Abilities, Functional Abilities and Game Performance among Intellectually Disabled Children of Under 14 Age
Authors: J. Samuel Jesudoss
Abstract:
The purpose of the study was to find out the efficacy of selected mobility exercises and participation in special games on psychomotor abilities, functional abilities and skill performance among intellectually disabled children under 14 years of age. Thirty male students studying at Balar Kalvi Nilayam and the YMCA College Special School, Chennai, acted as subjects for the study. They had only mild or moderate intellectual disability. These students did not undergo any special training or coaching programme apart from their regular physical activity classes, which are part of the school curriculum. They were assigned at random, based on age, to the under-14 group of 30, which was divided into three equal groups of ten, one for each experimental treatment. Ten students (treatment group I) underwent calisthenics and special games participation, ten students (treatment group II) underwent aquatics and special games participation, and ten students (treatment group III) underwent yoga and special games participation. The subjects were tested on the selected criterion variables prior to (pre-test) and after twelve weeks of training (post-test). The pre- and post-test data collected from the three groups on functional abilities (self-care, learning, capacity for independent living), psychomotor variables (static balance, eye-hand coordination, simple reaction time) and skill performance (bocce, badminton, table tennis) were statistically examined for significant differences by applying analysis of covariance (ANCOVA). Whenever the F-ratio for the adjusted post-test means was found to be significant, Scheffé's test was applied as a post-hoc test to determine which of the paired mean differences were significant. The results showed that, in the under-14 age group, there was a significant improvement in the selected criterion variables, such as balance, coordination, self-care and learning, and also in bocce, badminton and table tennis skill performance, due to mobility exercises and participation in special games. However, there were no significant differences among the groups.
Keywords: Functional ability, intellectually disabled, mobility exercises, psychomotor ability.
1295 The Effectiveness of Video Clips to Enhance Students’ Achievement and Motivation on History Learning and Facilitation
Authors: L. Bih Ni, D. Norizah Ag Kiflee, T. Choon Keong, R. Talip, S. Singh Bikar Singh, M. Noor Mad Japuni, R. Talin
Abstract:
The purpose of this study is to determine the effectiveness of video clips in enhancing students' achievement and motivation towards the learning and facilitation of history. We use a narrative literature review to illustrate the current state of the art and science in the focused areas of inquiry. We used the experimental method, a systematic scientific research method in which the researcher manipulates one or more variables and controls and measures any changes in other variables. For this purpose, two groups of 30 lower secondary students each were formed: one experimental and one control. The lesson was given to the first group using a computer presentation program with video clips (the experimental group), while the second group was taught the same class using traditional methods based on dialogue and discussion techniques (the control group). Both groups took pre- and post-tests on the material handled in the class. The pre-test analysis did not show statistically significant differences, which confirmed the equivalence of the two groups. Meanwhile, the post-test analysis shows a statistically significant difference between the experimental group and the control group at a significance level of 0.05, in favor of the experimental group.
Keywords: Video clips, Historical Learning and Facilitation, Achievement, Motivation.
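The pre-test/post-test comparison between the two groups of 30 amounts to an independent-samples test at the 0.05 level, which can be sketched as below with synthetic score data (an assumption; the study's actual scores are not reproduced).

```python
# Sketch: independent-samples t-test comparing post-test scores of an
# experimental group (video clips) and a control group. Synthetic scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
experimental = rng.normal(loc=78, scale=8, size=30)
control = rng.normal(loc=70, scale=8, size=30)

t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```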