Search results for: executive functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2762

2162 Cognitive Decline in People Living with HIV in India and Correlation with Neurometabolites Using 3T Magnetic Resonance Spectroscopy (MRS): A Cross-Sectional Study

Authors: Kartik Gupta, Virendra Kumar, Sanjeev Sinha, N. Jagannathan

Abstract:

Introduction: A significant number of patients with human immunodeficiency virus (HIV) infection show a neurocognitive decline (NCD) ranging from minor cognitive impairment to severe dementia. The possible causes of NCD in HIV-infected patients include brain injury by HIV before cART, neurotoxic viral proteins, and metabolic abnormalities. In the present study, we compared the level of NCD in asymptomatic HIV-infected patients with changes in brain metabolites measured using magnetic resonance spectroscopy (MRS). Methods: 43 HIV-positive patients (30 males and 13 females) attending the ART center of the hospital and HIV-seronegative healthy subjects were recruited for the study. All participants completed an MRI and MRS examination, detailed clinical assessments, and a battery of neuropsychological tests. All MR investigations were carried out on a 3.0T MRI scanner (Ingenia/Achieva, Philips, Netherlands). The MRI examination protocol included the acquisition of T2-weighted images in axial, coronal, and sagittal planes, and T1-weighted, FLAIR, and DWI images in the axial plane. Patients who showed any apparent lesion on MRI were excluded from the study. T2-weighted images in three orthogonal planes were used to localize the voxel in the left frontal lobe white matter (FWM) and left basal ganglia (BG) for single-voxel MRS. Single-voxel MRS spectra were acquired with a point-resolved spectroscopy (PRESS) localization pulse sequence at an echo time (TE) of 35 ms and a repetition time (TR) of 2000 ms with 64 or 128 scans. Automated preprocessing and determination of absolute metabolite concentrations were performed using LCModel with the water-scaling method, and the Cramer-Rao lower bounds for all metabolites analyzed in the study were below 15%. Levels of total N-acetyl aspartate (tNAA), total choline (tCho), glutamate + glutamine (Glx), and total creatine (tCr) were measured. Cognition was tested using a battery of tests validated for the Indian population.
The cognitive domains tested were memory, attention-information processing, abstraction-executive function, and simple and complex perceptual motor skills. Z-scores normalized for age, sex, and education were used to quantify dysfunction in these individual domains. NCD was defined as dysfunction with a Z-score ≤ -2 in at least two domains. One-way ANOVA was used to compare the difference in brain metabolites between patients and healthy subjects. Results: NCD was found in 23 (53%) patients. There was no significant difference in age, CD4 count, or viral load between the two groups. Maximum impairment was found in the domains of memory and simple motor skills, i.e., 19/43 (44%). The prevalence of deficits in attention-information processing, complex perceptual motor skills, and abstraction-executive function was 37%, 35%, and 33%, respectively. Subjects with NCD had a higher level of glutamate in the frontal region (8.03 ± 2.30 vs. 10.26 ± 5.24; p = 0.001). Conclusion: Among newly diagnosed, ART-naïve retroviral disease patients from India, cognitive decline was found in 53% of patients using tests validated for this population. Those with neurocognitive decline had a significantly higher level of glutamate in the left frontal region. There was no significant difference in age, CD4 count, or viral load at initiation of ART between the two groups.
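The domain-wise classification rule described above can be sketched in a few lines. This is an illustrative sketch, not the study's code: the domain names, raw scores, and normative means/SDs are invented, and the conventional cutoff of two standard deviations below the normative mean is assumed.

```python
# Hedged sketch of the NCD rule: flag neurocognitive decline when at least two
# cognitive domains show dysfunction (Z at or below the cutoff). All values
# below are invented for illustration; they are not study data.

def domain_z(raw_score, norm_mean, norm_sd):
    """Z-score of a raw domain score against age/sex/education norms."""
    return (raw_score - norm_mean) / norm_sd

def has_ncd(domain_z_scores, cutoff=-2.0, min_domains=2):
    """NCD flag: dysfunction (Z <= cutoff) in at least min_domains domains."""
    impaired = [d for d, z in domain_z_scores.items() if z <= cutoff]
    return len(impaired) >= min_domains, impaired

scores = {
    "memory": domain_z(18, 30, 5),        # Z = -2.4 -> impaired
    "attention": domain_z(42, 50, 10),    # Z = -0.8
    "executive": domain_z(9, 21, 5),      # Z = -2.4 -> impaired
    "simple_motor": domain_z(55, 60, 8),  # Z = -0.625
}
flag, impaired = has_ncd(scores)          # flagged: two domains impaired
```

With two domains at or below the cutoff, this hypothetical participant would be counted among the flagged cases.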

Keywords: HIV, neurocognitive decline, neurometabolites, magnetic resonance spectroscopy

Procedia PDF Downloads 204
2161 Corporate Governance and Firm Performance: An Empirical Study from Pakistan

Authors: Mohammed Nishat, Ahmad Ghazali

Abstract:

This study empirically examines the relationship between corporate governance and firm performance, analyzing the governance- and control-related variables hypothesized to affect firm performance. It assesses the mechanism and efficiency of corporate governance in achieving high performance for firms listed on the Karachi Stock Exchange (KSE) over the period 2005 to 2008. Firm performance is measured in three ways: return on assets (ROA), return on equity (ROE), and Tobin's Q. To test the link between firm performance and corporate governance, three categories of corporate governance variables are examined: governance-, ownership-, and control-related variables. A fixed-effect regression model is used to examine the relation between governance and corporate performance for 267 KSE-listed Pakistani firms. The results show that governance-related variables such as block shareholding by individuals have a positive impact on firm performance. When the chief executive officer also chairs the board, firm performance is adversely affected. A negative relationship is also found between insider shareholding and firm performance. Leverage has a negative influence on firm performance, while firm size is positively related to performance.
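The fixed-effect estimation named above can be illustrated with the "within" transformation: demeaning each firm's series removes firm-specific intercepts, after which pooled OLS on the demeaned data recovers the common slope. A minimal sketch with one regressor and synthetic data, not the study's 267-firm panel:

```python
# Within-estimator sketch of a firm fixed-effects regression. Data are
# synthetic: two firms with different intercepts but a common slope of -0.5
# (e.g. leverage depressing ROA), which the within transformation recovers.

def within_demean(panel):
    """Subtract each firm's own means from its (x, y) observations."""
    out = {}
    for firm, obs in panel.items():
        xs = [x for x, _ in obs]
        ys = [y for _, y in obs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        out[firm] = [(x - mx, y - my) for x, y in obs]
    return out

def within_slope(panel):
    """Pooled OLS slope on firm-demeaned pairs."""
    dem = within_demean(panel)
    pairs = [p for obs in dem.values() for p in obs]
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, y in pairs)
    return sxy / sxx

panel = {
    "firm_a": [(0.2, 0.10), (0.4, 0.00), (0.6, -0.10)],
    "firm_b": [(0.1, 0.30), (0.3, 0.20), (0.5, 0.10)],
}
slope = within_slope(panel)  # recovers the common slope of -0.5
```

Demeaning removes the two different firm intercepts exactly, which is why pooled OLS on raw data would be biased here while the within estimator is not.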

Keywords: corporate governance, agency cost, KSE, ROA, Tobin’s Q

Procedia PDF Downloads 405
2160 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was used for the numerical simulation. Polynomial and neural-network-based response surfaces were generated to analyze the relationship between soil parameters and displacements, and the performance of these surrogate models was analyzed and compared by evaluating the root mean square error. A comparative study was then carried out over objective functions and optimization techniques. The objective functions, defined by the least squares method as the norm between predicted displacements and measured values, are categorized by considering measured data with and without instrument uncertainty. Hydro-Québec provided the measured data sets for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and readily handle non-convex and non-differentiable problems, is used to obtain the optimum. The Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) are compared on the minimization problem; although all of these techniques take time to converge, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be used effectively in rockfill dam applications and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
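The inverse-analysis loop described above can be sketched with a minimal PSO. This is a hedged illustration, not the study's setup: the forward model below is a toy linear stand-in for a finite element (Plaxis) run, and the bounds, "measurements", and true parameters are invented.

```python
# Minimal particle swarm optimization (PSO) sketch: find parameters that
# minimize the least-squares misfit between "measured" and predicted
# displacements. The forward model is a toy stand-in for an FE solver.
import random

random.seed(0)

def objective(params, measured):
    """Least-squares misfit between toy-model predictions and measurements."""
    predicted = [params[0] * 1.0 + params[1] * 0.5,
                 params[0] * 0.5 + params[1] * 2.0]
    return sum((p - m) ** 2 for p, m in zip(predicted, measured))

def pso(measured, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO over box bounds; returns the best position found."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p, measured) for p in pos]
    gbest_f = min(pbest_f)
    gbest = pbest[pbest_f.index(gbest_f)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            f = objective(pos[i], measured)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

# "Measured" displacements generated from true parameters (2.0, 3.0):
measured = [2.0 * 1.0 + 3.0 * 0.5, 2.0 * 0.5 + 3.0 * 2.0]
best = pso(measured, bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the real application, each objective evaluation is an expensive FE run, which is exactly why the study builds polynomial and neural-network response surfaces as cheap surrogates before running the optimizer.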

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 142
2159 Approximation by Generalized Lupaş-Durrmeyer Operators with Two Parameter α and β

Authors: Preeti Sharma

Abstract:

This paper deals with the Stancu-type generalization of Lupaş-Durrmeyer operators. We establish some direct results in the polynomial weighted space of continuous functions defined on the interval [0, 1]. A Voronovskaja-type theorem is also studied.

Keywords: Lupaş-Durrmeyer operators, Pólya distribution, weighted approximation, rate of convergence, modulus of continuity

Procedia PDF Downloads 339
2158 Household Wealth and Portfolio Choice When Tail Events Are Salient

Authors: Carlson Murray, Ali Lazrak

Abstract:

Robust experimental evidence of systematic violations of expected utility (EU) establishes that individuals facing risk overweight utility from low-probability gains and losses when making choices. These findings motivated the development of models of preferences with probability weighting functions, such as rank dependent utility (RDU). We solve for the optimal investing strategy of an RDU investor in a dynamic binomial setting, from which we derive implications for investing behavior. We show that relative to EU investors with constant relative risk aversion, commonly measured probability weighting functions produce optimal RDU terminal wealth with significant downside protection and upside exposure. We additionally find that, in contrast to EU investors, RDU investors optimally choose a portfolio that contains fair bets providing payoffs that can be interpreted as lottery outcomes or exposure to idiosyncratic returns. In a calibrated version of the model, we calculate that RDU investors would be willing to pay 5% of their initial wealth for the freedom to trade away from an optimal EU wealth allocation. The dynamic trading strategy that supports the optimal wealth allocation implies portfolio weights that are independent of initial wealth but requires a higher risky share after good stock return histories. Optimal trading also implies the possibility of non-participation when historical returns are poor. Our model fills a gap in the literature by providing new quantitative and qualitative predictions that can be tested experimentally or using data on household wealth and portfolio choice.
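The probability weighting that drives these results can be sketched with the one-parameter Tversky-Kahneman inverse-S function, one commonly measured form; the value gamma = 0.65 is illustrative, not the paper's calibration.

```python
# Sketch of RDU decision weights: rank outcomes worst-to-best and take
# differences of the weighted cumulative distribution. Small-probability
# extreme outcomes end up overweighted relative to expected utility, which is
# the source of the downside protection and lottery-like upside in the text.

def tk_weight(p, gamma=0.65):
    """Tversky-Kahneman (1992) inverse-S probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def rdu_decision_weights(probs, gamma=0.65):
    """Decision weights for outcomes ordered worst-to-best:
    w_i = W(P(rank <= i)) - W(P(rank <= i-1))."""
    weights, cum, w_prev = [], 0.0, 0.0
    for p in probs:
        cum = min(cum + p, 1.0)  # clamp against float drift past 1.0
        w_cum = tk_weight(cum, gamma)
        weights.append(w_cum - w_prev)
        w_prev = w_cum
    return weights

probs = [0.01, 0.98, 0.01]           # worst, middle, best outcome
dweights = rdu_decision_weights(probs)
```

Both 1% tail outcomes receive decision weights several times their objective probability, while the weights still sum to one.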

Keywords: behavioral finance, probability weighting, portfolio choice

Procedia PDF Downloads 417
2157 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with several restrictions: limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since costs cannot be predicted with certainty, the data are assumed to behave under uncertainty. The problem is first formulated as a bi-objective multi-product economic production quantity model and then solved with three multi-objective decision-making (MODM) methods. The three methods are then compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of both the optimal values of the two objective functions and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 124
2156 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we theoretically analyze the mass transport in a time-periodic electroosmotic flow through a parallel flat-plate microchannel under different periodic waveforms of the applied external electric field. The microchannel connects two reservoirs with different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations for the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations under the Debye-Hückel approximation (zeta potential below 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters controlling the mass transport phenomenon appear: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which yields the electric force for various periodic waveforms of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: (a) sawtooth, (b) step, and (c) an irregular periodic function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were carried out for several binary systems in which two dilute species are transported in the presence of a carrier.
It is observed that there are angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies, called crossover frequencies, are obtained graphically at the intersections of the curves when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel depends strongly on the modulation frequency of the applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
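Locating a crossover frequency numerically can be sketched as root-finding on the difference of two transport-vs-frequency curves. The transport functions below are toy stand-ins for the full oscillatory-flow solution, chosen only so that a single crossing exists; they are not the paper's expressions.

```python
# Bisection sketch for a crossover frequency: the frequency at which two
# species' total mass transport curves intersect (equal transport, so no
# separation). Toy transport curves with a single crossing at omega = 1/sqrt(2).

def transport_a(omega):
    return 1.0 / (1.0 + omega ** 2)        # toy transport of species A

def transport_b(omega):
    return 2.0 / (1.0 + 4.0 * omega ** 2)  # toy transport of species B

def crossover(lo, hi, tol=1e-10):
    """Bisection on transport_a - transport_b over the bracket [lo, hi]."""
    f = lambda w: transport_a(w) - transport_b(w)
    assert f(lo) * f(hi) < 0, "bracket must straddle the crossing"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

omega_c = crossover(0.1, 5.0)
# at omega_c both species are transported equally, so separation vanishes;
# away from omega_c the species separate, which is the design lever in the text
```

In the paper's setting the curves depend on the Schmidt number, the electrokinetic parameter, and the waveform, so each combination yields its own crossover frequency.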

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 212
2155 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with them, or unexpected (the Black Swan effect), without a pre-prepared scenario, requiring operational handling of the crisis situation. The forms, characteristics, behaviour, and utilization of crisis scenarios vary in quality, depending on the real prevention and training processes of the critical infrastructure organization; the aim is always to obtain better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in crisis-situation models of a critical infrastructure organization. The DYVELOP (Dynamic Vector Logistics of Processes) method can identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. To realize this ability, it is first necessary to derive and create an identification algorithm for critical interfaces. The locations of critical interfaces are the flags of a crisis situation in a real critical infrastructure organization. In conclusion, the model of a critical interface is demonstrated for a real Czech energy critical infrastructure subject in a blackout peril environment. The Blazons require a live PowerPoint presentation for better comprehension of this paper's mission.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 406
2154 Identifying and Prioritizing the Factors Affecting Employment in Economic-Sector Jobs: An MADM Approach Using SAW, TOPSIS, and POSET (Ministry of Cooperatives and Social Welfare, Varamin City)

Authors: Mina Rahmani Pour

Abstract:

The negative consequences of unemployment, such as increasing age at marriage, addiction, depression, drug trafficking, divorce, elite emigration, frustration, delinquency, theft, and murder, have made employment an important issue for economic planners, public authorities, and executives under the varying economic conditions of different countries and times. All countries face the problem of unemployment, and by identifying the factors influencing employment and building on existing strengths, basic steps can be taken to reduce it. In this study, 12 variables affecting employment were identified on the basis of interviews, and their effect on increasing employment in three main economic sectors is discussed. A questionnaire was distributed to 8 experts of the ministry; the Shannon entropy technique was used to weight the criteria, and the SAW and TOPSIS methods were used to rank them. Because the rankings produced by these methods were not consistent with each other, the POSET integration techniques (average, Borda, and Copeland) were used to reach a general consensus on the ranking of the criteria. Ultimately, no difference was found among the economic-sector jobs in terms of increased employment.
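The two ranking methods named above can be sketched on a tiny synthetic decision matrix: rows are alternatives (e.g. economic sectors), columns are benefit criteria, and the weights could come from Shannon entropy as in the study. All values below are illustrative only.

```python
# SAW (simple additive weighting) and TOPSIS sketches for benefit criteria.
# SAW: weighted sum of max-normalized scores. TOPSIS: relative closeness to
# the ideal solution after vector normalization.
import math

def saw(matrix, weights):
    """SAW score per alternative: sum of w_j * x_ij / max_i(x_ij)."""
    col_max = [max(row[j] for row in matrix) for j in range(len(weights))]
    return [sum(w * row[j] / col_max[j] for j, w in enumerate(weights))
            for row in matrix]

def topsis(matrix, weights):
    """TOPSIS closeness coefficient (benefit criteria only)."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]
    anti = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

matrix = [[7, 9, 9], [8, 7, 8], [9, 6, 8]]   # 3 alternatives x 3 criteria
weights = [0.4, 0.35, 0.25]
s_scores = saw(matrix, weights)
t_scores = topsis(matrix, weights)
rank_saw = sorted(range(len(matrix)), key=lambda i: -s_scores[i])
rank_top = sorted(range(len(matrix)), key=lambda i: -t_scores[i])
```

On this toy matrix the two rankings happen to agree; when they disagree, as in the study, an integration technique such as POSET (average, Borda, Copeland) can reconcile them into one consensus ranking.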

Keywords: employment, effective techniques, SAW, TOPSIS

Procedia PDF Downloads 231
2153 Comparison Analysis of Safety Culture between Executives and Operators: A Case Study of an Aircraft Manufacturer in Taiwan

Authors: Wen-Chen Hwang, Yu-Hsi Yuan

Abstract:

According to estimates by safety and hygiene researchers, 80% to 90% of workplace accidents in enterprises can be attributed to human factors. Nevertheless, human factors are not the only cause of accidents; their occurrence is also closely associated with the safety culture of the organization. Therefore, the most effective way of reducing the accident rate is to improve the social and organizational factors that influence an organization's safety performance. The aim of the present study is to understand the current level of safety culture in manufacturing enterprises. A tool for evaluating safety culture, matched to the needs and characteristics of manufacturing enterprises, was developed by reviewing the safety culture literature and taking the particular backgrounds of the case enterprises into consideration; expert validity was also applied in developing the questionnaire. A safety culture assessment was then conducted through a practical investigation of the case enterprises. In total, 505 samples were involved: 53 executives and 452 operators. Comparing the safety culture levels of the executives and the operators, significant differences were found in 8 dimensions: safety commitment, safety system, safety training, safety involvement, reward and motivation, communication and reporting, leadership and supervision, and learning and changing. In general, the executives' overall safety culture score was higher than the operators' (M: 74.98 vs. 69.08; t = 2.87; p < 0.01).
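The group comparison reported above is an independent-samples t-test on overall safety culture scores. A sketch using Welch's unequal-variance form (the abstract does not specify the variant, so this is an assumption) with tiny synthetic samples standing in for the study's 53 executives and 452 operators:

```python
# Welch's t-test sketch for an executives-vs-operators score comparison.
# Sample values are synthetic illustrations, not the study's data.
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se2 = va / len(a) + vb / len(b)
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / len(a)) ** 2 / (len(a) - 1)
                     + (vb / len(b)) ** 2 / (len(b) - 1))
    return t, df

executives = [78, 74, 76, 72, 75, 73, 77]
operators = [70, 68, 71, 66, 69, 67, 72, 70]
t_stat, dof = welch_t(executives, operators)
# a positive t_stat means the executives score higher, matching the
# direction reported in the abstract
```

The t statistic is then compared against the t distribution with `dof` degrees of freedom to obtain the p-value.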

Keywords: questionnaire survey, safety culture, t-test, media studies

Procedia PDF Downloads 309
2152 The Continuing Saga of Poverty Reduction and Food Security in the Philippines

Authors: Shienna Marie Esteban

Abstract:

The economic growth experience of the Philippines is one of the fastest in Asia. However, this growth has not yet trickled down to every Filipino, which is evident in the agriculture-dependent population. Moreover, the contribution of the agriculture sector to GDP has been dwindling, while a large share of the labor force remains dependent on this relatively small share of GDP. As a result, poverty incidence has worsened among the rural poor, causing hunger and malnutrition. Existing agricultural policies in the Philippines therefore push for greater food production and productivity to alleviate poverty and food insecurity. Through a review of related literature and the collection and analysis of secondary data from the DA, DBM, BAS - CountrySTAT, PSA, NSCB, PIDS, IRRI, UN-FAO, IFPRI, and the World Bank, among others, the study revealed that the Philippines is still far from its goals of poverty reduction and food security. In addition, the agricultural sector is underperforming, with mediocre productivity growth. The common observation is that this weakness is attributable to failures in the policy and institutional environments of the agriculture sector: the policy environment has failed to create a structure appropriate for the rapid growth of the sector, owing to institutional and governance weaknesses. The recommendation is that institutional and policy reforms, through legislative or executive mandates, be undertaken to improve the implementation and enforcement of existing policies.

Keywords: agriculture, food security, policy, poverty

Procedia PDF Downloads 307
2151 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. The governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the vehicles are calculated for a respondent, thereby determining his/her choice; once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions.
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
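The simulation loop described in phase (3) can be sketched compactly: draw a future EV price, and for each draw let every respondent choose the vehicle with the higher total utility. The respondents' part-worths, the price distribution, and the normalized ICEV price below are all assumed illustrative values, not survey estimates.

```python
# Monte Carlo market-share sketch: the market share is the fraction of
# respondents whose total utility favors the EV, aggregated over many
# draws from an assumed 2030 EV price distribution.
import random

random.seed(42)

# Each respondent: (price part-worth slope, base utility EV, base utility ICEV)
respondents = [(-0.8, 6.0, 4.0), (-1.2, 5.0, 4.5), (-0.5, 3.0, 4.0),
               (-1.0, 7.0, 5.0), (-0.9, 4.0, 4.2)]
icev_price = 1.0  # normalized price units

def total_utility(slope, base, price):
    """Total utility = base part-worths plus a linear price part-worth."""
    return base + slope * price

def simulate_shares(n_draws=10_000):
    shares = []
    for _ in range(n_draws):
        ev_price = random.gauss(1.1, 0.2)  # assumed 2030 price distribution
        chose_ev = sum(
            1 for slope, u_ev, u_icev in respondents
            if total_utility(slope, u_ev, ev_price)
            > total_utility(slope, u_icev, icev_price))
        shares.append(chose_ev / len(respondents))
    return shares

shares = simulate_shares()
mean_share = sum(shares) / len(shares)
```

The output is a distribution of market shares rather than a single number, which mirrors the paper's point that the forecast inherits the uncertainty of the future price.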

Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 66
2150 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between HDI and these factors do not follow a specific pattern or form, so the HDI data for Indonesia can be modeled with nonparametric regression. The estimate of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline. Truncated spline regression is a nonparametric approach based on a modification of segmented polynomial functions, and its estimator is affected by the selection of the optimal knot points, the focal points of the truncated spline functions. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, a truncated spline nonparametric regression model was applied to the Human Development Index data. The best truncated spline regression model for the HDI data in Indonesia was obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors significantly related to HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model is good enough to be applied to the HDI data in Indonesia.
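The truncated-spline-with-GCV idea can be sketched in the simplest case: a linear spline with a single knot k, f(x) = b0 + b1·x + b2·(x − k)₊, fitted by least squares, with the knot chosen to minimize GCV(k) = n·SSE(k)/(n − p)². The data below are synthetic with a true slope change at x = 5; the study itself used multiple predictors and the knot combination 5-5-5-4.

```python
# One-knot linear truncated spline fitted via normal equations, with the knot
# selected by generalized cross validation. Synthetic data, p = 3 parameters.

def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_truncated_spline(xs, ys, knot):
    """Least-squares fit of f(x) = b0 + b1*x + b2*(x - knot)_+ ; returns (beta, SSE)."""
    rows = [[1.0, x, max(x - knot, 0.0)] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    beta = solve3(ata, atb)
    sse = sum((sum(b * v for b, v in zip(beta, r)) - y) ** 2
              for r, y in zip(rows, ys))
    return beta, sse

def gcv_best_knot(xs, ys, candidates, p=3):
    """Pick the knot minimizing GCV(k) = n * SSE(k) / (n - p)^2."""
    n = len(xs)
    return min(candidates,
               key=lambda k: n * fit_truncated_spline(xs, ys, k)[1] / (n - p) ** 2)

xs = [float(i) for i in range(11)]
ys = [2.0 + 1.0 * x + 3.0 * max(x - 5.0, 0.0) for x in xs]  # slope change at 5
best_knot = gcv_best_knot(xs, ys, candidates=[2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
```

GCV is smallest at the true knot because the misplaced-knot fits leave systematic residuals, which is the same selection logic the study applies per predictor.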

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 334
2149 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and combinatorial algebraic structure of R. It is further aimed to study the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.

Keywords: integral functions, q-extensions, q-numbers of metric space, algebraic structure of R and Banach space

Procedia PDF Downloads 576
2148 Integration of STEM Education in Quebec, Canada – Challenges and Opportunities

Authors: B. El Fadil, R. Najar

Abstract:

STEM education is promoted by many scholars and curricula around the world, but it is not yet well established in the province of Quebec in Canada. In addition, effective instructional STEM activities and design methods are required to ensure that students and teachers' needs are being met. One potential method is the Engineering Design Process (EDP), a methodology that emphasizes the importance of creativity and collaboration in problem-solving strategies. This article reports on a case study that focused on using the EDP to develop instructional materials by means of making a technological artifact to teach mathematical variables and functions at the secondary level. The five iterative stages of the EDP (design, make, test, infer, and iterate) were integrated into the development of the course materials. Data was collected from different sources: pre- and post-questionnaires, as well as a working document dealing with pupils' understanding based on designing, making, testing, and simulating. Twenty-four grade seven (13 years old) students in Northern Quebec participated in the study. The findings of this study indicate that STEM activities have a positive impact not only on students' engagement in classroom activities but also on learning new mathematical concepts. Furthermore, STEM-focused activities have a significant effect on problem-solving skills development in an interdisciplinary approach. Based on the study's results, we can conclude, inter alia, that teachers should integrate STEM activities into their teaching practices to increase learning outcomes and attach more importance to STEM-focused activities to develop students' reflective thinking and hands-on skills.

Keywords: engineering design process, motivation, stem, integration, variables, functions

Procedia PDF Downloads 85
2147 [Keynote Talk]: Applying p-Balanced Energy Technique to Solve Liouville-Type Problems in Calculus

Authors: Lina Wu, Ye Li, Jia Liu

Abstract:

We are interested in solving Liouville-type problems to explore constancy properties for maps or differential forms on Riemannian manifolds. Geometric structures on manifolds, the existence of constancy properties for maps or differential forms, and energy growth for maps or differential forms are intertwined. In this article, we concentrate on discovery of solutions to Liouville-type problems where manifolds are Euclidean spaces (i.e. flat Riemannian manifolds) and maps become real-valued functions. Liouville-type results of vanishing properties for functions are obtained. The original work in our research findings is to extend the q-energy for a function from finite in Lq space to infinite in non-Lq space by applying p-balanced technique where q = p = 2. Calculation skills such as Hölder's Inequality and Tests for Series have been used to evaluate limits and integrations for function energy. Calculation ideas and computational techniques for solving Liouville-type problems shown in this article, which are utilized in Euclidean spaces, can be universalized as a successful algorithm, which works for both maps and differential forms on Riemannian manifolds. This innovative algorithm has a far-reaching impact on research work of solving Liouville-type problems in the general settings involved with infinite energy. The p-balanced technique in this algorithm provides a clue to success on the road of q-energy extension from finite to infinite.

Keywords: differential forms, Hölder inequality, Liouville-type problems, p-balanced growth, p-harmonic maps, q-energy growth, tests for series

Procedia PDF Downloads 228
2146 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics; it compiles and runs on a wide variety of UNIX platforms, Windows, and MacOS. The R programming environment is a widely used open source system for statistical analysis and statistical programming, and it includes thousands of functions for the implementation of both standard and new statistical methods, without limiting the user to these functions alone. The program has many benefits over similar programs: it is free and, as an open source project, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure or reliability in a new way, using statistics. Another advantage of this calculation is that no technical details are needed; it can be applied to any component for which we need to know the time to failure in order to schedule appropriate maintenance, maximize usage, and minimize costs. In this case, the calculations were made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans; the ultimate goal was to decide whether or not to replace the working fans with higher-quality fans to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status.
Hours show the time of each fan working and status shows the event: 1- failed, 0- censored data. Censored data represent cases when we cannot track the specific case, so it could fail or success. Gaining the result by using R was easy and quick. The program will take into consideration censored data and include this into the results. This is not so easy in hand calculation. For the purpose of the paper results from R program have been compared to hand calculations in two different cases: censored data taken as a failure and censored data taken as a success. In all three cases, results are significantly different. If user decides to use the R for further calculations, it will give more precise results with work on censored data than the hand calculation.
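The Kaplan-Meier estimator is the standard way right-censored failure data of this kind are handled, and it shows exactly why censored fans cannot simply be counted as failures or successes. A minimal sketch in Python; the fan hours below are hypothetical stand-ins, not the study's actual 70-generator data:

```python
# Kaplan-Meier reliability estimate handling right-censored data.
# These hours/status pairs are illustrative, not the study's dataset.
hours  = [450, 1150, 1150, 1600, 2070, 2070, 3100, 3450, 4600, 6100]
status = [1,   0,    1,    1,    0,    1,    1,    0,    1,    0]   # 1=failed, 0=censored

def kaplan_meier(hours, status):
    """Return (time, reliability) pairs. Censored cases shrink the
    risk set without counting as failures, which is the step that
    hand calculations treating them as failures or successes miss."""
    data = sorted(zip(hours, status))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for h, s in data if h == t and s == 1)
        ties = sum(1 for h, s in data if h == t)
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= ties
        i += ties
    return curve

for t, s in kaplan_meier(hours, status):
    print(f"R({t} h) = {s:.3f}")
```

In R itself the same estimate is one line with the survival package: `survfit(Surv(hours, status) ~ 1)`.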

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 396
2145 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas

Authors: Sheetal Sharma

Abstract:

Runoff, decreasing water levels and recharge in urban areas have become a complex issue nowadays, with defective urban design and increasing population pointed to as causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land-use planning deals with the transformation of natural areas into developed ones, which leads to changes in the natural environment. Detailed knowledge of the relationship between existing patterns of land use-land cover and recharge, with respect to the underlying soil, lags far behind the speed of development, and the incompatibilities between urban functions and the functions of the natural environment are multiplying. Changes in land patterns due to built-up areas, pavements, roads and similar land cover seriously affect surface water flow; they also change the permeability and absorption characteristics of the soil. Urban planners need to understand natural processes alongside modern means and the best available technologies, as there is a huge gap between basic knowledge of natural processes and its application in balanced development planning with minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and AutoCAD Civil 3D. The variations were then used in computer modeling with the Storm Water Management Model to determine the runoff for various soil groups, and the resulting recharge was checked against observed water levels in POW data for the last 40 years in the study area. The results were analyzed again to find the best correlations for sustainable recharge in urban areas.
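A common way to estimate runoff for different hydrologic soil groups, of the kind fed into storm-water models, is the SCS curve-number method. A minimal sketch, with illustrative curve numbers (the study's actual parameters are not given in the abstract):

```python
# SCS curve-number runoff (inches) for one storm over different
# hydrologic soil groups; the CN values per land cover are illustrative.
def scs_runoff(p_in, cn):
    """Runoff depth Q for rainfall P (inches) and curve number CN."""
    s = 1000.0 / cn - 10.0          # potential maximum retention
    ia = 0.2 * s                    # initial abstraction
    if p_in <= ia:
        return 0.0                  # all rainfall absorbed, no runoff
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

# Same 3-inch storm over increasingly impervious / tighter covers:
covers = {"open space, soil group A": 39,
          "open space, soil group D": 80,
          "paved / built-up":         98}
for cover, cn in covers.items():
    print(f"{cover:26s} CN={cn}: Q = {scs_runoff(3.0, cn):.2f} in")
```

The comparison makes the abstract's point concrete: the same storm that produces no runoff on permeable open space runs off almost entirely from paved cover, which is what drives the recharge loss.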

Keywords: geology, runoff, urban planning, land use-land cover

Procedia PDF Downloads 313
2144 A Study Concerning Foreign Worker Migration in Thailand

Authors: Napatsorn Suput-Anyaporn

Abstract:

This paper aimed to investigate the multilateral relationships across factors that included labor shortage, trade unions, employee turnover rate, labor law and regulation, and the effectiveness of foreign worker administration, within the scope of foreign workers in the industrial manufacturing sector of Thailand. The research employed both quantitative and qualitative approaches: foreign workers from Myanmar, Laos and Cambodia working in the industrial manufacturing sector in selected areas of Thailand were sampled for the quantitative data collection, while persons at the chief executive and supervisor levels, and academics working on foreign worker issues, were selected as the sample for the qualitative data collection. A questionnaire, in-depth interviews and a focus group were thus utilized as the research tools. The discussion emphasizes that Thailand should design more effective laws and regulations for foreign worker administration and management in preparation for the coming ASEAN Economic Community, with its declared free-flow labor movement policy.

Keywords: industrial manufacturing sector, labor law and regulation, labor shortage, migrant worker, trade union, turnover rate of employee

Procedia PDF Downloads 405
2143 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve the smoothness of the data; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine to give the grey support vector machine model (SGM-SVM). Before establishing the model, trigonometric functions and the accumulated generating operation are used to preprocess the data, enhancing its smoothness and weakening its randomness; a support vector machine (SVM) prediction model is then built on the preprocessed data, with the model parameters selected by a genetic algorithm performing a global search for the optimum. Finally, the data are restored through the regressive (inverse accumulated) generating operation to obtain the forecast. To show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model was used to predict battery life, and the predicted result was compared with those of the grey model and the support vector machine. For a more intuitive comparison of the three models, the paper reports the root mean square error of each; the results show that the grey support vector machine (SGM-SVM) predicts life best, with a root mean square error of only 3.18%.
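The grey core of the pipeline, a GM(1,1) model fitted on the accumulated series and restored by regressive generation, can be sketched as follows. This omits the paper's trigonometric smoothing and SVM stage, and the capacity series is illustrative:

```python
import math

# Minimal GM(1,1) grey forecasting sketch (no trigonometric
# smoothing, no SVM stage); the input series is illustrative.
def gm11(x0, horizon=2):
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # 1-AGO accumulation
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)] # background values
    # Least-squares fit of x0[k] = -a*z1[k] + b
    m = n - 1
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy, szy = sum(x0[1:]), sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # fitted accumulated series
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # Regressive generation (1-IAGO) restores the original scale
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + horizon)]

capacity = [1.86, 1.84, 1.81, 1.79, 1.76]   # e.g. battery capacity fade
print(gm11(capacity))
```

In the paper's full method, the SVM would then be trained on the smoothed, accumulated series instead of the closed-form exponential fit used here.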

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 456
2142 Simulation-Based Control Module for Offshore Single Point Mooring System

Authors: Daehyun Baek, Seungmin Lee, Minju Kim, Jangik Park, Hyeong-Soon Moon

Abstract:

SPM (Single Point Mooring) is a mooring buoy facility installed off a coast near an oil and gas terminal that cannot berth FPSOs or large oil tankers with high draft because of geometrical limitations. Loading and unloading of crude oil and gas through a subsea pipeline can be carried out between the mooring buoy, ships and onshore facilities. SPM is a standalone offshore system that has to withstand harsh marine conditions such as high wind and strong current, so it is required to have high stability, reliability and durability. It is also an integrated system comprising power management, high-pressure valve control, sophisticated hardware/software and a long-distance communication system. In order to secure the required functions of the SPM system, a simulation model of the integrated system was developed using the MATLAB Simulink and Stateflow tools. The developed model consists of the hydraulic system for opening and closing the PLEM (Pipeline End Manifold) valves and the control system logic. To verify the functions of the model, an integrated simulation model covering all SPM subsystems was also developed, taking into account the handshaking variables between the individual systems. In addition to the dynamic model, a self-diagnostic function to detect system failures was configured, which enables the SPM system itself to alert users as soon as a failure signal arises. The SPM system can be controlled and monitored remotely through an HMI system, which was achieved by building a communication environment between the SPM system and the HMI system.
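The self-diagnostic idea (a valve command must be confirmed by feedback within a timeout, otherwise an alarm is raised for the HMI) can be sketched as a small state machine. The paper's model is built in Simulink/Stateflow, so the Python class, states and names below are purely illustrative:

```python
# Toy state machine mirroring the self-diagnostic function: a PLEM
# valve command must be confirmed by limit-switch feedback within a
# timeout, else a fault alarm is raised for the HMI. All names and
# the 30 s timeout are illustrative, not from the paper's model.
class PlemValve:
    def __init__(self, timeout_s=30):
        self.state = "CLOSED"
        self.timeout_s = timeout_s
        self.alarm = None

    def command_open(self):
        self.state = "OPENING"

    def feedback(self, limit_switch_open, elapsed_s):
        """Handshaking step: confirm the command or diagnose a failure."""
        if self.state != "OPENING":
            return self.state
        if limit_switch_open:
            self.state = "OPEN"
        elif elapsed_s > self.timeout_s:
            self.state = "FAULT"
            self.alarm = "PLEM valve failed to confirm open"
        return self.state

valve = PlemValve()
valve.command_open()
print(valve.feedback(limit_switch_open=False, elapsed_s=45))
```

The same confirm-or-alarm pattern applies to each handshaking variable between the subsystems and the HMI.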

Keywords: HMI system, mooring buoy, simulink simulation model, single point mooring, stateflow

Procedia PDF Downloads 415
2141 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or mean wind speed. Vulnerability models can be integrated with the wind hazard, which associates a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind; this has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. Indeed, in the wind engineering literature it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building's damage state given the damage state of each component, together with a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure; however, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
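The combination of component fragilities under a simplifying assumption can be sketched numerically. The example below uses one of the simplest possible rules (a series system: the building is damaged if any of the four components is, with components assumed independent) and lognormal fragilities with invented medians and dispersions; it is not ERMESS's actual rule or data:

```python
import math

def frag(v, median, beta):
    """Lognormal wind fragility: P(component damage | gust speed v, m/s)."""
    return 0.5 * (1.0 + math.erf(math.log(v / median) / (beta * math.sqrt(2.0))))

# Hypothetical median capacities (m/s) and dispersions for the four
# components of the proof-of-concept example.
components = {
    "roof covering":    (45.0, 0.30),
    "roof structure":   (60.0, 0.25),
    "envelope wall":    (70.0, 0.25),
    "envelope opening": (50.0, 0.35),
}

def p_building_damage(v):
    """Series-system combination: building damaged if any component is,
    components independent -- one simplifying assumption among many."""
    survive = 1.0
    for median, beta in components.values():
        survive *= 1.0 - frag(v, median, beta)
    return 1.0 - survive

print(f"P(building damage | 50 m/s gust) = {p_building_damage(50.0):.3f}")
```

Sweeping the gust speed v yields a building-level vulnerability-style curve from the component database, which is the kind of shortcut the prototype explores in place of full simulation.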

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 179
2140 Prioritization in Modern Portfolio Management: An Action Design Research Approach to Method Development for Scaled Agility

Authors: Jan-Philipp Schiele, Karsten Schlinkmeier

Abstract:

Allocation of scarce resources is a core process of traditional project portfolio management. However, with the popularity of agile methodology, established concepts and methods of portfolio management are reaching their limits and need to be adapted. Consequently, the question arises of how the process of resource allocation can be managed appropriately in scaled agile environments. The prevailing framework SAFe offers Weighted Shortest Job First (WSJF) as a prioritization technique, but established companies are still looking for methodical adaptations to apply WSJF for portfolio prioritization in a more goal-oriented way, aligned with their needs in practice. In this paper, the problem of prioritization in portfolios is conceptualized from the perspective of coordination and the related mechanisms that support resource allocation. Further, an Action Design Research (ADR) project with case studies in a finance company is outlined to develop a practically applicable yet scientifically sound prioritization method based on coordination theory. The ADR project will be flanked by consortium research with various practitioners from the financial and insurance industry. Preliminary design requirements indicate that the use of a feedback loop leads to better coordination between team and executive levels in the prioritization process.
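WSJF itself is simple arithmetic: cost of delay (user/business value plus time criticality plus risk reduction or opportunity enablement) divided by job size. A sketch with an invented backlog:

```python
# Weighted Shortest Job First: cost of delay / job size.
# Backlog items and Fibonacci-style scores are invented for illustration.
def wsjf(user_value, time_criticality, risk_reduction, job_size):
    cost_of_delay = user_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

backlog = {  # (user/business value, time criticality, RR|OE, job size)
    "regulatory report": (8, 13, 5, 5),
    "checkout redesign": (13, 5, 3, 13),
    "logging cleanup":   (2, 1, 8, 3),
}
ranked = sorted(backlog, key=lambda k: wsjf(*backlog[k]), reverse=True)
for item in ranked:
    print(f"{item}: WSJF = {wsjf(*backlog[item]):.2f}")
```

Note how the small "logging cleanup" job outranks the high-value but large redesign; that divide-by-size effect is exactly where practitioners look for goal-oriented adaptations.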

Keywords: scaled agility, portfolio management, prioritization, business-IT alignment

Procedia PDF Downloads 193
2139 Functional Aspects of Carbonic Anhydrase

Authors: Bashistha Kumar Kanth, Seung Pil Pack

Abstract:

Carbonic anhydrase (CA) is ubiquitously distributed in organisms and is fundamental to many eukaryotic biological processes such as photosynthesis, respiration, CO2 and ion transport, calcification and acid-base balance. CA also occurs across the spectrum of prokaryotic metabolism, in both the archaeal and bacterial domains, and many individual species contain more than one CA class. In this review, the various roles of CA in cellular mechanisms are presented in order to identify CA functions applicable to industrial use.

Keywords: carbonic anhydrase, mechanism, CO2 sequestration, respiration

Procedia PDF Downloads 487
2138 Strengthening Regulation and Supervision of Microfinance Sector for Development in Ethiopia

Authors: Megersa Dugasa Fite

Abstract:

This paper analyses regulatory and supervisory issues in the Ethiopian microfinance sector, which caters to the needs of those excluded from the formal financial sector. Microfinance has received increased importance in development because of its grand goal of giving credit to the poor to raise their economic and social well-being and improve their quality of life. Microfinance is at present moving into a credit-plus period, covering savings and insurance functions as well. It thus helps to reduce financial exclusion and social segregation, alleviate poverty and, consequently, stimulate development. Ethiopian microfinance policy has been generally positive and developmental, but major regulatory and supervisory limitations are disappointing: the absolute prohibition on NGOs participating in microcredit functions, higher risks for depositors of microfinance institutions, the lack of credit information services with research and development, unmet demand, and risks of market failure due to over-regulation. Therefore, to remove the limited reach and the severe problems typical of informal financial intermediation, and to address the failure of formal banks to provide basic financial services to a significant portion of the country's population, more needs to be done on microfinance. Certain key regulatory and supervisory revisions should therefore be undertaken to strengthen the Ethiopian microfinance sector so that it can give the majority of the poor practical access to a range of high-quality financial services that help them work their way out of poverty and the incapacity it imposes.

Keywords: micro-finance, micro-finance regulation and supervision, micro-finance institutions, financial access, social segregation, poverty alleviation, development, Ethiopia

Procedia PDF Downloads 387
2137 Decision Support System Based On GIS and MCDM to Identify Land Suitability for Agriculture

Authors: Abdelkader Mendas

Abstract:

The integration of multicriteria decision-making (MCDM) approaches into a geographical information system (GIS) provides a powerful spatial decision support system that can efficiently produce land suitability maps for agriculture. GIS is a powerful tool for analyzing spatial data and establishing a decision support process, while MCDM methods, thanks to their spatial aggregation functions, can facilitate decision making in situations where several solutions are available, various criteria have to be taken into account and decision-makers are in conflict. The parameters and the classification system used in this work are inspired by the FAO (Food and Agriculture Organization) approach to sustainable agriculture. The main purpose of this research is to propose a conceptual and methodological framework for combining GIS and multicriteria methods in a single coherent system that covers the whole process, from the acquisition of spatially referenced data to decision-making. In this context, a spatial decision support system for producing land suitability maps for agriculture has been developed: the algorithm of the multicriteria analysis method ELECTRE Tri (ELimination Et Choix Traduisant la REalité) is incorporated into a GIS environment and added to the other analysis functions of the GIS. The approach has been tested on an area in Algeria, producing a land suitability map for durum wheat. The results show that the ELECTRE Tri method, integrated into a GIS, is well suited to the problem of land suitability for agriculture, and the coherence of the obtained maps confirms the system's effectiveness.
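A heavily simplified sketch of ELECTRE Tri-style sorting applied per map cell: concordance only (no discordance thresholds or veto), pessimistic assignment against category profiles. The criteria, weights, profiles and cutting level below are invented for illustration, not the study's FAO-derived parameters:

```python
# Simplified ELECTRE Tri-style assignment of a land parcel to a
# suitability category. All criteria are normalized to [0, 1] and
# treated as "larger is better"; weights/profiles are illustrative.
weights = {"soil depth": 0.3, "slope": 0.2, "drainage": 0.25, "pH": 0.25}
profiles = {  # category lower-bound profiles, ordered best -> worst
    "highly suitable":     {"soil depth": 0.8, "slope": 0.7, "drainage": 0.8, "pH": 0.7},
    "moderately suitable": {"soil depth": 0.5, "slope": 0.4, "drainage": 0.5, "pH": 0.4},
    "marginally suitable": {"soil depth": 0.0, "slope": 0.0, "drainage": 0.0, "pH": 0.0},
}

def concordance(parcel, profile):
    """Total weight of criteria on which the parcel meets the profile."""
    return sum(w for c, w in weights.items() if parcel[c] >= profile[c])

def assign(parcel, lam=0.75):
    """Pessimistic assignment: first (best) profile the parcel outranks."""
    for category, profile in profiles.items():
        if concordance(parcel, profile) >= lam:
            return category
    return "not suitable"

parcel = {"soil depth": 0.9, "slope": 0.5, "drainage": 0.85, "pH": 0.75}
print(assign(parcel))
```

In the full system this assignment would run for every raster cell, with the category map rendered by the GIS layer functions.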

Keywords: multicriteria decision analysis, decision support system, geographical information system, land suitability for agriculture

Procedia PDF Downloads 633
2136 Western and Eastern Ways of Special Warfare: The Strategic History of Special Operations from Western and Eastern Sources

Authors: Adam Kok Wey Leong

Abstract:

Special operations are supposedly a new way of irregular warfare officially formed during World War II: the famous British Special Operations Executive (SOE) and the American Office of Strategic Services (OSS), the forerunner of the modern-day CIA, were born in that war. These special operations units were tasked with sabotage and subversion behind enemy lines, placing great importance on fifth-column activities and support for resistance movements. This points to a paradoxical claim that modern special operations are a product of Western military innovation employing Eastern ways of 'ungentlemanly' warfare. The claim does not withstand scrutiny, as special operations had been well practised both by ancient Western empires such as the Greeks and Romans, and around the same time in the East, for example in China and Japan. This paper first describes the practice of special operations in Western military history, drawing on the Greeks during the Peloponnesian War. It then highlights the similar practice of special operations by the Near Eastern Assassins and by Eastern militaries, using examples from the Chinese and the Japanese. The paper propounds that special operations, and ways of warfare as a whole, have no cultural or geographical divide but are practised very similarly by men from all over the world: ideas of fighting, killing and ultimately winning a war have similar undertones, namely attempts to win economically and in the least time.

Keywords: special operations, strategic culture, ways of warfare, Sun Tzu, Frontinus

Procedia PDF Downloads 466
2135 Spatiotemporal Variation Characteristics of Soil pH around the Balikesir City, Turkey

Authors: Çağan Alevkayali, Şermin Tağil

Abstract:

Determining the surface distribution of soil pH in urban areas is essential for sustainable development. Soil properties change as a result of agricultural, industrial and other urban functions. Soil pH matters for its effect on soil productivity, which rests on a sensitive and complex relation between plant and soil; furthermore, the spatial variability of soil reaction is a necessary measure of the effects of urbanization. The objective of this study was to explore the spatial variation of soil pH, and the influence of human land use on it, around Balikesir City, using data for 2015 and geographic information systems (GIS). Soil samples were taken from 40 different locations, collected by systematic random sampling from pits at 0-20 cm depth, because anthropogenic pollutants accumulate in the upper soil layers. The study area was divided into a 750 x 750 m grid. GPS was used to determine the sampling locations, and the inverse distance weighting (IDW) interpolation technique was used to analyze the spatial distribution of pH in the study area and to predict values at unsampled places from the values at sampled ones. Natural soil acidity and alkalinity depend on the interaction between climate, vegetation and soil geological properties; analyzing soil pH, however, also serves as an indirect way to evaluate soil pollution caused by urbanization and industrialization. The results showed that soil pH around Balikesir City was generally neutral, with values between 6.5 and 7.0, although slight deviations appeared around open dump areas and the small industrial sites. The results of this study can indicate important soil problems, and the data can be used by ecologists, planners and managers to protect soil resources around Balikesir City.
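IDW itself is a distance-weighted average with weights 1/d^p. A minimal sketch; the coordinates and pH values are illustrative, laid out on the study's 750 m grid spacing:

```python
# Inverse Distance Weighting interpolation of soil pH at an unsampled
# point from nearby samples; sample coordinates/values are illustrative.
def idw(x, y, samples, power=2):
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return value                  # exactly on a sample point
        w = 1.0 / d2 ** (power / 2)       # weight = 1 / distance^power
        num += w * value
        den += w
    return num / den

# (x, y, pH) at four grid nodes 750 m apart
samples = [(0, 0, 6.8), (750, 0, 6.5), (0, 750, 7.0), (750, 750, 6.6)]
print(f"estimated pH at grid centre: {idw(375, 375, samples):.2f}")
```

At the grid centre all samples are equidistant, so the estimate reduces to their plain mean; closer to a dump site's sample point, that sample dominates.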

Keywords: Balikesir, IDW, GIS, spatial variability, soil pH, urbanization

Procedia PDF Downloads 320
2134 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks

Authors: Alaa Allakany, Koji Okamura

Abstract:

Multicast is an efficient and scalable technology for distributing data while optimizing network resources. In an IP network, however, responsibility for managing multicast groups is distributed among the network routers, which causes limitations such as delays in processing group events, high bandwidth consumption and redundant tree calculation. Software-defined networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: in SDN the control plane and data plane are separated by shifting control and management to a remote centralized controller, with the routers acting only as forwarders. In this paper we propose a fast switching mechanism for handling link failures in the multicast tree, based on the Tabu Search heuristic algorithm and on modifying the OpenFlow switch functions so that a switch fails over quickly to the backup subtree rather than first reporting to the controller. We implement a multicasting OpenFlow controller, the core of our approach, which is responsible for (1) constructing the multicast tree and (2) handling multicast group events and maintaining multicast state; we also modify the OpenFlow switch functions for fast switching to backup paths. Forwarders forward multicast packets according to the multicast routing entries generated by the centralized controller. Tabu Search is used as the heuristic for constructing a near-optimal multicast tree and for keeping the tree near optimal when members join or leave the multicast group (group events).
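The local fast-switching idea (each switch holds precomputed backup next-hops per multicast group, so a link failure triggers a local switch-over instead of a controller round trip) can be sketched as follows. The topology, group address and port numbers are illustrative:

```python
# Sketch of local fast failover for a multicast tree: the controller
# pre-installs backup out-ports per (switch, group), and the switch
# splices them in when a primary link goes down -- no controller
# round trip on the data path. Switch/group/ports are illustrative.
flow_table = {
    # (switch, group): primary and backup output port sets
    ("s1", "239.1.1.1"): {"primary": [2, 3], "backup": [4]},
}
link_up = {("s1", 2): True, ("s1", 3): True, ("s1", 4): True}

def out_ports(switch, group):
    entry = flow_table[(switch, group)]
    ports = [p for p in entry["primary"] if link_up[(switch, p)]]
    for p in entry["primary"]:
        if not link_up[(switch, p)]:   # local failure detected:
            ports += entry["backup"]   # fail over to the backup subtree
            break                      # controller only re-optimizes later
    return ports

link_up[("s1", 3)] = False             # link on port 3 goes down
print(out_ports("s1", "239.1.1.1"))
```

After the local failover, the controller can re-run Tabu Search in the background to restore a near-optimal tree and push fresh primary/backup entries.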

Keywords: multicast tree, software define networks, tabu search, OpenFlow

Procedia PDF Downloads 259
2133 Selection Criteria in the Spanish Secondary Education Content and Language Integrated Learning (CLIL) Programmes and Their Effect on Code-Switching in CLIL Methodology

Authors: Dembele Dembele, Philippe

Abstract:

Several Second Language Acquisition (SLA) studies have stressed the benefits of Content and Language Integrated Learning (CLIL) and shown how CLIL students outperformed their non-CLIL counterparts in many L2 skills. However, numerous experimental CLIL programs seem to have mainly targeted above-average and rather highly motivated language learners. The need to understand the impact of the student’s language proficiency on code-switching in CLIL instruction motivated this study. Therefore, determining the implications of the students’ low-language proficiency for CLIL methodology, as well as the frequency with which CLIL teachers use the main pedagogical functions of code-switching, seemed crucial for a Spanish CLIL instruction on a large scale. In the mixed-method approach adopted, ten face-to-face interviews were conducted in nine Valencian public secondary education schools, while over 30 CLIL teachers also contributed with their experience in two online survey questionnaires. The results showed the crucial role language proficiency plays in the Valencian CLIL/Plurilingual selection criteria. The presence of a substantial number of low-language proficient students in CLIL groups, which in turn implied important methodological consequences, was another finding of the study. Indeed, though the pedagogical use of L1 was confirmed as an extended practice among CLIL teachers, more than half of the participants perceived that code-switching impaired attaining their CLIL lesson objectives. Therein, the dissertation highlights the need for more extensive empirical research on how code-switching could prove beneficial in CLIL instruction involving low-language proficient students while maintaining the maximum possible exposure to the target language.

Keywords: CLIL methodology, low language proficiency, code-switching, selection criteria, code-switching functions

Procedia PDF Downloads 76