Search results for: instrumental variable estimation
2991 New Technique of Estimation of Charge Carrier Density of Nanomaterials from Thermionic Emission Data
Authors: Dilip K. De, Olukunle C. Olawole, Emmanuel S. Joel, Moses Emetere
Abstract:
A good number of electronic properties, such as electrical and thermal conductivity, depend on the charge carrier densities of nanomaterials. By controlling the charge carrier densities during the fabrication (or growth) process, these physical properties can be tuned. In this paper, we discuss a new technique for estimating the charge carrier densities of nanomaterials from thermionic emission data using the newly modified Richardson-Dushman equation. We find that the technique yields excellent results for graphene and carbon nanotubes.
Keywords: charge carrier density, nanomaterials, new technique, thermionic emission
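The abstract does not reproduce the modified Richardson-Dushman equation, so as a rough illustration only, the sketch below fits the classical form J = A T^2 exp(-W/(k_B T)) to synthetic emission data by linearizing ln(J/T^2) against 1/T; the constants A_true and W_true are arbitrary choices, not values from the paper.

```python
import numpy as np

# Classical Richardson-Dushman equation: J = A * T^2 * exp(-W / (k_B * T)).
# Linearizing, ln(J / T^2) = ln(A) - W / (k_B * T), so a straight-line fit of
# ln(J / T^2) against 1/T yields the work function W (from the slope) and the
# constant A (from the intercept).
K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def fit_richardson_dushman(T, J):
    """Return (A, W) from thermionic emission data via a linear fit."""
    x = 1.0 / T
    y = np.log(J / T**2)
    slope, intercept = np.polyfit(x, y, 1)
    W = -slope * K_B          # work function in eV
    A = np.exp(intercept)     # pre-exponential constant
    return A, W

# Synthetic emission data for a hypothetical emitter (values are arbitrary).
T = np.linspace(1500.0, 2500.0, 20)          # temperatures in K
A_true, W_true = 1.2e6, 4.5                  # A/m^2/K^2, eV
J = A_true * T**2 * np.exp(-W_true / (K_B * T))

A_est, W_est = fit_richardson_dushman(T, J)
print(A_est, W_est)   # recovers the generating constants
```

In practice the fitted slope and intercept would come from measured (T, J) pairs; the paper's carrier-density estimate relies on its modified form of this equation, which the sketch does not attempt.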
Procedia PDF Downloads 325
2990 The Accuracy of Small Firms at Predicting Their Employment
Authors: Javad Nosratabadi
Abstract:
This paper investigates the difference between firms' actual and expected employment, along with the amount of loans invested by them. In addition, it examines the relationship between the amount of loans received by firms and wages. Empirically, using causal-effect estimation and firm-level data from a province in Iran between 2004 and 2011, the results show that there is a range of loan amounts for which firms' expected employment meets their actual employment. In contrast, there is a gap between firms' actual and expected employment for any other loan amount. Furthermore, the results show a positive and significant relationship between the amount of loans invested by firms and wages.
Keywords: expected employment, actual employment, wage, loan
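The abstract does not detail the causal-effect estimator used. Since the search topic is instrumental variable estimation, a minimal two-stage least squares (2SLS) sketch on synthetic firm data may be useful; the variable names, instrument, and coefficients below are illustrative assumptions, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Synthetic firm data: the loan amount is endogenous (correlated with an
# unobserved confounder u that also drives employment). z is an instrument:
# it shifts loans but affects employment only through them.
u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument
loan = 1.0 + 0.8 * z + 0.5 * u + rng.normal(size=n)
employment = 2.0 + 1.5 * loan + 1.0 * u + rng.normal(size=n)  # true effect: 1.5

X = np.column_stack([np.ones(n), loan])

# Naive OLS: biased upward because loan correlates with u.
beta_ols = np.linalg.lstsq(X, employment, rcond=None)[0]

# Two-stage least squares:
Z = np.column_stack([np.ones(n), z])
# Stage 1: project the endogenous regressor on the instrument.
loan_hat = Z @ np.linalg.lstsq(Z, loan, rcond=None)[0]
# Stage 2: regress the outcome on the fitted values.
X_hat = np.column_stack([np.ones(n), loan_hat])
beta_iv = np.linalg.lstsq(X_hat, employment, rcond=None)[0]

print(beta_ols[1], beta_iv[1])  # OLS is biased above 1.5; IV is near 1.5
```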
Procedia PDF Downloads 162
2989 How Envisioning Process Is Constructed: An Exploratory Research Comparing Three International Public Televisions
Authors: Alexandre Bedard, Johane Brunet, Wendellyn Reid
Abstract:
Public television is constantly trying to maintain and develop its audience, and to achieve those goals, it needs a strong and clear vision. Envisioning is a multidimensional process; from a business perspective, it is simultaneously a conduit that orients and fixes the future, an idea that comes before the strategy, and a means by which action is accomplished. Also, vision is often studied in a prescriptive and instrumental manner. Based on our understanding of the literature, we were able to explain how envisioning, as a process, is a creative one; it takes place in the mind and uses wisdom and intelligence through a process of evaluation, analysis and creation. Through an aggregation of the literature, we built a model of the envisioning process, based on past experiences, perceptions and knowledge, and influenced by the context, namely the individual, the organization and the environment. With exploratory research in which vision was deciphered through discourse, using a qualitative and abductive approach and a grounded theory perspective, we explored three extreme cases, with eighteen interviews with experts, leaders, politicians, actors of the industry, etc., and more than twenty hours of interviews in three different countries. We compared the strategy, the business model, and the political and legal forces. We also looked at the history of each industry from an inertial point of view. Our analysis of the data revealed that a legitimacy effect due to the audience, and the innovation and creativity of the institutions, was the cornerstone of what influences the envisioning process. This allowed us to identify how different the process was for the Canadian, French and UK public broadcasters, although we concluded that all three had a socially constructed vision for their future, based on stakeholder management and an emerging role for managers: idea brokers.
Keywords: envisioning process, international comparison, television, vision
Procedia PDF Downloads 135
2988 Determination of a Novel Artificial Sweetener Advantame in Food by Liquid Chromatography Tandem Mass Spectrometry
Authors: Fangyan Li, Lin Min Lee, Hui Zhu Peh, Shoet Harn Chan
Abstract:
Advantame, a derivative of aspartame, is the latest addition to a family of low-calorie, high-potency dipeptide sweeteners that includes aspartame, neotame and alitame. The use of advantame as a high-intensity sweetener in food was first accepted by Food Standards Australia New Zealand in 2011 and subsequently by US and EU food authorities in 2014, with the results from toxicity and exposure studies showing that advantame poses no safety concern to the public at regulated levels. To our knowledge, there is currently barely any detailed information on the analytical determination of advantame in food matrices, except for one report published in Japanese, describing a high performance liquid chromatography (HPLC) and liquid chromatography/mass spectrometry (LC-MS) method with a detection limit at the ppm level. However, the use of acid in sample preparation and instrumental analysis in that report raised doubt over the reliability of the method, as there is indication that the stability of advantame is compromised under acidic conditions. Besides, the method may not be suitable for analyzing food matrices containing advantame at low-ppm or sub-ppm levels. In this presentation, a simple, specific and sensitive method for the determination of advantame in food is described. The method involves extraction with water and clean-up via solid phase extraction (SPE), followed by detection using liquid chromatography tandem mass spectrometry (LC-MS/MS) in negative electrospray ionization mode. No acid was used in the entire procedure. Single-laboratory validation of the method was performed in terms of linearity, precision and accuracy. A low detection limit at the ppb level was achieved. Satisfactory recoveries were obtained using spiked samples at three different concentration levels. This validated method could be used in the routine inspection of advantame levels in food.
Keywords: advantame, food, LC-MS/MS, sweetener
Procedia PDF Downloads 478
2987 Implications of Fulani Herders/Farmers Conflict on the Socio-Economic Development of Nigeria (2000-2018)
Authors: Larry E. Udu, Joseph N. Edeh
Abstract:
Unarguably, land is an indispensable factor of production and has been central to numerous conflicts between crop farmers and herders in Nigeria. The conflicts pose a grave challenge to life and property, food security, and ultimately to the sustainable socio-economic development of the nation. The paper examines the causes of the Fulani herders/farmers conflicts, particularly in the Middle Belt; the number of occurrences and extent of damage; and their socio-economic implications. A content-analytical approach was adopted as the methodology, wherein data was drawn extensively from secondary sources. Findings reveal that the major causes of the conflict are attributable to violation of tradition and laws, trespass, and cultural factors. Consequently, the number of attacks and level of fatality, coupled with the displacement of farmers and destruction of private and public facilities, impacted negatively on farmers' output, with attendant socio-economic implications for the sustainable livelihood of the people and the nation at large. For instance, Mercy Corps (a global humanitarian organization) asserts in its 2013-2016 research that a loss of $14 billion was incurred within 3 years; that if the conflict were resolved, the average affected household could see income increase by at least 64 percent, and potentially 210 percent or higher; and that states affected by the conflicts lost an average of 47 percent of taxes/IGR. The paper therefore recommends strict adherence to grazing laws, a platform for dialogue bordering on compromise where necessary, and encouragement of cattle farmers to build ranches for their cattle according to international standards.
Keywords: conflict, farmers, herders, Nigeria, socio-economic implications
Procedia PDF Downloads 210
2986 Error Estimation for the Reconstruction Algorithm with Fan Beam Geometry
Authors: Nirmal Yadav, Tanuja Srivastava
Abstract:
Shannon theory is an exact method to recover a band-limited signal from its sampled values in a discrete implementation, using sinc interpolators. But sinc-based results are not very satisfactory for band-limited calculations, so convolution with a window function having compact support has been introduced. The convolution backprojection algorithm with a window function is an approximation algorithm. In this paper, the error arising from the approximate nature of the reconstruction algorithm has been calculated. The result is derived for fan beam projection data, which is faster to acquire than parallel beam projection data.
Keywords: computed tomography, convolution backprojection, radon transform, fan beam
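The role of the compact-support window can be illustrated numerically: truncating the sinc interpolator introduces error, which a smooth taper reduces. The sketch below is a plain 1-D illustration of that windowing idea, not the paper's fan-beam error bound, and the sampling rate, tone, and window choice are arbitrary.

```python
import numpy as np

# Reconstruct a band-limited tone from its samples with (a) a truncated sinc
# kernel and (b) the same kernel tapered by a Hann window, then compare the
# maximum reconstruction error on a dense grid.
fs = 8.0                              # sampling rate; tone is well below fs/2
t_s = np.arange(-64, 65) / fs         # sample instants
x_s = np.sin(2 * np.pi * 1.0 * t_s)   # samples of a 1 Hz tone

t = np.linspace(-2.0, 2.0, 401)       # dense evaluation grid (interior region)
x_true = np.sin(2 * np.pi * 1.0 * t)

def reconstruct(t, t_s, x_s, fs, half_width, window):
    """Sum of truncated (optionally Hann-windowed) sinc kernels."""
    x_hat = np.zeros_like(t)
    for tk, xk in zip(t_s, x_s):
        arg = fs * (t - tk)                        # offset in sample intervals
        inside = np.abs(arg) <= half_width
        kern = np.where(inside, np.sinc(arg), 0.0)  # compact support
        if window:                                  # Hann taper over the support
            kern *= np.where(inside,
                             0.5 * (1 + np.cos(np.pi * arg / half_width)), 0.0)
        x_hat += xk * kern
    return x_hat

err_plain = np.max(np.abs(reconstruct(t, t_s, x_s, fs, 8, False) - x_true))
err_hann  = np.max(np.abs(reconstruct(t, t_s, x_s, fs, 8, True)  - x_true))
print(err_plain, err_hann)   # the windowed kernel gives the smaller error
```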
Procedia PDF Downloads 493
2985 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Control
Authors: Man Ying Kang, Joshua Kin Man Nan
Abstract:
The prolonged pandemic has posed alarming public health challenges to various parts of the world, and with face-to-face mental health treatment largely curtailed to control virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proved their effects on fostering mental health for different populations across the globe. This paper aimed to test the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and to explore possible mechanisms of how dispositional mindfulness, stress, and negative affect influenced the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into a mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10-20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and then to practice daily mindfulness exercises for 3 weeks (a total of 21 sessions), whereas those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group needed to fill in the World Health Organization Five Well-Being Index (WHO), the Positive and Negative Affect Schedule (PANAS), the Perceived Stress Scale (PSS), and the Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group only needed to fill in the same scales at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups.
Parallel process latent growth curve modeling analysis was used to explore the longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was group (1 = SAM, 2 = control), the mediator was the FMI slope from T0 to T3, and the moderators were T0 NA and T0 PSS separately. The moderator effects on the WHO slope were explored at different levels, including low T0 NA or T0 PSS (mean - SD), medium T0 NA or T0 PSS (mean), and high T0 NA or T0 PSS (mean + SD). The results found that SAM significantly improved and predicted higher levels of the WHO slope and FMI slope, as well as significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope and partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS as moderators were found to be significant between SAM and the WHO slope and between SAM and the FMI slope, respectively. The conclusion was that SAM was effective in promoting levels of mental wellbeing, positive affect, and dispositional mindfulness, as well as reducing negative affect and stress, in community settings in China. SAM improved wellbeing faster through the faster enhancement of dispositional mindfulness. For participants with medium-to-high negative affect and stress, these factors buffered the therapy effects of SAM on the speed of wellbeing improvement.
Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial
Procedia PDF Downloads 111
2984 Nature of Forest Fragmentation Owing to Human Population along Elevation Gradient in Different Countries in Hindu Kush Himalaya Mountains
Authors: Pulakesh Das, Mukunda Dev Behera, Manchiraju Sri Ramachandra Murthy
Abstract:
Large numbers of people living in and around the Hindu Kush Himalaya (HKH) region depend on this diverse mountainous region for ecosystem services. Following the global trend, this region is also experiencing rapid population growth and demand for timber and agricultural land. The eight countries sharing the HKH region have different forest resource utilization and conservation policies that exert varying forces on the forest ecosystem. This has created a variable spatial as well as altitudinal gradient in the rate of deforestation and corresponding forest patch fragmentation. The quantitative relationship between fragmentation and demography has not been established before for the HKH, particularly along the elevation gradient. The current study was carried out to attribute the overall and differing nature of landscape fragmentation along the altitudinal gradient to the demography of each of the sharing countries. We used tree canopy cover data derived from Landsat data to analyze the deforestation and afforestation rates, and the corresponding landscape fragmentation, observed during 2000-2010. The area-weighted mean radius of gyration (AMN radius of gyration) was computed owing to its advantage as a spatial indicator of fragmentation over non-spatial fragmentation indices. Using the subtraction method, the change in fragmentation during 2000-2010 was computed. Using the tree canopy cover data as a surrogate for forest cover, the highest forest loss was observed in Myanmar, followed by China, India, Bangladesh, Nepal, Pakistan, Bhutan, and Afghanistan. However, the sequence for fragmentation was different: after the maximum fragmentation observed in Myanmar followed India, China, Bangladesh, and Bhutan, whereas an increase in fragmentation was seen in the sequence of Nepal, Pakistan, and Afghanistan. Using the SRTM-derived DEM, we observed a higher rate of fragmentation up to 2400 m, which corroborated the high human population for the years 2000 and 2010.
To derive the nature of fragmentation along the altitudinal gradient, the Statistica software was used, where a user-defined function was fitted by regression using the Gauss-Newton estimation method with 50 iterations. We observed an overall logarithmic decrease in fragmentation change (area-weighted mean radius of gyration), forest cover loss, and population growth during 2000-2010 along the elevation gradient, with very high R2 values (i.e., 0.889, 0.895, and 0.944, respectively). The observed negative logarithmic function, with the major contribution in the initial elevation gradients, suggests gap-filling afforestation in the lower altitudes to enhance forest patch connectivity. Our findings on the pattern of forest fragmentation and human population across the elevation gradient in the HKH region will have policy-level implications for the different nations and will help in characterizing hotspots of change. The availability of free satellite-derived data products on forest cover and DEMs, gridded data on demography, and the utility of geospatial tools helped in the quick evaluation of the forest fragmentation vis-a-vis human impact pattern along the elevation gradient in the HKH.
Keywords: area-weighted mean radius of gyration, fragmentation, human impact, tree canopy cover
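The regression step can be reproduced outside Statistica. The sketch below fits an assumed negative-logarithmic form y = a - b ln(x) to synthetic elevation data with SciPy's nonlinear least-squares fitter (the paper used Gauss-Newton in Statistica; the data, coefficients, and exact functional form here are invented for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_log(x, a, b):
    """Assumed negative-logarithmic trend of fragmentation vs elevation."""
    return a - b * np.log(x)

# Synthetic observations along elevation bins (metres); noise is arbitrary.
elev = np.linspace(100.0, 4800.0, 48)
rng = np.random.default_rng(1)
y = neg_log(elev, 9.0, 1.1) + rng.normal(scale=0.05, size=elev.size)

# Nonlinear least-squares fit (curve_fit returns optimal params + covariance).
(a_hat, b_hat), _ = curve_fit(neg_log, elev, y, p0=(1.0, 1.0))

# Goodness of fit (R^2), analogous to the high R^2 values reported.
resid = y - neg_log(elev, a_hat, b_hat)
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
print(a_hat, b_hat, r2)   # near the generating values, R^2 close to 1
```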
Procedia PDF Downloads 216
2983 Survival Data with Incomplete Missing Categorical Covariates
Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar
Abstract:
Censored survival data with incomplete covariate data are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modelled within the class of generalized linear models, and the method requires the estimation of the parameters of the distribution of the covariates. In this paper, we apply the approach to clinical trial data with five covariates, four of which have some missing values, under full censoring.
Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution
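The EM by the method of weights can be sketched in a simplified, non-survival setting: a logistic outcome with one binary covariate that is partly missing. Each incomplete case is duplicated over the covariate's categories with posterior-probability weights (E-step), and a weighted fit plus a covariate-distribution update is run (M-step). All data and parameter values below are synthetic assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_logistic(x, y, w, iters=25):
    """Newton-Raphson fit of y ~ sigmoid(b0 + b1*x) with case weights w."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = sigmoid(X @ beta)
        grad = X.T @ (w * (y - p))
        H = X.T @ (X * (w * p * (1 - p))[:, None])
        beta += np.linalg.solve(H, grad)
    return beta

# Synthetic "trial": binary covariate x, logistic outcome y, x partly missing.
n = 20_000
x = (rng.random(n) < 0.4).astype(float)                 # true P(x=1) = 0.4
y = (rng.random(n) < sigmoid(-1.0 + 2.0 * x)).astype(float)
miss = rng.random(n) < 0.3                              # 30% of x missing (MCAR)
obs = ~miss

px, beta = 0.5, np.zeros(2)            # initial guesses for P(x=1) and (b0, b1)
for _ in range(50):                    # EM iterations
    # E-step: posterior weight that a missing x equals 1, given its y.
    p1, p0 = sigmoid(beta[0] + beta[1]), sigmoid(beta[0])
    ym = y[miss]
    num = px * p1**ym * (1 - p1)**(1 - ym)
    den = num + (1 - px) * p0**ym * (1 - p0)**(1 - ym)
    w1 = num / den
    # M-step: each incomplete case enters twice, weighted by w1 and 1 - w1.
    x_aug = np.concatenate([x[obs], np.ones(miss.sum()), np.zeros(miss.sum())])
    y_aug = np.concatenate([y[obs], ym, ym])
    w_aug = np.concatenate([np.ones(obs.sum()), w1, 1 - w1])
    beta = weighted_logistic(x_aug, y_aug, w_aug)
    px = np.average(x_aug, weights=w_aug)

print(px, beta)   # close to 0.4 and (-1.0, 2.0)
```

In the paper's setting the M-step would fit a Weibull survival model rather than this logistic regression, but the weighting scheme is the same idea.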
Procedia PDF Downloads 407
2982 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function
Authors: S. B. Provost, Hossein Zareamoghaddam
Abstract:
A univariate density approximation technique is introduced whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function. This approach, which extends Pearson's curve system, is based solely on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of 'big data'. Applications to sets of univariate and bivariate observations will be presented.
Keywords: density estimation, log-density, moments, Pearson's curve system
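The classical special case, where the log-density derivative is a linear-over-quadratic rational function with coefficients determined by the first four central moments, can be sketched as follows; the paper's generalization to higher-order rational functions and its explicit integral representation are not reproduced here.

```python
import numpy as np

def pearson_density(sample, grid):
    """Classical Pearson-system approximant: d/dx log f = -(a+u)/(b0+b1*u+b2*u^2),
    with u = x - mean and coefficients from the first four sample moments."""
    m = sample.mean()
    u = sample - m
    mu2, mu3, mu4 = (u**2).mean(), (u**3).mean(), (u**4).mean()
    g1 = mu3 / mu2**1.5                   # skewness
    beta1, beta2 = g1**2, mu4 / mu2**2    # Pearson's beta coefficients
    d = 10 * beta2 - 12 * beta1 - 18
    a = np.sqrt(mu2) * g1 * (beta2 + 3) / d
    b0 = mu2 * (4 * beta2 - 3 * beta1) / d
    b1, b2 = a, (2 * beta2 - 3 * beta1 - 6) / d
    # Integrate the log-density slope on the grid (trapezoid rule), normalize.
    x = grid - m
    slope = -(a + x) / (b0 + b1 * x + b2 * x**2)
    logf = np.concatenate([[0.0],
        np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(grid))])
    f = np.exp(logf - logf.max())
    area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
    return f / area

# Gaussian data: the normal density is a Pearson member, so the approximant
# computed from sample moments should track the true density closely.
rng = np.random.default_rng(3)
sample = rng.normal(loc=1.0, scale=2.0, size=200_000)
grid = np.linspace(-7.0, 9.0, 801)
f = pearson_density(sample, grid)
true = np.exp(-((grid - 1.0) ** 2) / 8.0) / np.sqrt(8 * np.pi)
print(np.max(np.abs(f - true)))   # small maximum deviation
```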
Procedia PDF Downloads 282
2981 Improving the Quantification Model of Internal Control Impact on Banking Risks
Authors: M. Ndaw, G. Mendy, S. Ouya
Abstract:
Risk management in the banking sector is a key issue linked to financial system stability, and its importance has been elevated by technological developments and the emergence of new financial instruments. In this paper, we improve the model previously defined for quantifying the impact of internal control on banking risks by automating the residual criticality estimation step of FMECA. For this, we defined three equations and a maturity coefficient to obtain a mathematical model, which was tested on all banking processes and types of risk. The new model allows an optimal assessment of residual criticality and improves the correlation rate, which reaches 98%.
Keywords: risk, control, banking, FMECA, criticality
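The abstract does not state the three equations or the maturity coefficient, so the snippet below is a purely hypothetical illustration of the general idea: a classic FMECA criticality scaled down by a control whose effectiveness is discounted by a maturity factor in [0, 1]. None of these formulas are the paper's.

```python
def criticality(severity, occurrence, detectability):
    """Classic FMECA risk priority number on 1-10 scales."""
    return severity * occurrence * detectability

def residual_criticality(initial, control_effect, maturity):
    """Hypothetical residual estimate: the control removes a fraction
    control_effect of the risk, discounted by its maturity in [0, 1]."""
    return initial * (1.0 - control_effect * maturity)

# A risk scored 8/5/4 gives an initial criticality of 160; a 70%-effective
# control at 0.9 maturity would leave a residual criticality of 59.2.
c0 = criticality(severity=8, occurrence=5, detectability=4)
print(c0, residual_criticality(c0, control_effect=0.7, maturity=0.9))
```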
Procedia PDF Downloads 336
2980 Adsorption of Heavy Metals Using Chemically-Modified Tea Leaves
Authors: Phillip Ahn, Bryan Kim
Abstract:
Copper is perhaps the most prevalent heavy metal used in the manufacturing industries, from food additives to metal-mechanic factories. Common methodologies to remove copper are expensive and produce undesired by-products. A good decontaminating candidate should be environmentally friendly, inexpensive, and capable of eliminating low concentrations of the metal. This work proposes chemically modified spent tea leaves of chamomile, peppermint and green tea, in their thiolated, sulfonated and carboxylated forms, as candidates for the removal of copper from solutions. Batch experiments were conducted to maximize the adsorption of copper (II) ions. Effects such as acidity, salinity, adsorbent dose, metal concentration, and the presence of surfactant were explored. Experimental data show that maximum adsorption is reached at neutral pH. The results indicate that Cu(II) can be removed up to 53%, 22% and 19% with the thiolated, carboxylated and sulfonated adsorbents, respectively. Maximum adsorption of copper on TPM (53%) is achieved with 150 mg of adsorbent and decreases in the presence of salts and surfactants. Conversely, the sulfonated and carboxylated adsorbents show better adsorption in the presence of surfactants. Time-dependent experiments show that adsorption is reached in less than 25 min for TCM and 5 min for SCM. Instrumental analyses determined the presence of active functional groups and thermal resistance, and scanning electron microscopy indicated that both adsorbents are promising materials for the selective recovery and treatment of metal ions from wastewaters. Finally, columns were prepared with these adsorbents to explore their application in scaled-up processes, with very positive results. A long-term goal involves the recycling of the exhausted adsorbents and/or their use in the preparation of biofuels due to changes in the materials' structures.
Keywords: heavy metal removal, adsorption, wastewaters, water remediation
Procedia PDF Downloads 291
2979 Cost Overrun Causes in Public Construction Projects in Saudi Arabia
Authors: Ibrahim Mahamid, A. Al-Ghonamy, M. Aichouni
Abstract:
This study was conducted to identify the causes of cost deviations in public construction projects in Saudi Arabia from the contractors' perspective. 41 factors that might affect cost estimating accuracy were identified through a literature review and discussion with construction experts. The factors were tabulated in a questionnaire form, and a field survey including 51 contractors from the Northern Province of Saudi Arabia was performed. The results show that the top five important causes are: wrong estimation method, long period between design and time of implementation, cost of labor, cost of machinery, and absence of construction-cost data.
Keywords: cost deviation, public construction, cost estimating, Saudi Arabia, contractors
Procedia PDF Downloads 480
2978 Active Power Filters and their Smart Grid Integration - Applications for Smart Cities
Authors: Pedro Esteban
Abstract:
Most installations nowadays are exposed to many power quality problems, and they also face numerous challenges in complying with grid code and energy efficiency requirements. The reason behind this is that they were not designed to support the nonlinear, unbalanced, and variable loads and generators that make up a large percentage of modern electric power systems. These problems and challenges become especially critical when designing green buildings and smart cities. They are caused by equipment typically found in these installations, such as variable speed drives (VSD), transformers, lighting, battery chargers, double-conversion UPS (uninterruptible power supply) systems, highly dynamic loads, single-phase loads, fossil fuel generators, and renewable generation sources, to name a few. Moreover, events like capacitor switching (from existing capacitor banks or passive harmonic filters), auto-reclose operations of transmission and distribution lines, or the starting of large motors also contribute to these problems and challenges. Active power filters (APF) are one of the fastest-growing power electronics technologies for solving power quality problems and meeting grid code and energy efficiency requirements for a wide range of segments and applications. They are a high-performance, flexible, compact, modular, and cost-effective type of power electronics solution that provides an instantaneous and effective response in low- or high-voltage electric power systems. They enable longer equipment lifetime, higher process reliability, improved power system capacity and stability, and reduced energy losses, complying with the most demanding power quality and energy efficiency standards and grid codes. Several types of active power filters can be found nowadays, including active harmonic filters (AHF), static var generators (SVG), active load balancers (ALB), hybrid var compensators (HVC), and low harmonic drives (LHD).
All these devices can be used in applications in smart cities, bringing several technical and economic benefits.
Keywords: power quality improvement, energy efficiency, grid code compliance, green buildings, smart cities
Procedia PDF Downloads 115
2977 Memorizing Music and Learning Strategies
Authors: Elisabeth Eder
Abstract:
Memorizing music plays an important role for instrumentalists and has been researched very little so far. Almost every musician is confronted with memorizing music in the course of their musical career. For numerous competitions, examinations (e.g., at universities and music schools), solo performances, and the like, memorization is a requirement. Learners are often required to learn a piece by heart but are rarely given guidance on how to proceed. This was confirmed by Eder's preliminary study examining the topicality and relevance of the subject, in which 111 instrumentalists took part. The preliminary study revealed a great desire for more knowledge about learning strategies, as well as a greater sense of security when performing by heart on stage among those musicians who use learning strategies. Eder's research focuses on learning strategies for memorizing music. As part of a large-scale empirical study, conducted via an online questionnaire translated into 10 languages, 1091 musicians from 64 different countries described how they memorize. The participants in the study also evaluated their learning strategies and justified their choices in terms of their degree of effectiveness. Based on the study and pedagogical literature, 100 learning strategies were identified and categorized; the strategies were examined with regard to their effectiveness, and instrument-specific, age-specific, country-specific, gender-specific, and education-related differences and similarities concerning the choice of learning strategies were investigated. Her research also deals with forms and models of memory and with how music-related information can be stored, retrieved, and forgotten again. A further part is devoted to the possibilities that teachers and learners have to support the process of memorization independently of learning strategies.
The findings resulting from Elisabeth Eder's research should enable musicians and instrumental students to memorize faster and more confidently.
Keywords: memorizing music, learning strategies, empirical study, effectiveness of strategies
Procedia PDF Downloads 43
2976 Adaptive Nonparametric Approach for Guaranteed Real-Time Detection of Targeted Signals in Multichannel Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
An adaptive nonparametric method is proposed for the stable real-time detection of seismoacoustic sources in multichannel C-OTDR systems with a significant number of channels. This method guarantees given upper bounds on the probabilities of Type I and Type II errors. The properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented in this report.
Keywords: guaranteed detection, multichannel monitoring systems, change point, interval estimation, adaptive detection
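The proposed detector itself is not reproduced in the abstract. As a generic illustration of threshold-based change-point detection in a single channel, a one-sided CUSUM statistic with a drift term and an alarm threshold can be sketched; the parameter values are illustrative, and this simple scheme does not carry the paper's guaranteed error bounds.

```python
import numpy as np

def cusum_alarm(x, k=0.5, h=12.0):
    """Return the first index where the one-sided CUSUM statistic exceeds the
    threshold h, or -1 if no alarm is raised. The drift term k suppresses
    accumulation under pure noise, so a high threshold keeps false alarms rare."""
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + v - k)
        if s > h:
            return i
    return -1

rng = np.random.default_rng(5)
quiet = rng.normal(size=2000)                            # no source present
burst = np.concatenate([rng.normal(size=500),
                        rng.normal(loc=1.5, size=200)])  # mean shift at t=500

print(cusum_alarm(quiet), cusum_alarm(burst))  # no alarm; alarm shortly after 500
```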
Procedia PDF Downloads 450
2975 Antecedents and Consequents of Organizational Politics: A Select Study of a Central University
Authors: Poonam Mishra, Shiv Kumar Sharma, Sanjeev Swami
Abstract:
Purpose: The purpose of this paper is to investigate the relationship of perceived organizational politics with three levels of antecedents (i.e., organizational level, work environment level, and individual level) and its consequents simultaneously. The study addresses the antecedents and consequents of perceived political behavior in the higher education sector of India, with specific reference to a central university. Design/Methodology/Approach: A conceptual framework and hypotheses were first developed on the basis of a review of previous studies on organizational politics. A questionnaire was then developed carrying 66 items related to the 8 constructs and the demographic characteristics of respondents. Judgemental sampling was used to select respondents. Primary data were collected through the structured questionnaire from 45 faculty members of a central university. The sample comprises Professors, Associate Professors, and Assistant Professors from various departments of the university. To test the hypotheses, data were analyzed statistically using partial least squares structural equation modeling (PLS-SEM). Findings: Results indicated strong support for the relationship of perceived organizational politics (OP) with three of the four proposed antecedents, namely workforce diversity, relationship conflict, and need for power, with relationship conflict having the strongest impact. No significant relationship was found between role conflict and the perception of organizational politics. The three consequences, that is, intention to turnover, job anxiety, and organizational commitment, are significantly impacted by the perception of organizational politics. Practical Implications: This study will be helpful in motivating future research on improving the quality of higher education in India by reducing the level of the antecedents that add to the perception of organizational politics, which ultimately results in unfavorable outcomes.
Originality/value: Although a large number of studies on the antecedents and consequents of perceived organizational politics have been reported, little attention has been paid to testing all the separate but interdependent relationships simultaneously; in this paper, organizational politics is simultaneously treated as a dependent variable and as an independent variable in the subsequent relationships.
Keywords: organizational politics, workforce diversity, relationship conflict, role conflict, need for power, intention to turnover, job anxiety, organizational commitment
Procedia PDF Downloads 497
2974 Issues in Travel Demand Forecasting
Authors: Huey-Kuo Chen
Abstract:
Travel demand forecasting, comprising four travel choices, i.e., trip generation, trip distribution, modal split, and traffic assignment, constitutes the core of transportation planning. In its current application, travel demand forecasting is associated with three important issues, i.e., interface inconsistencies among the four travel choices, inefficiency of commonly used solution algorithms, and undesirable multiple-path solutions. In this paper, each of the three issues is extensively elaborated. An ideal unified framework for the combined model, consisting of the four travel choices and variable demand functions, is also suggested. A few remarks are then provided at the end of the paper.
Keywords: travel choices, B algorithm, entropy maximization, dynamic traffic assignment
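As an example of one of the four travel choices, the trip distribution step is commonly solved with a doubly-constrained gravity model, T_ij = A_i B_j O_i D_j f(c_ij), whose balancing factors are obtained by iterative proportional fitting (the entropy-maximizing solution echoed in the keywords). The three-zone numbers below are invented for illustration.

```python
import numpy as np

O = np.array([400.0, 300.0, 300.0])   # trip productions per origin zone
D = np.array([500.0, 250.0, 250.0])   # trip attractions per destination zone
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])    # interzonal travel costs (toy values)
f = np.exp(-0.5 * cost)               # deterrence function (assumed form)

# Furness / iterative proportional fitting: alternately rescale rows and
# columns until the trip matrix matches both the origin and destination totals.
T = f.copy()
for _ in range(100):
    T *= (O / T.sum(axis=1))[:, None]  # match row (origin) totals
    T *= (D / T.sum(axis=0))[None, :]  # match column (destination) totals

print(T.round(1))  # rows sum to O, columns sum to D
```

Note that the scheme only converges when total productions equal total attractions (here both are 1000 trips).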
Procedia PDF Downloads 459
2973 Analysis of Structural Modeling on Digital English Learning Strategy Use
Authors: Gyoomi Kim, Jiyoung Bae
Abstract:
The purpose of this study was to propose a framework that verifies the structural relationships among students' use of digital English learning strategies (DELS), affective domains, and individual variables. The study developed a hypothetical model based on previous studies on language learning strategy use as well as digital language learning. The participants were 720 Korean high school students and 430 university students. The instrument was a self-response questionnaire that contained 70 items based on Oxford's Strategy Inventory for Language Learning (SILL), as well as previous studies on language learning strategies in digital learning environments, in order to measure DELS and affective domains. The collected data were analyzed through structural equation modeling (SEM). This study used quantitative data analysis procedures: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). First, the EFA was conducted in order to verify the hypothetical model; the factor analysis was conducted first to identify the underlying relationships between the measured variables of DELS and the affective domain. The hypothetical model was established with six indicators of learning strategies (memory, cognitive, compensation, metacognitive, affective, and social strategies) under the latent variable of the use of DELS. In addition, the model included four indicators (self-confidence, interest, self-regulation, and attitude toward digital learning) under the latent variable of learners' affective domain. Second, the CFA was used to determine the suitability of the data and the research model, so all data from the present study were used to assess model fit.
Lastly, the model also included individual learner factors as covariates; the five constructs selected were learners’ gender, level of English proficiency, duration of English learning, period of using digital devices, and previous experience of digital English learning. The results of the SEM analysis yielded a theoretical model that shows the structural relationships between Korean students’ use of DELS and their affective domains. Therefore, the results of this study help ESL/EFL teachers understand how learners use and develop appropriate learning strategies in digital learning contexts. Pedagogical implications and suggestions for further study are also presented.Keywords: Digital English Learning Strategy, DELS, individual variables, learners' affective domains, Structural Equation Modeling, SEM
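As a minimal sketch of the EFA step described above, assuming synthetic questionnaire responses in place of the actual SILL-based data (two latent factors stand in for strategy use and the affective domain, and the loading matrix is invented):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students = 720                          # mirrors the high school sample size

# Hypothetical two-factor structure: items 1-2 load on factor 1,
# items 3-4 on factor 2, plus measurement noise.
latent = rng.normal(size=(n_students, 2))
loadings = np.array([[0.9, 0.1],
                     [0.8, 0.2],
                     [0.1, 0.9],
                     [0.2, 0.8]])
items = latent @ loadings.T + 0.3 * rng.normal(size=(n_students, 4))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(items)          # per-student factor scores
```

In the actual study the extracted factors would then feed the CFA/SEM stage rather than be inspected directly.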
Procedia PDF Downloads 1262972 Modeling the Impact of Controls on Information System Risks
Authors: M. Ndaw, G. Mendy, S. Ouya
Abstract:
Information system risk management helps to reduce or eliminate risk by implementing appropriate controls. In this paper, we propose a model that quantifies the impact of controls on information system risks by automating the residual criticality estimation step of FMECA, which is based on inductive reasoning. To this end, we defined three equations based on the type and maturity of controls. For testing, the values obtained with the model were compared with the estimates given by interlocutors during different working sessions, and the results were satisfactory. This model allows an optimal assessment of control maturity and facilitates risk analysis of information systems.Keywords: information system, risk, control, FMECA method
Procedia PDF Downloads 3562971 The Use of the Mediated Learning Experience in Response of Special Needs Education
Authors: Maria Luisa Boninelli
Abstract:
This study explores the effects of a mediated intervention program in a primary school. The participants were 120 students aged 8-9, half of them Italian and half immigrants of first or second generation. The activities consisted of cognitive enhancement of the participants through Feuerstein’s Instrumental Enrichment (IE) and an activity centred on body awareness and mediated learning experience. Given the limited studies on learners in remedial schools, the current study hypothesized that participants exposed to mediation would show a significant improvement in cognitive functioning. Hypothesis One proposed that, following the intervention, improved Q1vata scores would occur in each of the groups. Hypothesis Two postulated that participants in the Mediated Learning Experience group would perform significantly better than those in the control group. For the intervention, 60 participants constituted the mediation sample and were exposed to the Mediated Learning Experience through the Instrumental Enrichment programme; the other 60 formed the control group. Both groups included students with special needs and were exposed to the same learning goals. A pre-experimental research design, in particular a one-group pretest-posttest approach, was adopted. All the participants in this study underwent pretest and posttest phases, whereby they completed measures according to the standard instructions. During the pretest phase, all the participants were simultaneously exposed to the Q1vata test for logical and linguistic evaluation skills. During the mediation intervention, significant improvement was demonstrated in the mediation group. This supports Feuerstein's theory that initial poor performance was a result of a lack of mediated learning experience rather than inherent differences or deficiencies. 
Furthermore, the use of appropriate mediated learning enabled the participants to function adequately.Keywords: cognitive structural modifiability, learning to learn, mediated learning experience, Reuven Feuerstein, special needs
Procedia PDF Downloads 3792970 Modeling and Prediction of Zinc Extraction Efficiency from Concentrate by Operating Condition and Using Artificial Neural Networks
Authors: S. Mousavian, D. Ashouri, F. Mousavian, V. Nikkhah Rashidabad, N. Ghazinia
Abstract:
pH, temperature, and extraction time of each stage, agitation speed, and delay time between stages affect the efficiency of zinc extraction from concentrate. In this research, the efficiency of zinc extraction was predicted as a function of the mentioned variables by artificial neural networks (ANN). ANNs with different layers were employed, and the results show that the network with 8 neurons in the hidden layer has good agreement with the experimental data.Keywords: zinc extraction, efficiency, neural networks, operating condition
Procedia PDF Downloads 5472969 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where a specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. 
For a demonstration, a total of 13,254 metabolic syndrome training observations are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated with, for example, sociodemographic characteristics, habits, and activities. Some are intentionally included to gain predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
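The co-occurrence idea above can be sketched in a few lines: each row (basket) fully connects its items, arc weights count co-occurrences across rows, and heavily weighted arcs surface as candidate rules. The toy transactions below stand in for the 13,254 training records, and the item labels are invented:

```python
from collections import Counter
from itertools import combinations

# Each transaction is one observation's set of present attributes (hypothetical).
transactions = [
    {"smoker", "low_activity", "metabolic_syndrome"},
    {"smoker", "low_activity", "metabolic_syndrome"},
    {"smoker", "vaccinated"},
    {"low_activity", "metabolic_syndrome"},
]

# Fully connect the items of every basket and accumulate arc weights.
arcs = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        arcs[pair] += 1

# The heaviest arcs are the candidate co-occurrence rules.
top = arcs.most_common(2)
```

Because arcs are merged across rows, the graph's size depends on the number of distinct items, not on the number of transactions, which is the scalability property noted above.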
Procedia PDF Downloads 2782968 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data
Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez
Abstract:
Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumors in women). It is the tumor with the highest incidence in the CLM region, accounting for 26.1% of all tumors excluding nonmelanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good information source for estimating cancer incidence; however, the data are usually available with a lag, which makes their use difficult for health managers. By contrast, mortality and survival statistics have less delay. In order to serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: Annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (Mortality Registry CLM); breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model is obtained from EUROCARE data. From the survival model, mortality data, and population data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model is used to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is: I(x,t) = Logit[const + age1*x + age2*x² + coh1*(t - x) + coh2*(t - x)²], where I(x,t) is the incidence at age x in period (year) t, and the parameter estimates are: const (constant term in the model) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61; and coh2 = -0.12. It is estimated that 662 cases of breast cancer were diagnosed in CLM in 1991 (81.51 per 100,000 women). An estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the gross incidence rate (1.9% per year). 
The average annual increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years), and 1.24% (65-74 years). Cancer registries in Spain that send data to the IARC declared an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model obtains an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer in the period 1991-2013 is observed. The increase was seen in all age groups considered, although it seems more pronounced in young women (25-44 years). With this method, a good estimate of the incidence can be obtained.Keywords: breast cancer, incidence, cancer registries, Castilla-La Mancha
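The fitted rate surface reported in the Results can be written out directly. The sketch below assumes an inverse-logit link and leaves the age/cohort scaling unspecified, since the abstract does not state how x and (t - x) are transformed before entering the polynomial; raw outputs therefore will not reproduce the reported per-100,000 rates:

```python
from math import exp

def incidence(x, t, const=-7.03, age1=3.31, age2=-1.10, coh1=0.61, coh2=-0.12):
    """MIAMOD-style incidence I(x, t) using the reported parameter estimates.

    x (age) and the cohort term (t - x) are assumed to be pre-scaled
    covariates; the link is assumed to be the inverse logit.
    """
    eta = const + age1 * x + age2 * x**2 + coh1 * (t - x) + coh2 * (t - x)**2
    return 1.0 / (1.0 + exp(-eta))    # inverse-logit maps eta to a rate in (0, 1)

rate = incidence(1.0, 1.5)            # arbitrary scaled age/period values
```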
Procedia PDF Downloads 3132967 Gaze Behaviour of Individuals with and without Intellectual Disability for Nonaccidental and Metric Shape Properties
Authors: S. Haider, B. Bhushan
Abstract:
The eye gaze behaviour of individuals with and without intellectual disability is investigated in an eye tracking study in terms of sensitivity to nonaccidental (NAP) and metric (MP) shape properties. Total fixation time is used as an indirect measure of attention allocation. Studies have found mean reaction times for nonaccidental properties (NAPs) to be shorter than for metric properties (MPs) when the MP and NAP differences were equalized. METHODS: Twenty-five individuals with intellectual disability (mild and moderate levels of mental retardation) and twenty-seven normal individuals were compared on mean total fixation duration, accuracy level, and mean reaction time for mild NAP, extreme NAP, and metric properties of images. 2D images of cylinders were adapted and made into forced-choice match-to-sample tasks. A Tobii TX300 Eye Tracker was used to record total fixation duration, with data obtained from the Areas of Interest (AOI). Variable trial duration (total reaction time of each participant) and fixed trial duration (data taken at each second from one to fifteen seconds) were used for analyses. The two groups did not differ in fixation times (fixed or variable) across any of the three image manipulations but differed in reaction time and accuracy. Normal individuals had longer reaction times than individuals with intellectual disability across all types of images. The groups differed significantly on the accuracy measure across all image types, with normal individuals performing better across all three types of images. Mild NAP vs. metric differences: There was a significant difference between mild NAP and metric properties of images in terms of reaction times. Mild NAP images had significantly longer reaction times than metric images for normal individuals, but this difference was not found for individuals with intellectual disability. Mild NAP images had significantly better accuracy than metric images for both groups. 
In conclusion, the type of image manipulation did not result in differences in attention allocation for individuals with and without intellectual disability. Mild nonaccidental properties facilitate better accuracy than metric properties in both groups, but this advantage is seen only for the normal group in terms of mean reaction time.Keywords: eye gaze fixations, eye movements, intellectual disability, stimulus properties
Procedia PDF Downloads 5552966 The Effects of Cultural Distance and Institutions on Foreign Direct Investment Choices: Evidence from Turkey and China
Authors: Nihal Kartaltepe Behram, Göksel Ataman, Dila Okçu
Abstract:
With the development of foreign direct investment, the social, cultural, political, and economic interactions between countries and institutions have become visible, and they have become determining factors for strategic structuring and market goals. In this context, the purpose of this study is to investigate the effects of cultural distance and institutions on foreign direct investment choices in terms of location and investment model. For international establishments, the concept of culture, as well as the concept of cultural distance, is taken specifically into consideration, especially in the selection of methods for entering the market. Previous empirical studies have established a direct relationship between cultural distance and foreign direct investment and have examined institutions and relevant variables in defining investment types. When detailed calculation strategies and empirical studies are taken into consideration, the most common modes of direct investment, considering cultural distance, are full-ownership enterprises and joint ventures. Also, when all of the factors affecting investment are taken into consideration, the effect of institutions such as government intervention, intellectual property rights, corruption, and contract enforcement is very important; furthermore, agglomeration is more intense and effective on investment than the other factors. China has been selected as the target country due to its weight in the world economy and its contributions to the developing countries with which it has commercial relationships. Qualitative research methods are used to measure the effects of the determinative variables in the study's hypotheses on direct foreign investors and to evaluate the findings. 
In this study, in-depth interviews are used as the data collection method, and the data analysis is carried out through descriptive analysis. All interviews and analyses identify that foreign direct investments are highly reactive to institutions and cultural distance. On the other hand, agglomeration is the strongest determining factor for foreign direct investors in the Chinese market; the most important finding is that the factors comprising the sectoral aggregate are not as strong as agglomeration. We expect this study to serve as a beneficial guideline for developed and developing countries and for the strategic plans of local and national institutions.Keywords: China, cultural distance, Foreign Direct Investments, institutions
Procedia PDF Downloads 4202965 Efficacy of Agrobacterium Tumefaciens as a Possible Entomopathogenic Agent
Authors: Fouzia Qamar, Shahida Hasnain
Abstract:
The objective of the present study was to evaluate the possible role of Agrobacterium tumefaciens as an insect biocontrol agent. The pests selected for the present challenge were adult males of Periplaneta americana and last-instar larvae of Pieris brassicae and Spodoptera litura. Different ranges of bacterial doses were selected and tested to score insect mortalities after 24 hours for the lethal dose estimation studies. The bacteria were inoculated using the microinjection technique. The evaluation of the possible entomopathogenic attribute carried by the bacterial Ti plasmid led to the conclusion that the loss of the plasmid was associated with the loss of virulence against the target insects.Keywords: agrobacterium tumefaciens, toxicity assessment, biopesticidal attribute, entomopathogenic agent
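For the lethal dose estimation mentioned above, one common approach is a logit (or probit) regression of mortality on log dose; the doses and 24-hour mortality fractions below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical dose-response data: dose in CFU per insect, mortality as a
# fraction of challenged insects dead at 24 h.
dose = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
mortality = np.array([0.1, 0.3, 0.5, 0.7, 0.9])

# Linearize with the logit transform and fit logit(p) = a + b * log10(dose).
logit = np.log(mortality / (1.0 - mortality))
b, a = np.polyfit(np.log10(dose), logit, 1)

# LD50 is the dose at which logit(p) = 0, i.e. p = 0.5.
ld50 = 10 ** (-a / b)
```

A full analysis would also report confidence limits, typically via probit analysis with a maximum-likelihood fit.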
Procedia PDF Downloads 3812964 Bianchi Type- I Viscous Fluid Cosmological Models with Stiff Matter and Time Dependent Λ- Term
Authors: Rajendra Kumar Dubey
Abstract:
Einstein’s field equations with a variable cosmological term Λ are considered in the presence of a viscous fluid for Bianchi type I space-time. Exact solutions of Einstein’s field equations are obtained by assuming the cosmological term Λ proportional to a function of R and m (where R is a scale factor and m is a constant). We observe that the shear viscosity is responsible for faster removal of the initial anisotropy in the universe. The physical significance of the cosmological models has also been discussed.Keywords: bianchi type, I cosmological model, viscous fluid, cosmological constant Λ
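As a hedged sketch of the standard setup (the abstract does not reproduce its equations), the Bianchi type-I line element and the field equations with a time-dependent Λ and viscous stress typically read as follows; the sign convention and the bulk-viscous form of the effective pressure are assumptions, not taken from the paper:

```latex
% Bianchi type-I line element; the average scale factor R follows the abstract's notation.
ds^2 = -dt^2 + A^2(t)\,dx^2 + B^2(t)\,dy^2 + C^2(t)\,dz^2, \qquad R = (ABC)^{1/3}.
% Field equations with time-dependent \Lambda (\mathcal{R}_{\mu\nu}: Ricci tensor,
% \mathcal{R}: Ricci scalar, to avoid clashing with the scale factor R):
\mathcal{R}_{\mu\nu} - \tfrac{1}{2}\,\mathcal{R}\,g_{\mu\nu} + \Lambda(t)\,g_{\mu\nu}
  = -8\pi G\,T_{\mu\nu}.
% Viscous-fluid effective pressure (\xi: bulk viscosity, \theta: expansion scalar),
% with the stiff-matter equation of state:
\bar{p} = p - \xi\,\theta, \qquad p = \rho.
```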
Procedia PDF Downloads 5292963 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification
Authors: Geeta Kumari, Prabu Vairakannu
Abstract:
Underground coal gasification (UCG) is an efficient and economical in-situ clean coal technology that converts unmineable coals into gases of calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ gas can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the liberation of this gas to the atmosphere from thermal power plant industries leads to global warming. Hence, the capture and reutilization of CO₂ gas are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash northeast Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India: low ash coal with 9% ash (LAC-1), low ash coal with 4% ash (LAC-2), and high ash coal (HAC) with 42% ash. TGA studies are carried out to evaluate the activation energy of pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method is used to estimate the activation energy of coal over different temperature regimes, assuming a volumetric model. The inherent properties of coals play a major role in their reactivity. The results show that the activation energy decreases with a decreasing percentage of coal ash, owing to reduced hindrance from the ash layer. A reverse trend was observed with volatile matter: higher volatile matter in coal leads to a lower estimated activation energy. It was observed that the activation energy under the CO₂ atmosphere at 400-600°C is less than under the inert N₂ atmosphere; in this temperature range, a 15-23% reduction in the activation energy is estimated under the CO₂ atmosphere. 
This shows the reactivity of CO₂ gas with the higher hydrocarbons of the coal volatile matter. The reaction of CO₂ with the volatile matter of coal might occur through dry reforming reactions, in which CO₂ reacts with higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200°C and 400-600°C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000°C, higher activation energies are estimated than in the range of 400-600°C. Above 800°C, char gasification through the Boudouard reaction progresses under the CO₂ atmosphere; the activation energy increases by 8-20 kJ/mol during char gasification above 800°C compared with volatile matter pyrolysis between 400-600°C. The overall activation energy of the coals in the temperature range of 30-1000°C is higher in the N₂ atmosphere than in the CO₂ atmosphere. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas using high ash Indian coals.Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification
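The Coats and Redfern estimation used above can be sketched for a first-order (volumetric) model, where ln[-ln(1-α)/T²] is fitted linearly against 1/T and the slope gives -Ea/R. The synthetic conversion data below are generated from an assumed activation energy and prefactor to check the recovery; they are not the coals' TGA curves:

```python
import numpy as np

R_GAS = 8.314  # universal gas constant, J/(mol K)

def coats_redfern_ea(T, alpha):
    """Activation energy (J/mol) from the Coats-Redfern linearization, n = 1:
    ln[-ln(1 - alpha) / T^2] = ln(AR/(beta*Ea)) - Ea/(R*T)."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R_GAS

# Synthetic TGA check: build conversion data from a known Ea and recover it.
ea_true = 80_000.0                               # assumed, J/mol
T = np.linspace(700.0, 900.0, 50)                # temperature, K
rate = 1e-9 * np.exp(-ea_true / (R_GAS * T))     # assumed Arrhenius-like term
alpha = 1.0 - np.exp(-(T**2) * rate)             # conversion consistent with model
ea_est = coats_redfern_ea(T, alpha)
```

On real data, separate fits over the 150-200°C, 400-600°C, and 850-1000°C windows would give the regime-wise Ea values compared in the abstract.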
Procedia PDF Downloads 1732962 Tracking Maximum Power Point Utilizing Artificial Immunity System
Authors: Marwa Ahmed Abd El Hamied
Abstract:
In this paper, a new technique based on the Artificial Immunity System (AIS) has been developed to track the Maximum Power Point (MPP). The AIS system is implemented in a photovoltaic system that is subjected to variable temperature and insolation conditions. The proposed model is simulated using MATLAB. The results of the simulation have been compared to those generated by a perturbation and observation controller. The proposed model shows promising results, as it provides better accuracy compared to the classical model.Keywords: component, artificial immunity technique, solar energy, perturbation and observation, power based methods
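The perturbation and observation baseline referenced above (and in the keywords) can be sketched as follows; the single-peak power-voltage curve is a toy assumption, not the paper's PV model, and the AIS controller itself is not specified in the abstract:

```python
def pv_power(v):
    """Toy P-V curve with its maximum power point at v = 10 (P = 50)."""
    return max(0.0, v * (10.0 - 0.5 * v))

def perturb_and_observe(v0=2.0, step=0.2, iters=200):
    """Classic hill climbing: keep perturbing the operating voltage in the
    same direction while power rises, reverse when it falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        if p_next < p:
            direction = -direction   # power dropped: reverse the perturbation
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

The steady-state oscillation around the peak (here within one step of v = 10) is the accuracy limitation that heuristic trackers such as the AIS aim to improve on.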
Procedia PDF Downloads 428