Search results for: statistical machine learning
1033 Environmental Impacts of Point and Non-Point Source Pollution in Krishnagiri Reservoir: A Case Study in South India
Authors: N. K. Ambujam, V. Sudha
Abstract:
Reservoirs all around the world are being contaminated by point source and Non-Point Source (NPS) pollution. The most common NPS pollutants are sediments and nutrients. Krishnagiri Reservoir (KR), located in the tropical semi-arid climatic zone of Tamil Nadu, South India, has been chosen for the present case study. It is the main source of surface water in Krishnagiri district for meeting freshwater demands. The reservoir has lost about 40% of its water holding capacity due to sedimentation over a period of 50 years. Hence, from the research and management perspective, there is a need for sound knowledge of the spatial and seasonal variations of KR water quality. The present study has two specific objectives: (i) to investigate the longitudinal heterogeneity and seasonal variations of the physicochemical parameters, nutrients and biological characteristics of KR water, and (ii) to examine the extent of degradation of water quality in KR. Fifteen sampling points were identified by a uniform stratified method, and a systematic monthly sampling strategy was selected because of the highly dynamic nature of the reservoir's hydrological characteristics. The physicochemical parameters, major ions, nutrients and Chlorophyll a (Chl a) were analysed. The trophic status of KR was classified using Carlson's Trophic State Index (TSI). All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS), version 16.0. Spatial maps were prepared for Chl a using ArcGIS. Observations in KR indicated that electrical conductivity and major ions are highly variable factors, as the reservoir receives inflow from a catchment with different land-use activities. The major ions in KR exhibited different trends in their values, and it could be concluded that as the monsoon progresses the major ions in the water decrease and the water quality stabilizes. The inflow point of KR showed comparatively higher concentrations of nutrients, including nitrate, soluble reactive phosphorus (SRP), total phosphorus (TP), total suspended phosphorus (TSP) and total dissolved phosphorus (TDP), during monsoon seasons. This clearly showed the input of a significant amount of nutrients from the catchment through agricultural runoff. High concentrations of TDP and TSP in the lacustrine zone of the reservoir during the summer season revealed a significant release of phosphorus from the bottom sediments. Carlson's TSI of KR ranged between 81 and 92 during the northeast monsoon and summer seasons. The high and persistent cyanobacterial bloom in KR could be mainly due to the internal loading of phosphorus from the bottom sediments. According to Carlson's TSI classification, Krishnagiri Reservoir falls in the hyper-eutrophic category. This study provides the necessary basic data on the spatio-temporal variations of water quality in KR and also demonstrates the impact of point and NPS pollution from the catchment area. The high TSI poses a serious threat to recovery from internal P loading and the hyper-eutrophic condition of KR. Several expensive internal measures for reducing the internal loading of P have been introduced by scientists; however, the outcome of the present research suggests an innovative algae-harvesting technique for the removal of sediment nutrients.
Keywords: Hyper-eutrophication, Krishnagiri reservoir, nutrients, NPS pollution.
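The entry above reports Carlson's TSI values of 81-92 but does not reproduce the index formulas. A minimal sketch, assuming the commonly cited Carlson (1977) chlorophyll-a and total-phosphorus forms of the index (the abstract does not state which form was used) and purely illustrative concentrations:

```python
import math

def tsi_chlorophyll(chl_a_ug_per_l: float) -> float:
    """Carlson (1977) TSI from chlorophyll a in micrograms per litre."""
    return 9.81 * math.log(chl_a_ug_per_l) + 30.6

def tsi_total_phosphorus(tp_ug_per_l: float) -> float:
    """Carlson (1977) TSI from total phosphorus in micrograms per litre."""
    return 14.42 * math.log(tp_ug_per_l) + 4.15

# Illustrative values only; not taken from the study.
print(tsi_chlorophyll(250.0))        # ~84.8, in the hyper-eutrophic range (TSI > 70)
print(tsi_total_phosphorus(400.0))   # ~90.5
```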
1032 Comparison between LQR and ANN Active Anti-Roll Control of a Single Unit Heavy Vehicle
Authors: Babesse Saad, Ameddah Djameleddine
Abstract:
In this paper, a learning algorithm using neural networks to improve roll stability and prevent rollover in a single unit heavy vehicle is proposed. First, an LQR controller is designed to keep the normalized rollover indices, balanced between the front and rear axles, below unity; data collected from this controller are then used as the training basis for a neural regulator. The ANN controller is thereafter applied to the nonlinear side force model and gives more satisfactory results than the LQR one.
Keywords: Rollover, single unit heavy vehicle, neural networks, nonlinear side force.
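A minimal sketch of the two-step idea described above: design an LQR controller for a hypothetical linear roll model, collect state-control pairs from it, and train a small neural network regulator on those data. The matrices A, B, Q, R below are placeholders, not the vehicle model used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from sklearn.neural_network import MLPRegressor

# Placeholder 2-state roll model (roll angle, roll rate); not the paper's model.
A = np.array([[0.0, 1.0], [-8.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# LQR gain: K = R^-1 B^T P, with P from the continuous algebraic Riccati equation.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Collect training data: sampled roll states with the LQR control as the target.
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(2000, 2))   # sampled roll states
u = -(X @ K.T).ravel()                       # LQR control law u = -Kx

# Train the neural regulator on the LQR-generated data.
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X, u)

# The trained network can then replace the LQR law on a nonlinear side-force model.
print(ann.predict([[0.1, -0.2]]), -K @ np.array([0.1, -0.2]))
```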
1031 Visual Attention Analysis on Mutated Brand Name using Eye-Tracking: A Case Study
Authors: Anirban Chowdhury, Sougata Karmakar, Swathi Matta Reddy, Sanjog J., Subrata Ghosh, Debkumar Chakrabarti
Abstract:
Brand name plays a vital role in the in-shop buying behavior of consumers, and a mutated brand name may affect the sales of leading branded products. In the Indian market, there are many products with mutated brand names which are either orthographically or phonologically similar. Due to the presence of such products, Indian consumers are often confused when buying regularly used items. The authors of the present paper attempt to demonstrate the relationship between reduced attention and false recognition of mutated brand names during a product selection process. To achieve this goal, a visual attention study was conducted on 15 male college students using an eye tracker with a mutated brand name, and errors in recognition were noted using a questionnaire. Statistical analysis of the acquired data revealed that there was more false recognition of the mutated brand name when less attention was paid during selection of the favorite product. Moreover, the results suggest that eye tracking is an effective tool for analyzing false recognition of brand name mutation.
Keywords: Brand Name Mutation, Consumer Behavior, Visual Attention, Orthography
1030 Error Correction of Radial Displacement in Grinding Machine Tool Spindle by Optimizing Shape and Bearing Tuning
Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto
Abstract:
In this article, the capability of correcting the radial displacement error of a high-precision grinding spindle caused by unbalance force was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearings. The finite element method (FEM) was adopted to obtain the equation of motion of the spindle. Firstly, the natural frequencies, critical frequencies and amplitude of the unbalance response caused by residual unbalance are determined in order to investigate the spindle behaviour. Furthermore, an optimization design algorithm is employed to minimize the radial displacement of the spindle; it considers the dimensions of the spindle shaft, the dynamic characteristics of the bearings, the critical frequencies and the amplitude of the unbalance response, and computes optimum spindle diameters and bearing stiffness and damping. Numerical simulation results show that by optimizing the spindle diameters and the stiffness and damping of the bearings, the radial displacement of the spindle can be reduced. A radial displacement error of about 4 μm can be compensated to within 2 μm accuracy. This can certainly improve the accuracy of the machined product.
Keywords: Error correction, High precision grinding, Optimization, Radial displacement, Spindle.
1029 Bayesian Inference for Phase Unwrapping Using Conjugate Gradient Method in One and Two Dimensions
Authors: Yohei Saika, Hiroki Sakaematsu, Shota Akiyama
Abstract:
We investigated the statistical performance of Bayesian inference using maximum entropy and MAP estimation for several models which approximate wave-fronts in remote sensing using SAR interferometry. Using Monte Carlo simulation for a set of wave-fronts generated by an assumed true prior, we found that the method of maximum entropy realized the optimal performance around the Bayes-optimal conditions when using the model of the true prior and the likelihood representing the optical measurement by the interferometer. We also found that MAP estimation, regarded as a deterministic limit of maximum entropy, almost achieved the same performance as the Bayes-optimal solution for the set of wave-fronts. Then, we clarified that MAP estimation perfectly carried out phase unwrapping without using prior information, and also that MAP estimation realized accurate phase unwrapping using the conjugate gradient (CG) method, provided the model of the true prior was assumed appropriately.
Keywords: Bayesian inference using maximum entropy, MAP estimation using conjugate gradient method, SAR interferometry.
1028 SIPINA Induction Graph Method for Seismic Risk Prediction
Authors: B. Selma
Abstract:
The aim of this study is to test the feasibility of the SIPINA method for predicting the harmfulness parameters controlling the seismic response. The approach developed takes into consideration both the focal depth and the peak ground acceleration. The parameter to be determined is the displacement. The data used for training this method and for the nonlinear seismic analysis are described and applied to a class of damage models for some typical structures of the existing urban infrastructure of Jassy, Romania. The results obtained indicate an influence of the focal depth and the peak ground acceleration on the displacement.
Keywords: SIPINA method, seism, focal depth, peak ground acceleration, displacement.
1027 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.
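A minimal sketch of the raw-score equating adjustments discussed above: linear equating of a new form onto a reference form by matching means and standard deviations, and equipercentile equating via quantile matching. The score arrays are illustrative placeholders, not the study's data.

```python
import numpy as np

def linear_equate(x_scores, y_scores, x_new):
    """Map a new-form score to the reference-form scale by matching mean and SD:
    y = sd_y / sd_x * (x - mean_x) + mean_y."""
    mx, sx = np.mean(x_scores), np.std(x_scores, ddof=1)
    my, sy = np.mean(y_scores), np.std(y_scores, ddof=1)
    return sy / sx * (np.asarray(x_new) - mx) + my

def equipercentile_equate(x_scores, y_scores, x_new):
    """Map a new-form score to the reference-form score with the same percentile rank."""
    x_sorted, y_sorted = np.sort(x_scores), np.sort(y_scores)
    rank = np.searchsorted(x_sorted, x_new, side="right") / len(x_sorted)
    return np.quantile(y_sorted, np.clip(rank, 0.0, 1.0))

# Illustrative scores for Form R (new) and Form Q (reference).
rng = np.random.default_rng(1)
form_r = rng.normal(22, 5, 67).round()
form_q = rng.normal(25, 6, 67).round()
print(linear_equate(form_r, form_q, 24), equipercentile_equate(form_r, form_q, 24))
```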
1026 Trusting Smart Speakers: Analysing the Different Levels of Trust between Technologies
Authors: Alec Wells, Aminu Bello Usman, Justin McKeown
Abstract:
The growing usage of smart speakers raises many privacy and trust concerns compared to other technologies such as smart phones and computers. In this study, a proxy measure of trust is used to gauge users’ opinions on three different technologies based on an empirical study, and to understand which technology people are most likely to trust. The collected data were analysed using the Kruskal-Wallis H test to determine the statistical differences between the users’ trust levels of the three technologies: smart speaker, computer and smart phone. The findings of the study revealed that despite the wide acceptance, ease of use and reputation of smart speakers, people find it difficult to trust smart speakers with their sensitive information via Direct Voice Input (DVI) and would prefer to use the keyboard or touchscreen offered by computers and smart phones. Findings from this study can inform future work on users’ trust in technology based on perceived ease of use, reputation, perceived credibility and the risk of using technologies via DVI.
Keywords: Direct voice input, risk, security, technology and trust.
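A minimal sketch of the Kruskal-Wallis H test used above to compare trust ratings across the three technologies; the rating values are made up for illustration.

```python
from scipy.stats import kruskal

# Hypothetical ordinal trust ratings (e.g., 1-5 Likert) for each technology.
smart_speaker = [2, 3, 2, 1, 3, 2, 4, 2, 3, 2]
computer      = [4, 5, 4, 3, 4, 5, 4, 3, 4, 4]
smart_phone   = [3, 4, 4, 3, 5, 4, 3, 4, 3, 4]

h_stat, p_value = kruskal(smart_speaker, computer, smart_phone)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates at least one technology differs in its trust-level distribution.
```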
1025 Face Authentication for Access Control based on SVM using Class Characteristics
Authors: SeHun Lim, Sanghoon Kim, Sun-Tae Chung, Seongwon Cho
Abstract:
Face authentication for access control is a face membership authentication which, based on face recognition, passes the person presenting the incoming face if he or she turns out to be an enrolled person and rejects otherwise. Face membership authentication belongs to the two-class classification problem, where the SVM (Support Vector Machine) has been successfully applied and shows better performance compared to conventional threshold-based classification. However, most previous SVMs have been trained using image feature vectors extracted from face images of each class member (enrolled class/unenrolled class), so that they are not robust to variations in illumination, pose and facial expression and are much affected by changes in the member configuration of the enrolled class. In this paper, we propose an effective face membership authentication method based on SVM using class discriminating features, which represent an incoming face image's associability with each class distinctively. These class discriminating features are only weakly related to image features, so that they are less affected by variations in illumination, pose and facial expression. Through experiments, it is shown that the proposed face membership authentication method performs better than the threshold rule-based or the conventional SVM-based authentication methods and is relatively less affected by changes in member size and membership.
Keywords: Face Authentication, Access control, membership authentication, SVM.
1024 Reliability Indices Evaluation of SEIG Rotor Core Magnetization with Minimum Capacitive Excitation for WECs
Authors: Lokesh Varshney, R. K. Saket
Abstract:
This paper presents a reliability indices evaluation of the rotor core magnetization of an induction motor operated as a self-excited induction generator (SEIG), using a probability distribution approach and Monte Carlo simulation. Parallel capacitors with a calculated minimum capacitive value were connected across the terminals of the induction motor operated as a SEIG with unregulated shaft speed during the experimental study. A three-phase, 4-pole, 50 Hz, 5.5 hp, 12.3 A, 230 V induction motor coupled with a DC shunt motor was tested in the electrical machines laboratory with variable reactive loads. Based on this experimental study, it is possible to choose a reliable induction machine operated as a SEIG for unregulated renewable energy applications in remote areas or where the grid is not available. The failure density function, cumulative failure distribution function, survivor function, hazard model, probability of success and probability of failure for the reliability evaluation of the three-phase induction motor operating as a SEIG are presented graphically in this paper.
Keywords: Residual magnetism, magnetization curve, induction motor, self excited induction generator, probability distribution, Monte Carlo simulation.
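The reliability quantities listed above (failure density, cumulative failure distribution, survivor function and hazard rate) are linked by standard definitions. A minimal sketch, assuming a Weibull failure model purely for illustration (the abstract does not state which distribution fits its data):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical Weibull time-to-failure model; parameters are illustrative, not from the study.
shape, scale = 1.8, 1000.0
t = np.array([100.0, 500.0, 1000.0])

f = weibull_min.pdf(t, shape, scale=scale)   # failure density function f(t)
F = weibull_min.cdf(t, shape, scale=scale)   # cumulative failure distribution F(t)
R = 1.0 - F                                  # survivor (reliability) function R(t)
h = f / R                                    # hazard rate h(t) = f(t) / R(t)
print(np.column_stack([t, f, F, R, h]))

# Monte Carlo check of the probability of failure before t = 500 hours.
samples = weibull_min.rvs(shape, scale=scale, size=100_000, random_state=0)
print((samples < 500.0).mean(), weibull_min.cdf(500.0, shape, scale=scale))
```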
1023 Power System with PSS and FACTS Controller: Modelling, Simulation and Simultaneous Tuning Employing Genetic Algorithm
Authors: Sidhartha Panda, Narayana Prasad Padhy
Abstract:
This paper presents a systematic procedure for the modelling and simulation of a power system installed with a power system stabilizer (PSS) and a flexible AC transmission system (FACTS)-based controller. For the design purpose, the model of an example power system, which is a single-machine infinite-bus power system installed with the proposed controllers, is developed in MATLAB/SIMULINK. In the developed model, the synchronous generator is represented by Model 1.1, which includes both the generator main field winding and the damper winding in the q-axis, so as to evaluate the impact of the PSS and the FACTS-based controller on power system stability. The model can be used for teaching power system stability phenomena, and also for research work, especially for developing generator controllers using advanced technologies. Further, to avoid adverse interactions, the PSS and the FACTS-based controller are simultaneously designed employing a genetic algorithm (GA). Non-linear simulation results are presented for the example power system under various disturbance conditions to validate the effectiveness of the proposed modelling and simultaneous design approach.
Keywords: Genetic algorithm, modelling and simulation, MATLAB/SIMULINK, power system stabilizer, thyristor controlled series compensator, simultaneous design, power system stability.
1022 Case Studies in Three Domains of Learning: Cognitive, Affective, Psychomotor
Authors: Zeinabsadat Haghshenas
Abstract:
Bloom’s Taxonomy has changed over the years. This paper is concerned with the revisions that have occurred in both facts and terms. It also contains case studies of using the cognitive domain of Bloom’s taxonomy in teaching geometric solids to secondary school students, affective objectives in a creative workshop for adults, and psychomotor objectives in fixing a malfunctioning refrigerator lamp. The important role of classifying objectives in adult education as a way to prevent memory loss is also pointed out.
Keywords: Adult education, affective domain, cognitive domain, memory loss, psychomotor domain.
1021 Biometric Authentication Using Fast Correlation of Near Infrared Hand Vein Patterns
Authors: Mohamed Shahin, Ahmed Badawi, Mohamed Kamel
Abstract:
This paper presents a hand vein authentication system using fast spatial correlation of hand vein patterns. In order to evaluate the system performance, a prototype was designed and a dataset of 50 persons of different ages (above 16) and genders was acquired, with 10 images per person taken at different intervals: 5 images of the left hand and 5 images of the right hand. In the verification testing analysis, we used 3 images to represent the templates and 2 images for testing. Each of the 2 images is matched with the existing 3 templates. An FAR of 0.02% and an FRR of 3.00% were reported at threshold 80. The system efficiency at this threshold was found to be 99.95%. The system can operate at a 97% genuine acceptance rate and a 99.98% genuine reject rate at the corresponding threshold of 80. The EER was reported as 0.25% at threshold 77. We verified that no similarity exists between the right and left hand vein patterns of the same person over the acquired dataset sample. Finally, this dataset sample of 100 distinct hand vein patterns can be accessed by researchers and students upon request for testing other hand vein matching methods.
Keywords: Biometrics, Verification, Hand Veins, Patterns Similarity, Statistical Performance.
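A minimal sketch of how the error rates quoted above (FAR, FRR and the EER threshold) can be computed from genuine and impostor correlation scores; the score arrays are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostors accepted; FRR: fraction of genuine users rejected."""
    far = np.mean(np.asarray(impostor) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def equal_error_rate(genuine, impostor, thresholds):
    """Scan thresholds and return the one where FAR and FRR are closest (the EER point)."""
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    gaps = [abs(far - frr) for far, frr in rates]
    best = int(np.argmin(gaps))
    return thresholds[best], rates[best]

rng = np.random.default_rng(0)
genuine = rng.normal(88, 5, 500)     # synthetic matching scores for enrolled hands
impostor = rng.normal(70, 6, 500)    # synthetic scores for non-matching hands
print(far_frr(genuine, impostor, 80))
print(equal_error_rate(genuine, impostor, np.arange(60, 100)))
```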
1020 Overall Stability of Welded Q460GJ Steel Box Columns: Experimental Study and Numerical Simulations
Authors: Zhou Xiong, Kang Shao Bo, Yang Bo
Abstract:
To date, high-performance structural steel has been widely used for columns in construction practice due to its significant advantages over conventional steel. However, the same design approach as for conventional steel columns is still adopted in the design of high-performance steel columns. As a result, its superior properties cannot be fully considered in design. This paper presents tests and a finite element analysis of the overall stability behaviour of welded Q460GJ steel box columns. In the tests, four steel columns with different slenderness and width-to-thickness ratios were compressed in an axial compression testing machine. Finite element models were established in which material nonlinearity and the residual stress distributions of the test columns were included. Comparisons between the test results and the finite element results showed good agreement, indicating that the tests and the finite element models are reliable. The test results were then compared with the design values calculated by the current code; the comparison showed that Q460GJ steel box columns have a higher overall buckling capacity than the design value. It is necessary to update the design curves for Q460GJ steel columns so that the overall stability capacity of Q460GJ box columns can be designed appropriately.
Keywords: Axial compression, Finite element analysis, Overall stability, Q460GJ steel, Welded box columns.
1019 Optimization of Enzymatic Hydrolysis of Manihot Esculenta Root Starch by Immobilized α-Amylase Using Response Surface Methodology
Authors: G. Baskar, C. Muthukumaran, S. Renganathan
Abstract:
Enzymatic hydrolysis of starch from natural sources finds potential application in the commercial production of alcoholic beverages and bioethanol. In this study, the effects of starch concentration, temperature, time and enzyme concentration on the hydrolysis of cassava (Manihot esculenta) starch powder (mesh 80/120) into glucose syrup by α-amylase immobilized in polyacrylamide gel were studied and optimized using a central composite design. The experimental results on enzymatic hydrolysis of cassava starch were subjected to multiple linear regression analysis using MINITAB 14 software. A positive linear effect of starch concentration, enzyme concentration and time on the hydrolysis of cassava starch by α-amylase was observed. The statistical significance of the model was validated by the F-test in the analysis of variance (p < 0.01). The optimum values of starch concentration, temperature, time and enzyme concentration were found to be 4.5% (w/v), 45 °C, 150 min, and 1% (w/v) enzyme, respectively. The maximum glucose yield under the optimum conditions was 5.17 mg/mL.
Keywords: Enzymatic hydrolysis, Alcoholic beverage, Central composite design, Polynomial model, Glucose yield.
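A minimal sketch of fitting the kind of second-order response-surface model implied by the central composite design above, using multiple linear regression on polynomial terms; the data are synthetic stand-ins generated over the reported factor ranges, not the experimental results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in data over the reported ranges (not the experimental results).
rng = np.random.default_rng(0)
n = 30
starch = rng.uniform(1.5, 7.5, n)       # % (w/v)
temp = rng.uniform(35, 55, n)           # deg C
time = rng.uniform(60, 240, n)          # min
enzyme = rng.uniform(0.25, 1.75, n)     # % (w/v)
X = np.column_stack([starch, temp, time, enzyme])

# Fake response with a maximum near the reported optimum, plus noise.
y = (5.2 - 0.3 * (starch - 4.5) ** 2 - 0.01 * (temp - 45) ** 2
     - 1e-4 * (time - 150) ** 2 - 2.0 * (enzyme - 1.0) ** 2
     + rng.normal(0, 0.1, n))

# Second-order (quadratic + interaction) response surface fitted by multiple linear regression.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print(model.predict(poly.transform([[4.5, 45.0, 150.0, 1.0]])))  # predicted yield at the optimum
```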
1018 Mapping Crime against Women in India: Spatio-Temporal Analysis, 2001-2012
Authors: Ritvik Chauhan, Vijay Kumar Baraik
Abstract:
Women are highly vulnerable to crime despite occupying a central position in shaping society as the first teachers of children. In India too, despite equal rights and constitutional safeguards, the incidence of crime against them is large and grave. In this context, crime against women, especially rape, has been increasing over time. This paper explores the spatial and temporal aspects of crime against women in India with special reference to rape. It also examines crime against women together with its spatial, socio-economic and demographic correlates, using related data obtained from the National Crime Records Bureau of India, the Indian Census and other sources of the Government of India. Simple statistical methods, choropleth mapping and other cartographic representations have been used to examine crime rates, spatio-temporal patterns of crime, and the association of crime with its correlates. The major findings are visible spatial variations across the country, as well as rising trends in incidence and rates over the reference period. The study also indicates that geographical associations are observed to some extent. However, the selected indicators of socio-economic factors seem to have no significant bearing on crime against women at this level.
Keywords: Crime against women, crime mapping, trend analysis.
1017 Operational Risk – Scenario Analysis
Authors: Milan Rippel, Petr Teply
Abstract:
This paper focuses on operational risk measurement techniques and on economic capital estimation methods. A data sample of operational losses provided by an anonymous Central European bank is analyzed using several approaches. The Loss Distribution Approach and the scenario analysis method are considered. Custom plausible loss events defined in a particular scenario are merged with the original data sample, and their impact on capital estimates and on the financial institution is evaluated. Two main questions are assessed: What is the most appropriate statistical method to measure and model the operational loss data distribution? And what is the impact of hypothetical plausible events on the financial institution? The g&h distribution was found to be the most suitable one for operational risk modeling. The method based on the combination of historical loss event modeling and scenario analysis provides reasonable capital estimates and allows for measuring the impact of extreme events on banking operations.
Keywords: operational risk, scenario analysis, economic capital, loss distribution approach, extreme value theory, stress testing
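A minimal sketch of sampling from the Tukey g-and-h distribution mentioned above, which transforms a standard normal variate; the location, scale, g and h values are illustrative placeholders, not the parameters estimated for the bank's loss data.

```python
import numpy as np

def g_and_h_sample(n, a=0.0, b=1.0, g=0.8, h=0.3, rng=None):
    """Sample the Tukey g-and-h distribution: a location/scale transform of a standard
    normal Z, where g controls skewness and h controls tail heaviness."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)
    core = (np.exp(g * z) - 1.0) / g if g != 0 else z
    return a + b * core * np.exp(h * z * z / 2.0)

# Illustrative heavy-tailed, right-skewed loss severities (placeholder parameters).
losses = g_and_h_sample(100_000, a=10_000, b=5_000, g=1.0, h=0.25,
                        rng=np.random.default_rng(0))
# A high severity quantile of the simulated losses, of the kind used in capital estimation.
print(np.quantile(losses, 0.999))
```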
1016 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
Keywords: Video summarization, key frame extraction, dependency measure, quadratic mutual information, optimization.
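One common way to write the quadratic mutual information used above as a dependency objective is the Euclidean-distance form below; the paper does not give its exact estimator, so this is only the standard textbook definition.

```latex
% Quadratic mutual information between frame features X and Y
% (Euclidean-distance form; one of several common definitions):
I_{Q}(X;Y) \;=\; \iint \bigl( p_{XY}(x,y) - p_{X}(x)\,p_{Y}(y) \bigr)^{2} \, dx \, dy
```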
1015 Evolutionary Computing Approach for the Solution of Initial Value Problems in Ordinary Differential Equations
Authors: A. Junaid, M. A. Z. Raja, I. M. Qureshi
Abstract:
An evolutionary computing technique for solving initial value problems in ordinary differential equations is proposed in this paper. A neural network is used as a universal approximator, while the adaptive parameters of the neural network are optimized by a genetic algorithm. The solution is achieved on a continuous grid of time instead of a discrete one, as in other numerical techniques. A comparison is carried out with classical numerical techniques, and the solution is found with a uniform accuracy of MSE ≈ 10^-9.
Keywords: Neural networks, Unsupervised learning, Evolutionary computing, Numerical methods, Fitness evaluation function.
1014 Efficacy of Selected Mobility Exercises and Participation in Special Games on Psychomotor Abilities, Functional Abilities and Game Performance among Intellectually Disabled Children of Under 14 Age
Authors: J. Samuel Jesudoss
Abstract:
The purpose of the study was to find out the efficacy of selected mobility exercises and participation in special games on psychomotor abilities, functional abilities and skill performance among intellectually disabled children under 14 years of age. Thirty male students who were studying in Balar Kalvi Nilayam and YMCA College Special School, Chennai, acted as subjects for the study. They had only mild or moderate intellectual disability. These students did not undergo any special training or coaching programme apart from their regular routine physical activity classes, which are part of the school curriculum. All 30 subjects belonged to the under-14 age group and were divided at random into three equal groups of ten, one for each experimental treatment. Ten students (Treatment Group I) underwent calisthenics and special games participation, ten students (Treatment Group II) underwent aquatics and special games participation, and ten students (Treatment Group III) underwent yoga and special games participation. The subjects were tested on the selected criterion variables prior to (pre-test) and after twelve weeks of training (post-test). The pre- and post-test data collected from the three groups on functional abilities (self-care, learning, capacity for independent living), psychomotor variables (static balance, eye-hand coordination, simple reaction time test) and skill performance (bocce skill, badminton skill, table tennis skill) were statistically examined for significant differences by applying analysis of covariance (ANCOVA). Whenever the F ratio for the adjusted post-test means was found to be significant, Scheffé's test was applied as a post-hoc test to determine which of the paired mean differences were significant. The results of the study showed that, in the under-14 age group, there was a significant improvement in the selected criterion variables such as balance, coordination, self-care and learning, and also in bocce, badminton and table tennis skill performance, due to the mobility exercises and participation in special games. However, there were no significant differences among the groups.
Keywords: Functional ability, intellectually disabled, Mobility exercises, Psychomotor ability.
1013 Establishing a Probabilistic Model of Extrapolated Wind Speed Data for Wind Energy Prediction
Authors: Mussa I. Mgwatu, Reuben R. M. Kainkwa
Abstract:
Wind is among the potential energy resources which can be harnessed to generate wind energy for conversion into electrical power. Due to the variability of wind speed with time and height, it is difficult to predict the generated wind energy optimally. In this paper, an attempt is made to establish a probabilistic model fitting the wind speed data recorded at the Makambako site in Tanzania. Wind speeds and direction were measured using an anemometer (type AN1) and a wind vane (type WD1), respectively, both supplied by Delta-T Devices, at a measurement height of 2 m. Wind speeds were then extrapolated to a height of 10 m using the power law equation with an exponent of 0.47. The data were analysed using MINITAB statistical software to show the variability of wind speeds with time and height, and to determine the underlying probability model of the extrapolated wind speed data. The results show that wind speeds at the Makambako site vary cyclically over time and conform to the Weibull probability distribution. From these results, the Weibull probability density function can be used to predict the wind energy.
Keywords: Probabilistic models, wind speed, wind energy
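A minimal sketch of the two steps described above: extrapolating 2 m wind speeds to 10 m with a power-law exponent of 0.47, and fitting a Weibull distribution to the extrapolated speeds. The wind-speed samples are synthetic placeholders, not the Makambako measurements.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic 2 m wind speeds (m/s); placeholders, not the Makambako record.
v2m = weibull_min.rvs(2.0, scale=3.5, size=1000, random_state=0)

# Power-law extrapolation from 2 m to 10 m with the exponent used in the paper.
alpha = 0.47
v10m = v2m * (10.0 / 2.0) ** alpha

# Fit a two-parameter Weibull distribution (location fixed at zero) to the 10 m speeds.
shape_k, loc, scale_c = weibull_min.fit(v10m, floc=0)
print(f"Weibull shape k = {shape_k:.2f}, scale c = {scale_c:.2f} m/s")

# Mean wind power density (W/m^2) from the fitted distribution, with rho = 1.225 kg/m^3.
rho = 1.225
mean_v3 = weibull_min.moment(3, shape_k, loc=0, scale=scale_c)
print(0.5 * rho * mean_v3)
```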
1012 Empirical Modeling of Air Dried Rubberwood Drying System
Authors: S. Khamtree, T. Ratanawilai, C. Nuntadusit
Abstract:
Rubberwood is a crucial commercial timber in Southern Thailand. All processes in rubberwood production depend on the knowledge and expertise of the technicians, especially the drying process. This research aims to develop an empirical model for the drying kinetics of rubberwood. During the experiment, the temperature of the hot air and the average air flow velocity were kept at 80-100 °C and 1.75 m/s, respectively. Drying was continued until the moisture content in the samples was less than 12% on a dry basis. The drying kinetics were simulated using an empirical solver. The experimental results illustrated that the moisture content decreased as the drying temperature and time increased. The moisture ratios from the empirical models and the experimental data were compared using three statistical parameters, R-square (R²), Root Mean Square Error (RMSE) and Chi-square (χ²), to assess the accuracy of the models. The experimental moisture ratio had a good fit with the empirical model. Additionally, the results indicated that the drying of rubberwood described by the Henderson and Pabis model showed a suitable level of agreement. This model presented an excellent estimation (R² = 0.9963) of the moisture movement compared to the other models. Therefore, the empirical results were valid and can be used in future experiments.
Keywords: Empirical models, hot air, moisture ratio, rubberwood.
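A minimal sketch of fitting the Henderson and Pabis thin-layer model, MR = a·exp(-k·t), and computing the goodness-of-fit statistics named above (R², RMSE, χ²); the drying-curve data are invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def henderson_pabis(t, a, k):
    """Henderson and Pabis thin-layer drying model: MR = a * exp(-k * t)."""
    return a * np.exp(-k * t)

# Illustrative drying data: time (min) and experimental moisture ratio.
t = np.array([0, 30, 60, 90, 120, 150, 180, 240], dtype=float)
mr_exp = np.array([1.00, 0.78, 0.60, 0.47, 0.36, 0.28, 0.22, 0.13])

(a, k), _ = curve_fit(henderson_pabis, t, mr_exp, p0=(1.0, 0.01))
mr_pred = henderson_pabis(t, a, k)

n, n_params = len(t), 2
ss_res = np.sum((mr_exp - mr_pred) ** 2)
ss_tot = np.sum((mr_exp - mr_exp.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
rmse = np.sqrt(ss_res / n)                 # root mean square error
chi2 = ss_res / (n - n_params)             # reduced chi-square
print(f"a = {a:.3f}, k = {k:.4f} 1/min, R2 = {r2:.4f}, RMSE = {rmse:.4f}, chi2 = {chi2:.5f}")
```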
1011 Mathematical Modeling to Predict Surface Roughness in CNC Milling
Authors: Ab. Rashid M.F.F., Gan S.Y., Muhammad N.Y.
Abstract:
Surface roughness (Ra) is one of the most important requirements in the machining process. In order to obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. This research presents the development of a mathematical model for surface roughness prediction before the milling process, in order to evaluate the fitness of the machining parameters: spindle speed, feed rate and depth of cut. 84 samples were run in this study using a FANUC CNC milling machine (α-T14iE). These samples were randomly divided into two data sets: the training set (m=60) and the testing set (m=24). ANOVA showed that at least one of the population regression coefficients was not zero. The multiple regression method was used to determine the correlation between a criterion variable and a combination of predictor variables. It was established that the surface roughness is most influenced by the feed rate. Using the multiple regression equation, the average percentage deviation was 9.8% for the testing set and 9.7% for the training data set. This showed that the statistical model could predict the surface roughness with about 90.2% accuracy for the testing data set and 90.3% accuracy for the training data set.
Keywords: Surface roughness, regression analysis.
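A minimal sketch of the multiple-regression setup described above, predicting Ra from spindle speed, feed rate and depth of cut and reporting the average percentage deviation on training and testing splits; the machining data are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic milling data: spindle speed (rpm), feed rate (mm/min), depth of cut (mm).
rng = np.random.default_rng(0)
n = 84
X = np.column_stack([
    rng.uniform(1000, 5000, n),   # spindle speed
    rng.uniform(100, 500, n),     # feed rate
    rng.uniform(0.2, 2.0, n),     # depth of cut
])
# Fake roughness dominated by feed rate (as the study found), plus noise.
ra = 0.2 + 0.004 * X[:, 1] - 0.0001 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, ra, train_size=60, random_state=0)
model = LinearRegression().fit(X_train, y_train)

def avg_pct_deviation(y_true, y_pred):
    """Average percentage deviation between measured and predicted Ra."""
    return 100.0 * np.mean(np.abs(y_true - y_pred) / y_true)

print(avg_pct_deviation(y_test, model.predict(X_test)))
print(avg_pct_deviation(y_train, model.predict(X_train)))
```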
1010 Architectural Building Safety and Health Performance Model for Stratified Low-Cost Housing: Education and Management Tool for Building Managers
Authors: Zainal Abidin Akasah, Maizam Alias, Azuin Ramli
Abstract:
The safety and health performance aspects of a building are among the most challenging aspects of facility management. They require a deep understanding by building managers of the factors that contribute to health and safety performance. This study attempted to develop an explanatory architectural safety performance model for stratified low-cost housing in Malaysia. The proposed Building Safety and Health Performance (BSHP) model was tested empirically through a survey of 308 construction practitioners using the partial least squares (PLS) structural equation modelling (SEM) tool. The statistical analysis results support the conclusion that architecture, building services, the external environment, management approaches and maintenance management have a positive influence on the safety and health performance of stratified low-cost housing in Malaysia. The findings provide valuable insights for the construction industry to introduce the BSHP model in the future, where the model could be used as a guideline for training managers and for better planning and implementation of building management.
Keywords: Building management, stratified low-cost housing, Safety and health model
1009 Web-Based Tools to Increase Public Understanding of Nuclear Technology and Food Irradiation
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
Food irradiation is a processing and preservation technique used to eliminate insects and parasites and reduce disease-causing microorganisms. Moreover, the process helps to inhibit sprouting and delay ripening, extending the shelf-life of fresh fruits and vegetables. Nevertheless, most Brazilian consumers seem to misunderstand the difference between irradiated food and radioactive food, and the general public has major concerns about negative health effects and environmental contamination. Society's judgment and decision making are directly linked to perceived benefits and risks. The web-based project entitled ‘Scientific information about food irradiation: Internet as a tool to approach science and society’ was created by the Nuclear and Energetic Research Institute (IPEN) in order to offer an interdisciplinary approach to science education, integrating economic, ethical, social and political aspects of food irradiation. This project takes into account that misinformation and unfounded preconceived ideas impact heavily on the acceptance of irradiated food and the purchase intention of the Brazilian consumer. Taking advantage of the potential value of the Internet to enhance communication and education among the general public, a research study was carried out regarding the possibilities and trends of information and communication technologies among the Brazilian population. The content includes concepts, definitions and Frequently Asked Questions (FAQ) about the processes, safety, advantages, limitations and possibilities of food irradiation, including health issues, as well as its impacts on the environment. The project comprises eight self-instructional interactive web courses, situating scientific content in relevant social contexts in order to encourage self-learning and further reflection. Communication is essential to improve public understanding of science. The use of information technology for quality scientific dissemination shall contribute greatly to providing information throughout the country, spreading information to as many people as possible, minimizing geographic distances and stimulating communication and development.
Keywords: Food irradiation, multimedia learning tools, nuclear science, society and education.
1008 Multi-Objective Planning and Operation of Water Supply Systems Subject to Climate Change
Authors: B. J. C. Perera, D. A. Sachindra, W. Godoy, A. F. Barton, F. Huang
Abstract:
Many water supply systems in Australia are currently undergoing significant reconfiguration due to reductions in long-term average rainfall and the resulting low inflows to water supply reservoirs since the second half of the 20th century. When water supply systems undergo change, it is necessary to develop new operating rules, which should consider climate, because climate change is likely to further reduce inflows. In addition, water resource systems are increasingly intended to be operated to meet complex and multiple objectives representing social, economic, environmental and sustainability criteria. This is further complicated by conflicting preferences on these objectives from diverse stakeholders. This paper describes a methodology to develop optimum operating rules for complex multi-reservoir systems undergoing significant change, considering all of the above issues. The methodology is demonstrated using the Grampians water supply system in northwest Victoria, Australia. Initial work conducted on the project is also presented in this paper.
Keywords: Climate change, Multi-objective planning, Pareto optimal, Stakeholder preference, Statistical downscaling, Water supply systems.
1007 Prediction of the Solubility of Benzoic Acid in Supercritical CO2 Using the PC-SAFT EoS
Authors: Hamidreza Bagheri, Alireza Shariati
Abstract:
There are many difficulties in the purification of raw components and products, and researchers are seeking better purification methods. One of the more recent methods is extraction using supercritical fluids. In this study, the phase equilibria of the benzoic acid–supercritical carbon dioxide system were investigated. The modeling of the solid–supercritical fluid behavior of this system was performed using the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) and Peng-Robinson (PR) equations of state. For this purpose, five PC-SAFT EoS parameters for pure benzoic acid were obtained using its experimental vapor pressure. Benzoic acid has association sites, and the behavior of the benzoic acid–supercritical fluid system was well predicted using both equations of state, while the binary interaction parameter values for the PR EoS were negative. A genetic algorithm, which is one of the most accurate global optimization algorithms, was also used to optimize the pure benzoic acid parameters and the binary interaction parameters. The AAD% value for the PC-SAFT EoS was 0.22 for the carbon dioxide–benzoic acid system.
Keywords: Supercritical fluids, Solubility, Solid, PC-SAFT EoS, Genetic algorithm.
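The AAD% figure quoted above is usually computed as the average absolute relative deviation between calculated and experimental values; the expression below is a common definition, since the abstract does not spell out its exact form.

```latex
\mathrm{AAD\%} \;=\; \frac{100}{N_p} \sum_{i=1}^{N_p}
\left| \frac{y_i^{\mathrm{calc}} - y_i^{\mathrm{exp}}}{y_i^{\mathrm{exp}}} \right|
```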
1006 Modeling of Crude Oil Blending via Discrete-Time Neural Networks
Abstract:
Crude oil blending is an important unit operation in the petroleum refining industry. A good model for the blending system is beneficial for supervisory operation, prediction of the exported petroleum quality, and realizing model-based optimal control. Since blending cannot follow the ideal mixing rule in practice, we propose a static neural network to approximate the blending properties. Using the dead-zone approach, we propose a new robust learning algorithm and give a theoretical analysis. Real crude oil blending data are applied to illustrate the neuro-modeling approach.
Keywords: Neural networks, modeling, stability, crude oil.
1005 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft
Authors: A. Maddi, A. Guessoum, D. Berkani
Abstract:
The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. The method includes a description and some discussion of the discrete Kalman state estimator. One aspect of this optimality is that the estimator incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest, using knowledge of the system and measurement device dynamics, the statistical description of the system noises and measurement errors, and the uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity or sideslip angle of an aircraft, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter could be built to combine all of this data and knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
Keywords: Aircraft motion, Kalman filter, LQG control, Lateral stability, State estimator.
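A minimal sketch of the discrete Kalman state estimator described above, fusing two noisy measurements of the same quantity (e.g., a velocity-like state) into one best estimate; the model matrices and noise levels are illustrative, not an aircraft lateral model.

```python
import numpy as np

# Simple discrete Kalman filter for a scalar random-walk state observed by two sensors.
F = np.array([[1.0]])                 # state transition
H = np.array([[1.0], [1.0]])          # both sensors measure the same state
Q = np.array([[0.01]])                # process noise covariance
R = np.diag([0.5, 2.0])               # sensor noise covariances (sensor 1 more precise)

x = np.array([[0.0]])                 # state estimate
P = np.array([[1.0]])                 # estimate covariance

rng = np.random.default_rng(0)
truth = 0.0
for _ in range(50):
    truth += rng.normal(0, 0.1)                       # true state evolves
    z = H @ np.array([[truth]]) + rng.multivariate_normal([0, 0], R).reshape(2, 1)

    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: the Kalman gain weights each sensor by its precision.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(1) - K @ H) @ P

print(float(x[0, 0]), truth)
```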
1004 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular, we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate, defined as the ratio between the size k of the reduced space and the original dimension d, which guarantees a narrow confidence interval for this estimator with a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori on the basis of the observations only. Extensive simulations have been performed on different sets of real-world signals. They show that the dimensionality reduction is actually very high, preserves the quality of the decomposition, and impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results if reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach for real-time applications.
Keywords: Independent Component Analysis, FastICA algorithm, Higher-order statistics, Johnson-Lindenstrauss lemma.
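For reference, the kurtosis-based FastICA fixed-point update that the entry above seeks to accelerate can be written as below (standard form for centred, whitened data x and one unmixing vector w); the paper's particular Johnson-Lindenstrauss-like projection and the parameter β are not reproduced here.

```latex
% Kurtosis-based FastICA fixed-point iteration for one unmixing vector w:
w^{+} \;=\; \mathbb{E}\{\, x \,(w^{\top}x)^{3} \,\} \;-\; 3\,w,
\qquad
w \;\leftarrow\; \frac{w^{+}}{\lVert w^{+} \rVert}
```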