Search results for: maximal data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24909

24729 Ranking All of the Efficient DMUs in DEA

Authors: Elahe Sarfi, Esmat Noroozi, Farhad Hosseinzadeh Lotfi

Abstract:

One of the important issues in Data Envelopment Analysis (DEA) is the ranking of Decision Making Units (DMUs). In this paper, a method for ranking DMUs is presented in which the weights assigned to efficient units are chosen so that the other units preserve a specified percentage of their efficiency under those weights. To this end, a model is presented for ranking DMUs on the basis of their super-efficiency while respecting these weight restrictions. The percentage can be set by the decision maker; if the chosen percentage proves unsuitable, a suitable and feasible one can be found and the DMUs ranked accordingly. Furthermore, the presented model is capable of ranking all of the efficient units, including non-extreme efficient ones. Finally, the presented models are applied to two data sets and the related results are reported.
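
The ranking builds on super-efficiency scores, obtained by excluding the unit under evaluation from the reference set. As a rough illustration of that building block only (not the authors' weight-restricted model), the Python sketch below computes Andersen-Petersen input-oriented super-efficiency scores with scipy; the input and output arrays are hypothetical.

```python
# Minimal sketch of Andersen-Petersen super-efficiency (a standard DEA building block,
# not the paper's weight-restricted ranking model). X holds inputs, Y outputs, one row per DMU.
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented super-efficiency of DMU k (DMU k excluded from the frontier)."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    others = [j for j in range(n) if j != k]
    c = np.zeros(1 + len(others))
    c[0] = 1.0              # decision variables: [theta, lambda_j for j != k]; minimise theta
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ij <= theta * x_ik
        A_ub.append([-X[k, i]] + [X[j, i] for j in others])
        b_ub.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_rj >= y_rk
        A_ub.append([0.0] + [-Y[j, r] for j in others])
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
    return res.fun if res.success else np.nan

X = np.array([[2., 3.], [4., 1.], [3., 3.], [5., 2.]])   # hypothetical inputs
Y = np.array([[1.], [1.], [1.], [1.]])                   # hypothetical outputs
scores = [super_efficiency(X, Y, k) for k in range(len(X))]
print(scores)   # efficient DMUs score above 1 and can be ranked by these values
```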

Keywords: data envelopment analysis, efficiency, ranking, weight

Procedia PDF Downloads 430
24728 A Neural Network Modelling Approach for Predicting Permeability from Well Logs Data

Authors: Chico Horacio Jose Sambo

Abstract:

Neural networks have recently gained popularity for solving complex nonlinear problems. Permeability is a fundamental reservoir characteristic that is anisotropically distributed and varies in a non-linear manner, which makes its prediction from well log data well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data using a neural network approach. A multi-layer perceptron trained by the back-propagation algorithm was used to build the predictive model. Model performance was measured by the correlation coefficient, evaluated on the training, testing, validation and complete data sets. The results show that the neural network was capable of reproducing permeability accurately in all cases: the calculated correlation coefficients for training, testing and validation were 0.96273, 0.89991 and 0.87858, respectively. The generalization of the results to other fields can be assessed after examining new data, and a regional study might make it possible to characterize reservoir properties with inexpensive and quickly constructed models.
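
A minimal sketch of the described workflow, assuming a generic multi-layer perceptron regressor and synthetic stand-in well-log data (the authors' logs, network architecture and training split are not reproduced here):

```python
# Illustrative sketch (assumed setup, not the authors' exact network or data):
# a multi-layer perceptron regressor mapping well-log curves to permeability.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical well-log features: gamma ray, porosity, density, resistivity
X = rng.normal(size=(500, 4))
perm = np.exp(0.8 * X[:, 1] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=500))  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, perm, test_size=0.3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

# Correlation coefficient between predicted and measured permeability,
# the same performance measure reported in the abstract.
r_test = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"test correlation coefficient: {r_test:.3f}")
```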

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 366
24727 Empowering a New Frontier in Heart Disease Detection: Unleashing Quantum Machine Learning

Authors: Sadia Nasrin Tisha, Mushfika Sharmin Rahman, Javier Orduz

Abstract:

Machine learning is applied in a variety of fields throughout the world, and the healthcare sector has benefited enormously from it. One of the most effective approaches for predicting human heart disease is to use machine learning to classify patient data and predict the outcome as a classification. However, with the rapid advancement of quantum technology, quantum computing has emerged as a potential game-changer for many applications. Quantum algorithms have the potential to execute substantially faster than their classical equivalents, which can lead to significant improvements in computational performance and efficiency. In this study, we applied quantum machine learning concepts to predict coronary heart disease from text data. We ran three experiments with three different feature sets. The data set consisted of 100 data points. We pursue a comparative analysis of the classical and quantum approaches, highlighting the potential benefits of quantum machine learning for predicting heart disease.
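
The QSVM approach keeps the standard kernel-SVM pipeline and only replaces the kernel with one evaluated by a quantum feature map. The sketch below shows that pipeline with a classical RBF kernel standing in for the quantum kernel; the data are a synthetic stand-in for the 100-point heart-disease set.

```python
# Minimal sketch of the kernel-SVM workflow that QSVM follows: a quantum feature
# map would supply the kernel matrix; here a classical RBF kernel stands in for it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# With a precomputed kernel, swapping in a quantum kernel matrix (e.g. from a
# fidelity-based feature map) requires no other change to the pipeline.
K_tr = rbf_kernel(X_tr, X_tr)
K_te = rbf_kernel(X_te, X_tr)

clf = SVC(kernel="precomputed").fit(K_tr, y_tr)
print("test accuracy:", clf.score(K_te, y_te))
```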

Keywords: quantum machine learning, SVM, QSVM, matrix product state

Procedia PDF Downloads 62
24726 Flow-Oriented Incentive Spirometry in the Reversal of Diaphragmatic Dysfunction in Bariatric Surgery Postoperative Period

Authors: Eli Maria Forti-Pazzianotto, Carolina Moraes Da Costa, Daniela Faleiros Berteli Merino, Maura Rigoldi Simões Da Rocha, Irineu Rasera-Junior

Abstract:

There is no conclusive evidence to support the use of one type or brand of incentive spirometer over others; the decision as to which equipment is best has been based on empirical assessment of patient acceptance, ease of use, and cost. The aim was to evaluate the effects of two breathing-exercise methodologies, performed with flow-oriented incentive spirometers, on the reversal of diaphragmatic dysfunction in the postoperative period of bariatric surgery. Thirty-eight morbidly obese women were selected. Respiratory muscle strength was evaluated through nasal inspiratory pressure (NIP), and respiratory muscle endurance through an incremental test measuring sustained maximal inspiratory pressure (SMIP). Participants were randomized into 2 groups: 1- Respiron® Classic (RC), in which inspirations were slow, deep and sustained for as long as possible (5 sec); 2- Respiron® Athletic1 (RA1), in which inspirations were explosive, quick and intense, raising the balls explosively. Six sets of 15 repetitions with intervals of 30 to 60 seconds were performed in both groups. At the end of the intervention program (second PO), the volunteers were reevaluated. The groups were homogeneous with regard to the initial assessment. However, on reevaluation there was a significant decline in NIP (p < 0.0001) and SMIP (p = 0.0004) in the RC group, whereas in the RA1 group SMIP was maintained (p = 0.5076) after surgery. The use of the Respiron Athletic 1, together with the application methodology used, can contribute positively to preserving inspiratory muscle endurance and improving diaphragmatic dysfunction in the postoperative period.

Keywords: bariatric surgery, incentive spirometry, respiratory muscle, physiotherapy

Procedia PDF Downloads 344
24725 Exercise in Extreme Conditions: Leg Cooling and Fat/Carbohydrate Utilization

Authors: Anastasios Rodis

Abstract:

Background: Case studies of walkers, climbers, and campers exposed to cold and wet conditions without limb water/windproof protection revealed experiences of muscle weakness and fatigue. It is reasonable to assume that part of the fatigue could be due to an alteration in substrate utilization, since the reduction of performance in extreme cold conditions may partially be explained by higher anaerobic glycolysis, reflecting higher carbohydrate oxidation and an increased accumulation rate of blood lactate. The aim of this study was to assess the effects of pre-exercise lower limb cooling on the substrate utilization rate during sub-maximal exercise. Method: Six male university students (mean (SD): age, 21.3 (1.0) yr; maximal oxygen uptake (VO₂ max), 49.6 (3.6) ml.min⁻¹; and percentage of body fat, 13.6 (2.5) %) were examined in random order either after 30 min of cold water (12°C) immersion up to the gluteal fold, used as the cooling strategy, or under control conditions (no precooling), with tests separated by a minimum of 7 days. Exercise consisted of 60 min of cycling at 50% VO₂ max in a thermoneutral environment of 20°C. Subjects were also required to keep a diet diary over the 24 hrs prior to each trial. Means (SD) for the three macronutrients during the day prior to each trial (expressed as a percentage of total energy) were 52 (3)% carbohydrate, 31 (4)% fat, and 17 (2)% protein. Results: The responses to lower limb cooling relative to the control trial during exercise were as follows: 1) carbohydrate (CHO) oxidation and blood lactate (Bₗₐc) concentration were significantly higher (P < 0.05); 2) rectal temperature (Tᵣₑc) was significantly higher (P < 0.05), but skin temperature was significantly lower (P < 0.05); no significant differences were found in blood glucose (Bg), heart rate (HR) or oxygen consumption (VO₂). Discussion: These data suggest that lower limb cooling prior to submaximal exercise shifts metabolism from fat oxidation to CHO oxidation. This shift probably has important implications in a survival scenario, since people facing accidental localized cooling of their limbs, either through wading or falling into cold water or snow, have to rely on CHO availability even if they do not perform high-intensity activity.

Keywords: exercise in wet conditions, leg cooling, outdoors exercise, substrate utilization

Procedia PDF Downloads 411
24724 Long-Term Foam Roll Intervention Study of the Effects on Muscle Performance and Flexibility

Authors: T. Poppendieker

Abstract:

A new and innovative tool for self-myofascial release is widely and increasingly used among athletes of various sports. The application of the foam roll is suggested to improve muscle performance and flexibility. Attempts to examine acute and longer-term effects on either have been conducted over the past ten years; however, the results for muscle performance have been inconsistent. It is suggested that regular use over a long period of time results in a different outcome that improves muscle performance. This study examines the long-term effects of regular foam rolling combined with a short plyometric routine versus the same plyometric routine alone on muscle performance and flexibility over a period of six weeks. Results of the counter movement jump (CMJ), squat jump (SJ), and isometric maximal force (IMF) of a 90° horizontal squat in a leg press will serve as parameters for muscle performance. Data on the range of motion (ROM) in the sit-and-reach test will be used as the parameter for the flexibility assessment. Muscle activation will be measured throughout all tests. Twenty male and twenty female members of a Frankfurt area fitness center chain (7.11) with an average age of 25 years will be recruited. Women and men will be randomly assigned to a foam roll (FR) group and a control group. All participants will practice their assigned routine three times a week over the period of six weeks. Tests of CMJ, SJ, IMF, and ROM will be taken before and after the intervention period. The statistics software SPSS 22 will be used to analyze the data of CMJ, SJ, IMF, and ROM, under consideration of muscle activation, with a 2 x 2 x 2 (time of measurement x gender x group) analysis of variance with repeated measures and dependent t-test analyses of pre- and post-test. The alpha level for statistical significance will be set at p ≤ 0.05. It is hypothesized that a significant gender-based difference in outcome will be observed in all four tests. It is further hypothesized that both groups may show significant improvements in their performance in the CMJ and SJ after the six-week period; however, the FR group is hypothesized to achieve a greater improvement in the two jump tests. Moreover, the FR group may increase IMF as well as flexibility, whereas the control group may not show similar progress. The results of this study are important for the understanding of the long-term effects of regular foam roll application. The collected information on the matter may help to motivate the incorporation of foam rolling into training routines in order to improve athletic performance.

Keywords: counter movement jump, foam rolling, isometric maximal force, long term effects, self-myofascial release, squat jump

Procedia PDF Downloads 269
24723 A Probabilistic View of the Spatial Pooler in Hierarchical Temporal Memory

Authors: Mackenzie Leake, Liyu Xia, Kamil Rocki, Wayne Imaino

Abstract:

In the Hierarchical Temporal Memory (HTM) paradigm, the effect of overlap between inputs on the activation of columns in the spatial pooler is studied. Numerical results suggest that similar inputs are represented by similar sets of columns and dissimilar inputs are represented by dissimilar sets of columns. It is shown that the spatial pooler produces these results under certain conditions on the connectivity and proximal thresholds. Following a discussion of how the threshold parameters are initialized, corresponding qualitative arguments about the learning dynamics of the spatial pooler are presented.
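
A much-simplified sketch of the spatial-pooler step discussed here is shown below; the parameter names and sizes are illustrative and not taken from the paper.

```python
# Each column's overlap with a binary input is the count of its connected synapses on
# active input bits, and the columns with the largest overlaps become the active set.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns = 128, 64
connectivity = 0.3          # fraction of input bits each column connects to
proximal_threshold = 3      # minimum overlap for a column to compete
n_active = 8                # number of winning columns (global inhibition)

# Binary connectivity matrix: column c is connected to input bit i if the entry is 1.
connected = (rng.random((n_columns, n_inputs)) < connectivity).astype(int)

def spatial_pool(input_bits):
    overlaps = connected @ input_bits            # overlap score per column
    overlaps[overlaps < proximal_threshold] = 0  # enforce the proximal threshold
    return np.argsort(overlaps)[-n_active:]      # indices of active columns

a = (rng.random(n_inputs) < 0.1).astype(int)     # a sparse binary input
b = a.copy(); b[:10] ^= 1                        # a similar input (small perturbation)
cols_a, cols_b = set(spatial_pool(a)), set(spatial_pool(b))
print("shared active columns:", len(cols_a & cols_b))  # similar inputs -> similar columns
```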

Keywords: hierarchical temporal memory, HTM, learning algorithms, machine learning, spatial pooler

Procedia PDF Downloads 314
24722 Optimizing Electric Vehicle Charging with Charging Data Analytics

Authors: Tayyibah Khanam, Mohammad Saad Alam, Sanchari Deb, Yasser Rafat

Abstract:

Electric vehicles are considered viable replacements for gasoline cars, since they help in reducing harmful emissions and stimulate power generation through renewable energy sources, hence contributing to sustainability. However, one of the significant obstacles to the mass deployment of electric vehicles is charging time anxiety among users and, thus, the subsequently long waiting times for available chargers at charging stations. Data analytics, on the other hand, has revolutionized decision-making tasks in management and operating systems since its arrival. In this paper, we attempt to optimize the choice of EV charging stations for users in their vicinity by minimizing the time taken to reach the charging station and the waiting time for an available charger. The time taken to travel to the charging station is calculated with the Google Maps API, and the waiting times are predicted by polynomial regression on the stored historical data. The proposed framework utilizes real-time data and historical data from all operating charging stations in the city and assists the user in finding the most suitable charging station for their current situation; it can be implemented in a mobile phone application. The algorithm successfully predicts the most optimal choice of charging station and the minimum required time for various sample data sets.
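
An illustrative sketch of the selection logic follows; the station names, travel times and waiting-time history are hypothetical, and the polynomial regression stands in for the framework's trained model.

```python
# Waiting time at each station is predicted by a polynomial regression on time of day,
# and the station minimising travel time + predicted waiting time is recommended.
import numpy as np

rng = np.random.default_rng(1)
# Travel times in minutes to three hypothetical stations (would come from a maps API).
stations = {"Station A": 12.0, "Station B": 7.0, "Station C": 18.0}

# Hypothetical history per station: hour of day vs. observed waiting time (minutes).
history = {}
for name in stations:
    hours = rng.uniform(0, 24, 200)
    waits = 5 + 10 * np.sin(hours / 24 * np.pi) + rng.normal(0, 2, 200)
    history[name] = (hours, waits)

def predict_wait(hours, waits, query_hour, degree=3):
    coeffs = np.polyfit(hours, waits, degree)      # polynomial regression on history
    return float(np.polyval(coeffs, query_hour))

now = 17.5                                          # query time: 17:30
total = {name: travel + predict_wait(*history[name], now)
         for name, travel in stations.items()}
best = min(total, key=total.get)
print(f"recommended: {best} ({total[best]:.1f} minutes travel + predicted wait)")
```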

Keywords: charging data, electric vehicles, machine learning, waiting times

Procedia PDF Downloads 152
24721 The Phenomena of False Cognates and Deceptive Cognates: Issues to Foreign Language Learning and Teaching Methodology Based on Set Theory

Authors: Marilei Amadeu Sabino

Abstract:

The aim of this study is to establish differences between the terms ‘false cognates’, ‘false friends’ and ‘deceptive cognates’, usually considered to be synonyms. It will be shown that they are not synonyms, since they do not designate the same linguistic process or phenomenon. Despite their differences in meaning, many pairs of formally similar words in two (or more) different languages are true cognates, although they are usually known as ‘false’ cognates – such as, for instance, the English and Italian lexical items ‘assist x assistere’; ‘attend x attendere’; ‘argument x argomento’; ‘apology x apologia’; ‘camera x camera’; ‘cucumber x cocomero’; ‘fabric x fabbrica’; ‘factory x fattoria’; ‘firm x firma’; ‘journal x giornale’; ‘library x libreria’; ‘magazine x magazzino’; ‘parent x parente’; ‘preservative x preservativo’; ‘pretend x pretendere’; ‘vacancy x vacanza’, to name but a few examples. Thus, one of the theoretical objectives of this paper is, firstly, to elaborate definitions establishing a distinction between the words that are definitely ‘false cognates’ (derived from different etyma) and those that are just ‘deceptive cognates’ (derived from the same etymon). Secondly, based on Set Theory and on the concepts of equal sets, subsets, intersection of sets and disjoint sets, this study is intended to elaborate some theoretical and practical questions that will be useful in identifying more precisely the similarities and differences between cognate words of different languages, and according to the graphic interpretation of sets it will be possible to classify them and provide discernment about the processes of semantic change. Therefore, these issues might be helpful not only to the learning of second and foreign languages, but they could also give insights into foreign and second language teaching methodology. Acknowledgements: FAPESP – São Paulo State Research Support Foundation – for the financial support offered (proc. n° 2017/02064-7).
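
A toy illustration of the set-theoretic comparison described above, using hypothetical, simplified sense inventories rather than the study's data:

```python
# Comparing the meaning sets of a word pair classifies it as equal sets, subset,
# intersecting sets or disjoint sets, mirroring the set-based typology discussed.
def classify(senses_l1: set, senses_l2: set) -> str:
    if senses_l1 == senses_l2:
        return "equal sets (same meanings)"
    if senses_l1 <= senses_l2 or senses_l2 <= senses_l1:
        return "subset (one meaning range contains the other)"
    if senses_l1 & senses_l2:
        return "intersecting sets (some shared meanings)"
    return "disjoint sets (no shared meaning)"

# 'library' (EN) vs 'libreria' (IT): hypothetical, simplified sense sets.
english_library = {"building holding book collections"}
italian_libreria = {"bookshop", "bookcase"}
print(classify(english_library, italian_libreria))   # disjoint sets (no shared meaning)
```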

Keywords: deceptive cognates, false cognates, foreign language learning, teaching methodology

Procedia PDF Downloads 308
24720 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part of Speech (POS) Tagging has always been a challenging task in the era of Natural Language Processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. From the annotated Nepali corpus, training and testing data sets are randomly separated, and both methods are employed on these data sets. The Viterbi algorithm is found to be computationally faster and more accurate than the HMM baseline, achieving an accuracy of 95.43%. An error analysis of where the mismatches occurred is discussed in detail.
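Part of Speech Tagging has always been a challenging task in the era of Natural Language Processing. This article presents POS tagging for Nepali text using Hidden Markov Model and Viterbi algorithm. From the Nepali text, annotated corpus training and testing data set are randomly separated. Both methods are employed on the data sets. Viterbi algorithm is found to be computationally faster and accurate as compared to HMM. The accuracy of 95.43% is achieved using Viterbi algorithm. Error analysis where the mismatches took place is elaborately discussed.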

Keywords: hidden markov model, natural language processing, POS tagging, viterbi algorithm

Procedia PDF Downloads 303
24719 Interrelationship between Quadriceps' Activation and Inhibition as a Function of Knee-Joint Angle and Muscle Length: A Torque and Electro and Mechanomyographic Investigation

Authors: Ronald Croce, Timothy Quinn, John Miller

Abstract:

Incomplete activation, or activation failure, of motor units during maximal voluntary contractions is often referred to as muscle inhibition (MI) and is defined as the inability of the central nervous system to maximally drive a muscle during a voluntary contraction. The purpose of the present study was to assess the interrelationship among peak torque (PT), muscle inhibition (MI; incomplete activation of motor units), and voluntary muscle activation (VMA) of the quadriceps muscle group as a function of knee angle and muscle length during maximal voluntary isometric contractions (MVICs). Nine young adult males (mean ± standard deviation: age: 21.58 ± 1.30 years; height: 180.07 ± 4.99 cm; weight: 89.07 ± 7.55 kg) performed MVICs in random order with the knee at 15, 55, and 95° of flexion. MI was assessed using the interpolated twitch technique and was estimated by the amount of additional knee extensor PT evoked by the superimposed twitch during MVICs. Voluntary muscle activation was estimated by root mean square amplitude electromyography (EMGrms) and mechanomyography (MMGrms) of agonist (vastus medialis [VM], vastus lateralis [VL], and rectus femoris [RF]) and antagonist (biceps femoris [BF]) muscles during MVICs. Data were analyzed using separate repeated measures analyses of variance. Results revealed a strong dependency of quadriceps PT (p < 0.001), MI (p < 0.001) and VMA (p < 0.01) on knee joint position: PT was smallest at the most shortened muscle position (15°) and greatest at mid-position (55°); MI and VMA were smallest at the most shortened muscle position (15°) and greatest at the most lengthened position (95°), with the RF showing the greatest change in VMA. It is hypothesized that the ability to more fully activate the quadriceps at short compared to longer muscle lengths (96% activation at 15°; 91% at 55°; 90% at 95°) might partly compensate for the unfavorable force-length mechanics at the more extended position and the consequent declines in VMA (decreases in EMGrms and MMGrms muscle amplitude during MVICs) and force production (PT = 111 N·m at 15°, 217 N·m at 55°, 199 N·m at 95°). Biceps femoris EMG and MMG data showed no statistical differences (p = 0.11 and 0.12, respectively) at the joint angles tested, although values were greater at the extended position. Increased BF muscle amplitude at this position could be a mechanism by which the anterior shear and tibial rotation induced by high quadriceps activity are countered. Measuring and understanding the degree of MI and VMA in the quadriceps muscle has particular clinical relevance because different knee-joint disorders, such as ligament injuries or osteoarthritis, increase the levels of MI observed and markedly reduce the capability for full VMA.

Keywords: electromyography, interpolated twitch technique, mechanomyography, muscle activation, muscle inhibition

Procedia PDF Downloads 312
24718 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02 with a resolution of 2x2 meters and DEM05 with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, the same process can be applied to compare the similarity of different DEM data sets (e.g. the DEM05 versus the DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature, etc.) and to more than three variables.
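
A rough sketch of the single-test idea is given below on synthetic rasters (not the IGN DEM02/DEM05 products). The paper tests the convolution of the distribution functions; as a simple stand-in, the sketch sums standardised per-cell variables, which corresponds to that convolution only under an independence assumption, and then applies one two-sample Kolmogorov-Smirnov test.

```python
# Sample co-located cells from two DEMs, combine the variables of interest into a
# single quantity, and compare the combined samples with a single KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for reference and product DEMs: elevation and slope samples.
elev_ref = rng.normal(800, 120, n)
elev_prod = elev_ref + rng.normal(0, 1.5, n)           # product with small elevation noise
slope_ref = np.abs(rng.normal(10, 4, n))
slope_prod = np.abs(slope_ref + rng.normal(0, 0.8, n))

def standardise(x):
    return (x - x.mean()) / x.std()

joint_ref = standardise(elev_ref) + standardise(slope_ref)
joint_prod = standardise(elev_prod) + standardise(slope_prod)

res = ks_2samp(joint_ref, joint_prod)
print(f"KS statistic = {res.statistic:.4f}, p-value = {res.pvalue:.3f}")  # small p -> DEMs differ
```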

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 88
24717 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, we tried to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial immune system based artificial neural network (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with regard to ANN and AIS. For this purpose, RR-interval data were obtained for normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF). These data were arranged as pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), the discrete wavelet transform was applied to each of the two groups in a pair, and after data reduction two different data sets with 9 and 27 features were obtained. Afterwards, the data were first shuffled randomly, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and training times were compared. As a result, the performances of the hybrid classification systems AIS-ANN and PSO-ANN were seen to be close to the performance of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training times, ANN was followed by PSO-ANN, AIS-ANN and AIS, respectively. The features extracted from the data also affected the classification results significantly.
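
An illustrative sketch of the feature-extraction and evaluation pipeline described above follows; the RR-interval series are synthetic stand-ins, not the MIT-BIH records, and the classifier is a generic neural network rather than the hybrid systems studied.

```python
# Discrete wavelet transform features + a neural-network classifier, evaluated with
# 4-fold cross-validation, on a synthetic two-class (NSR vs. arrhythmia) data set.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def wavelet_features(rr_segment, wavelet="db4", level=3):
    """Energy of each wavelet sub-band of an RR-interval segment."""
    coeffs = pywt.wavedec(rr_segment, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

segments, labels = [], []
for label, (mean, jitter) in enumerate([(0.8, 0.02), (0.6, 0.15)]):
    for _ in range(100):
        segments.append(rng.normal(mean, jitter, 64))   # 64 RR intervals per segment
        labels.append(label)
X = np.array([wavelet_features(s) for s in segments])
y = np.array(labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=4)                # 4-fold cross-validation
print("fold accuracies:", np.round(scores, 3))
```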

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO

Procedia PDF Downloads 410
24716 An Ab Initio Molecular Orbital Theory and Density Functional Theory Study of Fluorous 1,3-Dione Compounds

Authors: S. Ghammamy, M. Mirzaabdollahiha

Abstract:

Quantum mechanical calculations of the energies, geometries, and vibrational wavenumbers of fluorous 1,3-dione compounds were carried out using the density functional theory (DFT/B3LYP) method with the LANL2DZ basis set. The calculated HOMO and LUMO energies show that charge transfer occurs in the molecules. The thermodynamic functions of the fluorous 1,3-dione compounds have also been computed at the B3LYP/LANL2DZ level. The theoretical spectrograms for the F NMR spectra of the fluorous 1,3-dione compounds have also been constructed. The F NMR nuclear shieldings of the fluoride ligands in the fluorous 1,3-dione compounds have been studied quantum chemically.

Keywords: density functional theory, natural bond orbital, HOMO, LUMO, fluorous

Procedia PDF Downloads 360
24715 Efficiency, Effectiveness, and Technological Change in Armed Forces: Indonesian Case

Authors: Citra Pertiwi, Muhammad Fikruzzaman Rahawarin

Abstract:

The Government of Indonesia has committed to increasing its national defense budget to 1.5 percent of GDP. However, a larger budget is not necessarily allocated efficiently and effectively. Using Data Envelopment Analysis (DEA), the operational units of the Indonesian Armed Forces are used as a proxy to measure these two aspects. The bootstrap technique is used as well to reduce uncertainty in the estimation. Additionally, technological change is measured as a nonstationary component. Nearly half of the units are estimated to be fully efficient, and less than a third are considered effective. Longer and larger sets of data might increase the robustness of the estimation in the future.

Keywords: bootstrap, effectiveness, efficiency, DEA, military, Malmquist, technological change

Procedia PDF Downloads 278
24714 SQL Generator Based on MVC Pattern

Authors: Chanchai Supaartagorn

Abstract:

Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is a simple and powerful language, most novice users have trouble with SQL syntax. Thus, we present an SQL generator tool that is capable of translating user actions and displaying the SQL commands and the resulting data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce the complexity of the architectural design and to increase the flexibility and reuse of code. In addition, we use white-box testing for code verification in the Model module.
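
A simplified sketch of the MVC separation described above is shown below; the class and method names are illustrative and are not taken from the paper's tool.

```python
# The Model builds the SQL string, the View displays it, and the Controller
# translates a user action into calls on both.
class QueryModel:
    """Model: turns structured parameters into an SQL SELECT statement."""
    def build_select(self, table, columns, condition=None):
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        if condition:
            sql += f" WHERE {condition}"
        return sql + ";"

class QueryView:
    """View: presents the generated SQL (and, in the real tool, the result set)."""
    def show(self, sql):
        print("Generated SQL:", sql)

class QueryController:
    """Controller: receives user actions and coordinates the Model and the View."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_user_action(self, table, columns, condition=None):
        self.view.show(self.model.build_select(table, columns, condition))

controller = QueryController(QueryModel(), QueryView())
controller.on_user_action("students", ["name", "grade"], "grade >= 80")
# Generated SQL: SELECT name, grade FROM students WHERE grade >= 80;
```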

Keywords: MVC, relational database, SQL, White-Box testing

Procedia PDF Downloads 400
24713 Effect of Pollution and Ethylene-Diurea on Bean Plants Grown in KSA

Authors: Abdel Rahman A. Alzandi

Abstract:

The primary objectives of this investigation were to examine the interactive effects of three air quality treatments, ethylene-diurea (EDU) and two irrigation conditions on physiological characteristics of kidney beans (Phaseolus vulgaris L.) over their whole growth period. The plants were grown in 12 open-top chambers (OTCs). Ethylene-diurea (EDU) was used as a factor to evaluate the impact of O3 pollution on plant growth. The air quality treatments consisted of charcoal-filtered (CF) air, non-filtered (NF) air and ambient air (AA), each under irrigated and non-irrigated conditions. Leaf samples were collected from upper canopy positions six times (before EDU addition, a week after each of the four EDU additions, and at the time of harvesting). Maximal differences in leaf carbohydrate, N contents, pigments and total lipids were observed in response to moisture conditions in the presence and absence of EDU applications. Significant reductions were noted for the air quality treatments regarding carbohydrate and pigment fractions, but not in all cases for leaf N and lipid contents under O3 effects only. Minimal differences were found for the first EDU application, while maximal ones were recorded at 200 mg l-1. The EDU treatments stimulated carbohydrate and pigment contents at the upper canopy position, with higher levels for both NF and AA compared to untreated conditions. The NF and AA treatments caused lower total carbohydrate and pigment contents in the canopy position before harvesting in the absence of EDU applications. The stimulation of leaf carbohydrates by the EDU treatment, compared to the non-EDU-treated AA and NF treatments, provides a rational explanation for the counteracting effects of EDU against moderate exposures to O3 with regard to grain yields in C3 plants.

Keywords: leaf contents, moisture relations, EDU additions, global climate change, kidney bean

Procedia PDF Downloads 323
24712 An Exploratory Investigation into the Quality of Life of People with Multi-Drug Resistant Pulmonary Tuberculosis (MDR-PTB) Using the ICF Core Sets: A Preliminary Investigation

Authors: Shamila Manie, Soraya Maart, Ayesha Osman

Abstract:

Introduction: People diagnosed with multidrug-resistant pulmonary tuberculosis (MDR-PTB) are subjected to prolonged hospitalization in South Africa. It has thus become essential for research to shift its focus from a purely medical approach and to include social and environmental factors when looking at the impact of the disease on those affected. Aim: To explore the factors affecting individuals with multi-drug resistant pulmonary tuberculosis during long-term hospitalization, using the comprehensive ICF core sets for obstructive pulmonary disease (OPD) and cardiopulmonary (CPR) conditions, at Brooklyn Chest Hospital (BCH). Methods: A quantitative, descriptive, cross-sectional study design was utilized. A convenience sample of 19 adults at Brooklyn Chest Hospital was interviewed. Results: Most participants reported a decrease in exercise tolerance levels (b455: n=11); however, it did not limit participation. Participants reported that a lack of privacy in the environment (e155) was a barrier to health. The presence of health professionals (e355) and the provision of skills development services (e585) are facilitators of health and well-being. No differences existed in the functional ability of HIV-positive and HIV-negative participants in this sample. Conclusion: The ICF core sets appeared valid in identifying the barriers and facilitators experienced by individuals with MDR-PTB admitted to BCH. The hospital environment must be improved to add to the QoL of those admitted, especially by improving privacy within the wards. Although the social grant is seen as a facilitator, greater emphasis must be placed on preparing individuals to be economically active in the labour market when they are discharged.

Keywords: multidrug resistant tuberculosis, MDR ICF core sets, health-related quality of life (HRQoL), hospitalization

Procedia PDF Downloads 316
24711 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of the athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled using a custom Python (version 3.2) script with a mono-exponential function, similar to previous work. To determine whether the on-ice trials achieved a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. On-ice trials consistently showed higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. Therefore, these findings suggest that a greater on-ice sprint distance, or alternative velocity modeling techniques that do not require maximal velocity, may be needed for a complete profile.
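
A sketch of the mono-exponential velocity model commonly used in HFVP is shown below on synthetic radar data; the authors' own script and parameter values are not reproduced here.

```python
# v(t) = v_max * (1 - exp(-t / tau)), fitted to instantaneous velocity samples; the
# model acceleration at the end of the sprint indicates whether a plateau was reached.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, v_max, tau):
    return v_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0, 6, 120)                                       # 6 s sprint, radar samples
v = mono_exponential(t, 9.0, 1.3) + rng.normal(0, 0.15, t.size)  # synthetic velocity trace

(v_max, tau), _ = curve_fit(mono_exponential, t, v, p0=[8.0, 1.0])
accel = (v_max / tau) * np.exp(-t / tau)                         # model acceleration dv/dt
print(f"v_max = {v_max:.2f} m/s, tau = {tau:.2f} s, "
      f"final acceleration = {accel[-1]:.3f} m/s^2")             # near zero -> velocity plateau
```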

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 60
24710 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters with these sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced data set problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class imbalance and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one classifier in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling problem domains such as credit scoring, customer churn prediction, and financial distress that typically involve imbalanced data sets.
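
A rough sketch of cluster-aware oversampling follows; it is a simplification of the proposed method (the Lowner-John ellipsoid step and the complexity-based sample allocation are omitted) on a synthetic imbalanced data set.

```python
# Model-based clustering finds minority sub-clusters, and each sub-cluster is
# oversampled (SMOTE-like interpolation) until the minority class matches the majority.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture

X, y = make_classification(n_samples=600, n_features=2, n_informative=2, n_redundant=0,
                           weights=[0.9, 0.1], n_clusters_per_class=2, random_state=0)
X_min, X_maj = X[y == 1], X[y == 0]

gmm = GaussianMixture(n_components=2, random_state=0).fit(X_min)   # minority sub-clusters
labels = gmm.predict(X_min)

rng = np.random.default_rng(0)
needed = len(X_maj) - len(X_min)
synthetic = []
for c in np.unique(labels):
    cluster = X_min[labels == c]
    n_new = needed // len(np.unique(labels))
    # Interpolate between random pairs of points within the same sub-cluster.
    a = cluster[rng.integers(0, len(cluster), n_new)]
    b = cluster[rng.integers(0, len(cluster), n_new)]
    synthetic.append(a + rng.random((n_new, 1)) * (b - a))

X_bal = np.vstack([X, *synthetic])
y_bal = np.concatenate([y, np.ones(sum(len(s) for s in synthetic), dtype=int)])
print("before:", np.bincount(y), "after:", np.bincount(y_bal))
```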

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 390
24709 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia

Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: A number of positive effects on the player's physical status, including the body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. Players' anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players with an age span from 9 to 14 years was divided into two groups differentiated by age. The younger group consisted of 25 boys under 11 years (mean age 10.2) and the second group consisted of 27 boys with a mean age of 12.64. Method: The set of basic anthropometric parameters was analyzed: height, weight, BMI (Body Mass Index) and body mass components. Maximal oxygen uptake was tested using the Bruce treadmill protocol. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height = 143.39 cm, average weight = 44.27 kg, BMI = 18.77, Err = 5.04, Hb = 13.78 g/l, VO2 = 37.72 mlO2/kg. The average values for the participants aged 12 to 14 years were as follows: height = 163.7 cm, weight = 56.3 kg, BMI = 19.6, VO2 = 39.52 ml/kg, Err = 5.01, Hb = 14.3 g/l. Conclusion: Physiological parameters (maximal oxygen uptake, erythrocytes and Hb) were insignificantly higher in the older group compared to the younger group. There were no statistically significant differences in the analyzed anthropometric parameters between the two groups except for the basic measurements (height and weight).

Keywords: body composition, young soccer players, BMI, physical status

Procedia PDF Downloads 375
24708 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is the basic step for improving data quality for further use in exploration and development in the oil and gas industry. The signal-to-noise ratio of the data also largely determines the quality of seismic data; this factor affects the reliability as well as the accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information in the seismic signal, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with F-X deconvolution and the non-local means denoising algorithm.
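
As a baseline illustration only, the sketch below applies standard integer-order total-variation denoising to a synthetic seismic-like section, to show the role a TV-type regularizer plays; the paper's anisotropic fractional-order model and its split Bregman solver are not implemented here.

```python
# Standard TV denoising on a synthetic "dipping reflector" section, with SNR before/after.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
# Synthetic section: a dipping reflector pulse repeated across 128 traces.
clean = np.array([np.exp(-((t - 0.3 - 0.001 * k) ** 2) / 2e-4) for k in range(128)])
noisy = clean + rng.normal(0, 0.3, clean.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.2)

def snr(signal, estimate):
    return 10 * np.log10(np.sum(signal ** 2) / np.sum((signal - estimate) ** 2))

print(f"SNR noisy: {snr(clean, noisy):.1f} dB, SNR denoised: {snr(clean, denoised):.1f} dB")
```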

Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm

Procedia PDF Downloads 185
24707 EDM for Prediction of Academic Trends and Patterns

Authors: Trupti Diwan

Abstract:

Predicting student failure at school has become a difficult challenge due both to the large number of factors that can affect students' reduced performance and to the imbalanced nature of these kinds of data sets. This paper surveys the two elements needed to predict students' academic performance: parameters and methods. This paper also proposes a framework for predicting the performance of engineering students. Genetic programming can be used to predict student failure/success, and a ranking algorithm is used to rank students according to their credit points. The framework can be used as a basis for system implementation and the prediction of students' academic performance in higher learning institutes.

Keywords: classification, educational data mining, student failure, grammar-based genetic programming

Procedia PDF Downloads 399
24706 Impact of Zinc on Heavy Metals Content, Polyphenols and Antioxidant Capacity of Faba Bean in Milk Ripeness

Authors: M. Timoracká, A. Vollmannová., D.S. Ismael, J. Musilová

Abstract:

We investigated the effect of soil deliberately contaminated with Zn under model conditions. The soil used in the pot trial was initially uncontaminated. Faba beans (cvs Saturn, Zobor) were harvested at milk ripeness. With increasing doses applied to the soil, a strong statistical relationship between soil Zn content and the Zn amount in the seeds of both faba bean cultivars was confirmed. Despite the high Zn doses applied to the soil under model conditions, in all variants the determined Zn amount in faba bean cv. Saturn was just below the maximal allowed content in foodstuffs given by legislation. In cv. Zobor the determined Zn content was higher than the maximal allowed amount (by 2% and 12%, respectively). Faba bean cvs. Saturn and Zobor accumulated high amounts of Pb and Cd (above hygienic limits in all variants). The contents of all other heavy metals were lower than the hygienic limits. With increasing Zn doses applied to the soil, the total polyphenol contents as well as the total antioxidant capacity determined in the seeds of both cultivars Saturn and Zobor increased. A strong statistical relationship between soil Zn content and both the total polyphenol content and the total antioxidant capacity in the seeds of the faba bean cultivars was confirmed.

Keywords: antioxidant capacity, faba bean, polyphenols, zinc

Procedia PDF Downloads 371
24705 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model

Authors: Fatemah A. Alqallaf, Debasis Kundu

Abstract:

The aim of this paper is to introduce a bivariate inverse generalized exponential distribution which has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution, and it can be used quite effectively for analyzing various bivariate data sets. Several dependency properties and dependency measures have been obtained. The maximum likelihood estimators cannot be obtained in closed form, and obtaining them involves solving a four-dimensional optimization problem. To avoid that, we propose the use of an EM algorithm, which involves solving only one non-linear equation at each E-step. Hence, the implementation of the proposed EM algorithm is very straightforward in practice. Extensive simulation experiments have been performed, and one data set has been analyzed to show the effectiveness of the proposed model. We have observed that the proposed bivariate inverse generalized exponential distribution can be used for modeling dependent competing risks data.

Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators

Procedia PDF Downloads 110
24704 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We use a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data that can predict future traffic speed would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting the k-nearest neighbor (k-NN) method as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and down-stream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
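
An illustrative k-NN speed predictor follows; the speed records are synthetic rather than the city database described, and a single link's own lagged speeds serve as the feature set. It shows how restricting the neighbor search to recent history shrinks the data searched at prediction time.

```python
# k-NN regression of the next five-minute speed from the last few observations,
# with a configurable amount of history available for the neighbor search.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
steps = 5000                                    # five-minute intervals
t = np.arange(steps)
speed = 50 + 15 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, steps)  # daily pattern

lags = 4                                        # use the last 4 observations as features
X = np.column_stack([speed[i:steps - lags + i] for i in range(lags)])
y = speed[lags:]

def knn_predict(history_len, x_query, k=5):
    """Search neighbors only within the most recent `history_len` records."""
    model = KNeighborsRegressor(n_neighbors=k)
    model.fit(X[-history_len:], y[-history_len:])
    return model.predict([x_query])[0]

x_now = speed[-lags:]                           # the latest observed speeds
print("full history :", round(knn_predict(len(X), x_now), 1), "km/h")
print("recent only  :", round(knn_predict(1000, x_now), 1), "km/h")
```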

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 330
24703 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many different studies related to cognitive and behaviour research. Modern advances in digital equipment have brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract from these files multiple features for different scopes of research in language and communication. However, tools for analysing a large set of sound files and automatically extracting relevant features from these files are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, with a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. detecting and labelling the segments that contain speech. We present a comprehensive methodology aiming to support researchers in voice segmentation as the first step in the data analysis of a large set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features over a structure of folders containing a large number of sound files. We present the validation of our methodology with a set of 5000 sound files that were collected in the daily life of a group of voluntary participants aged over 65. A smartphone device was used to collect sound using the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster when compared to a manual analysis performed by two independent coders. Furthermore, the methodology presented allows manual adjustment of voiced segments with visualisation of the sound signal, and the automatic extraction of quantitative information on speech. In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.
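
A simplified illustration of the voice-segmentation step is given below as an energy-threshold detector on a synthetic signal; the methodology above relies on Praat's voice detection instead, so this is only a stand-in for the same idea.

```python
# Frames whose short-term energy exceeds an adaptive threshold are labelled as speech,
# and consecutive voiced frames are merged into (start, end) segments in seconds.
import numpy as np

sr = 16000                                           # sample rate, Hz
rng = np.random.default_rng(0)
silence = rng.normal(0, 0.01, sr)                    # 1 s of low-level noise
speechy = rng.normal(0, 0.2, sr) * np.sin(2 * np.pi * 150 * np.arange(sr) / sr)
audio = np.concatenate([silence, speechy, silence])  # synthetic 3 s recording

frame = int(0.03 * sr)                               # 30 ms frames
energies = np.array([np.mean(audio[i:i + frame] ** 2)
                     for i in range(0, len(audio) - frame, frame)])
threshold = 4 * np.median(energies)                  # simple adaptive threshold
voiced = energies > threshold

segments, start = [], None
for i, v in enumerate(voiced):
    if v and start is None:
        start = i * frame / sr
    elif not v and start is not None:
        segments.append((start, i * frame / sr))
        start = None
if start is not None:
    segments.append((start, len(voiced) * frame / sr))
print(segments)                                      # roughly [(1.0, 2.0)]
```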

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 258
24702 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin; however, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A freely accessible database was used with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) or preterm (38 cases). Among them, EHG signals from 38 term and 38 preterm labors were preprocessed with a band-pass Butterworth filter of 0.08–4 Hz. Then, EHG signal features were extracted, comprising a classical time-domain description including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There were no significant differences in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.
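
A sketch of a few of the listed features on a synthetic EHG-like trace follows; the filter band and the feature definitions follow the abstract, while the signal and the assumed 20 Hz sampling rate are illustrative, not a recording from the database used in the study.

```python
# Band-pass preprocessing plus RMS, zero-crossing count and spectral features.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 20.0                                            # sampling rate, Hz (assumed)
rng = np.random.default_rng(0)
t = np.arange(0, 300, 1 / fs)                        # 5-minute synthetic trace
signal = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.normal(size=t.size)

# Band-pass Butterworth filter, 0.08-4 Hz, as in the preprocessing step.
b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)

rms = np.sqrt(np.mean(filtered ** 2))                      # root mean square
zero_crossings = int(np.sum(filtered[:-1] * filtered[1:] < 0))  # zero-crossing number

freqs, psd = welch(filtered, fs=fs, nperseg=1024)
cumulative = np.cumsum(psd) / np.sum(psd)
median_freq = freqs[np.searchsorted(cumulative, 0.5)]      # frequency splitting the PSD in half
peak_freq = freqs[np.argmax(psd)]
mean_freq = np.sum(freqs * psd) / np.sum(psd)

print(f"RMS={rms:.3f}, zero crossings={zero_crossings}, "
      f"peak={peak_freq:.2f} Hz, mean={mean_freq:.2f} Hz, median={median_freq:.2f} Hz")
```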

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 529
24701 Stability Assessment of Chamshir Dam Based on DEM, South West Zagros

Authors: Rezvan Khavari

Abstract:

The Zagros fold-thrust belt in SW Iran is a part of the Alpine-Himalayan system and consists of a variety of structures with different sizes and geometries. The study area is Chamshir Dam, which is located on the Zohreh River, 20 km southeast of Gachsaran City (southwest Iran). Satellite images are a valuable means available to geologists for locating geological or geomorphological features that express regional fault or fracture systems; therefore, satellite images were used for the structural analysis of the Chamshir dam area. In addition, 3D models of the area were constructed using the DEM and geological maps. Then, based on these models, all the acquired fracture trace data were integrated in a Geographic Information System (GIS) environment using ArcGIS software. Based on field investigation and the DEM model, the main structures in the area consist of the Chamshir syncline and two fault sets: the main thrust faults in the NW-SE direction and small normal faults in the NE-SW direction. There are three joint sets in the study area; two of them (J1 and J3) form the main large fractures around the Chamshir dam. These fractures are indeed consistent with the normal faults in the NE-SW direction. The third joint set, in the NW-SE direction, is normal to the others. In general, according to the topographic, geomorphological and structural geology evidence, the Chamshir dam has a potential for sliding in some parts of the Gachsaran Formation.

Keywords: DEM, chamshir dam, zohreh river, satellite images

Procedia PDF Downloads 461
24700 Cd2+ Ions Removal from Aqueous Solutions Using Alginite

Authors: Vladimír Frišták, Martin Pipíška, Juraj Lesný

Abstract:

Alginite has been evaluated as an efficient pollution control material. In this paper, alginite from the Pinciná maar (SR) was studied for the removal of Cd2+ ions from aqueous solution. The potential sorbent was characterized by X-ray fluorescence (RFA) analysis and Fourier transform infrared (FT-IR) spectral analysis, and its specific surface area (SSA) was also determined. The sorption process was optimized with respect to the effect of the initial cadmium concentration and the pH value. The Freundlich and Langmuir models were used to interpret the sorption behaviour of the Cd2+ ions, and the results showed that the experimental data were well fitted by the Langmuir equation. The maximal sorption capacity (QMAX) of alginite for Cd2+ ions calculated from the Langmuir isotherm was 34 mg/g. The sorption process was significantly affected by the initial pH value in the range 4.0-7.0. Alginite is a sorbent comparable with other materials for the removal of toxic metals.
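
An illustrative Langmuir-isotherm fit is sketched below on synthetic equilibrium data (not the study's measurements), showing how QMAX is estimated in the way described for the Cd2+ data.

```python
# Langmuir isotherm q_e = Q_max * K_L * C_e / (1 + K_L * C_e), fitted by least squares.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

rng = np.random.default_rng(0)
c_e = np.array([2, 5, 10, 20, 40, 80, 160], dtype=float)        # equilibrium conc., mg/L
q_e = langmuir(c_e, 34.0, 0.05) + rng.normal(0, 0.5, c_e.size)  # synthetic uptake, mg/g

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=[30.0, 0.1])
print(f"Q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")        # Q_max close to 34 mg/g here
```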

Keywords: alginates, Cd2+, sorption, QMAX

Procedia PDF Downloads 319