Search results for: maximal data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25280


24890 Multi-Source Data Fusion for Urban Comprehensive Management

Authors: Bolin Hua

Abstract:

In city governance, various kinds of data are involved, including city component data, demographic data, housing data and all kinds of business data. These data reflect different aspects of people, events and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Because data from different sources are collected in different ways, several issues need to be resolved. The problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data and space data. Metadata need to be referred to and read when an application accesses, manipulates or displays the data. Uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search and data delivery.
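The merge-and-deduplicate step described above can be illustrated with a short sketch. This is a minimal, hypothetical example: the source systems, field names and the "newest record wins" rule are assumptions for illustration, not the architecture of the system in the abstract.

```python
# Minimal sketch of multi-source record fusion into a central store.
# Source names, field names and the merge rule are illustrative assumptions.

from collections import defaultdict

# Hypothetical records from two departmental systems, keyed by a shared person ID.
housing_records = [{"person_id": "P001", "address": "12 West St", "updated": "2016-03-01"}]
census_records = [{"person_id": "P001", "name": "Li Wei", "updated": "2016-05-20"}]

# Metadata catalogue: which source owns which fields and when it was last loaded.
metadata = {
    "housing": {"fields": ["address"], "loaded": "2016-03-02"},
    "census": {"fields": ["name"], "loaded": "2016-05-21"},
}

def fuse(sources):
    """Merge records from all sources into one central record per entity,
    keeping the most recently updated value when fields collide."""
    central = defaultdict(dict)
    for source_name, records in sources.items():
        for rec in records:
            current = central[rec["person_id"]]
            if rec.get("updated", "") >= current.get("updated", ""):
                current.update(rec)           # newer source wins on conflicts
            else:
                for k, v in rec.items():
                    current.setdefault(k, v)  # fill gaps without overwriting
    return dict(central)

central_db = fuse({"housing": housing_records, "census": census_records})
print(central_db["P001"])   # one fused record per person
print(metadata["census"])   # metadata consulted when applications read the data
```

In a real deployment the metadata catalogue would also drive cleansing, loading and access control, as the abstract notes.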

Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data

Procedia PDF Downloads 372
24889 Adult Language Learning in the Institute of Technology Sector in the Republic of Ireland

Authors: Una Carthy

Abstract:

A recent study of third level institutions in Ireland reveals that both age and aptitude can be overcome by teaching methodologies that motivate second language learners. This PhD investigation gathered quantitative and qualitative data from 14 Institutes of Technology over a three-year period from 2011 to 2014. The fundamental research question was to establish the impact of institutional language policy on attitudes towards language learning. However, other related issues around second language acquisition arose in the course of the investigation. Data were collected from both lecturers and students, allowing interesting points of comparison to emerge from both datasets. Negative perceptions among lecturers regarding language provision were often associated with the view that language learning belongs to primary and secondary level and has no place in third level education. This perception was offset by substantial data showing positive attitudes towards adult language learning. Lenneberg’s Critical Age Theory postulated that the optimum age for learning a second language is before puberty. More recently, scholars have challenged this theory in their studies, revealing that mature learners can and do succeed at learning languages. With regard to aptitude, a preoccupation among lecturers regarding poor literacy skills among students emerged and was often associated with resistance to second language acquisition. This was offset by a preponderance of qualitative data from students highlighting the crucial role which teaching approaches play in the learning process. Interestingly, the data collected regarding learning disabilities reveal that, given the appropriate learning environments, individuals can be motivated to acquire second languages, and indeed succeed at learning them. These findings are in keeping with other recent studies regarding attitudes towards second language learning among students with learning disabilities. Both sets of findings reinforce the case for language policies in the Institutes of Technology (IoTs). Supportive and positive learning environments can be created in third level institutions to motivate adult learners, thereby overcoming perceived obstacles relating to age and aptitude.

Keywords: age, aptitude, second language acquisition, teaching methodologies

Procedia PDF Downloads 115
24888 A Qualitative South African Study on Exploration of the Moral Identity of Nurses

Authors: Yolanda Havenga

Abstract:

Being a competent nurse requires clinical, general, and moral competencies. Moral competence is a culmination of moral perceptions, moral judgment, moral behaviour, and moral identity. Moral identity comprises the values, images, and fundamental principles held in the collective minds and memories of nurses about what it means to be a ‘good nurse’. It is important to explore and describe South African nurses’ moral identities and excavate the post-colonial counter-narrative to nurses’ moral identities, as a better understanding of these identities will enable means to positively address nurses’ moral behaviours. This study explored the moral identity of nurses within the South African context. A qualitative approach was followed, triangulating phenomenological and narrative designs with the same purposively sampled group of professional nurses. In-depth interviews were conducted until saturation of data occurred about the sampled nurses’ lived experiences of being a nurse in South Africa. They were probed about their core personal, social, and professional values. Data were analysed based on the steps used by Colaizzi. These nurses were then asked to write a narrative telling a personal story that portrayed a significant time in their professional career that defines their identity as a nurse. These data were analysed using a critical narrative approach, and the findings of the two sets of data were merged. Ethical approval was obtained, as well as approval from all relevant gatekeepers. In the findings, themes emerged related to the personal, social and professional values, images and fundamental principles of being a nurse within the South African context. The findings of this study will inform a future national study including a representative sample of South African nurses.

Keywords: moral behaviour, moral identity, nurses, qualitative research

Procedia PDF Downloads 268
24887 Determining the Number of Single Models in a Combined Forecast

Authors: Serkan Aras, Emrah Gulay

Abstract:

Combining various forecasting models is an important tool for researchers to attain more accurate forecasts. A great number of papers have shown that selecting single models that are as dissimilar as possible, or methods based on information that is as different as possible, leads to better forecasting performance. However, there is no established rule regarding the number of single models to be used in any combining method. This study focuses on determining the optimal or near-optimal number of single models with the help of statistical tests. An extensive experiment is carried out by utilizing some well-known time series data sets from diverse fields. Furthermore, many rival forecasting methods and some of the commonly used combining methods are employed. The obtained results indicate that statistically significant performance differences can be found regarding the number of single models in the combining methods under investigation.
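As a toy illustration of varying the number of single models in an equal-weight combination, the sketch below combines one, two, and three simple forecasters on a synthetic series and reports the out-of-sample RMSE for each combination size. The series and the three single models are assumptions for illustration; the paper's data sets and rival methods are not reproduced.

```python
# Minimal sketch: how forecast accuracy can change with the number of single
# models entering a simple average combination.

import numpy as np

rng = np.random.default_rng(0)
y = 10 + 0.3 * np.arange(120) + rng.normal(0, 2, 120)   # synthetic monthly series
train, test = y[:100], y[100:]

def naive(train, h):                      # last observed value
    return np.full(h, train[-1])

def moving_average(train, h, w=12):       # mean of the last w observations
    return np.full(h, train[-w:].mean())

def linear_trend(train, h):               # OLS trend extrapolation
    t = np.arange(len(train))
    slope, intercept = np.polyfit(t, train, 1)
    return intercept + slope * (len(train) + np.arange(h))

h = len(test)
singles = [naive(train, h), moving_average(train, h), linear_trend(train, h)]

def rmse(f):
    return np.sqrt(np.mean((test - f) ** 2))

# Combine the first k single forecasts by equal-weight averaging.
for k in range(1, len(singles) + 1):
    combined = np.mean(singles[:k], axis=0)
    print(f"{k} model(s): RMSE = {rmse(combined):.3f}")
```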

Keywords: combined forecast, forecasting, M-competition, time series

Procedia PDF Downloads 341
24886 Effects of Aerobic, Resistance, and Concurrent Training on Secretion of Growth Hormone and Insulin-Like Growth Factor-1 in Elderly Women

Authors: Kh Jalali Dehkordi, A. Jalali Dehkordi, A. Tofighi

Abstract:

Background: The purpose of this study was to investigate the effects of 8 weeks of aerobic, resistance, and concurrent training on the secretion of growth hormone (GH) and insulin-like growth factor-1 (IGF-1) in elderly women. Methods: A total of 60 elderly women were randomly allocated to four groups of aerobic training (n = 15), resistance training (n = 15), concurrent training (n = 15), and control (n = 15). Blood samples were taken before and 4 weeks after the initiation of exercise training and also at the end of the 8-week course of training. Maximal oxygen consumption (VO2Peak) was measured after 48 hours using the Rockport walk test. Inferential analysis of the collected data was performed by repeated measures analysis of variance (ANOVA). Significant differences were further evaluated by the least significant difference (LSD) test. The relation between VO2Peak and the secretion of GH and IGF-1 was assessed by Pearson's correlation coefficient. The significance level was set at P ≤ 0.05 in all tests. Findings: The results showed that 8 weeks of regular exercise significantly increased levels of GH and IGF-1. A significant increase was also observed in VO2Peak values after 8 weeks of regular exercise (P < 0.05). VO2Peak was directly correlated with GH and IGF-1 (P < 0.001, r = 0.72). Conclusion: In conclusion, regular exercise significantly increased levels of anabolic hormones. Moreover, the combined-exercise training better enhanced GH and IGF-1. VO2Peak increased with increases in GH and IGF-1 levels.

Keywords: women, training, GH, IGF-1

Procedia PDF Downloads 304
24885 Reviewing Privacy Preserving Distributed Data Mining

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Nowadays, given the human role in ever-increasing data generation, methods such as data mining for extracting knowledge are unavoidable. One of the issues in data mining is the inherently distributed nature of the data: the bases that create or receive such data usually belong to corporate or non-corporate persons who do not give their information freely to others. Yet there is no guarantee that someone can mine specific data without intruding on the owner’s privacy. Sending data and then gathering them, whether the data are partitioned vertically or horizontally, depends on the type of privacy preservation applied and is carried out to improve data privacy. In this study, an attempt was made to compare privacy-preserving data mining methods comprehensively; general methods such as data randomization and coding, and the strong and weak points of each, are also examined.
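One of the general methods mentioned above, data randomization, can be sketched in a few lines: each data owner releases values perturbed with additive noise, so individual records are masked while aggregate statistics remain estimable. The salary figures and noise scale are assumptions for illustration only.

```python
# Minimal sketch of the additive-perturbation ("random data") approach:
# individual values are masked, yet zero-mean noise leaves aggregates usable.

import numpy as np

rng = np.random.default_rng(42)
true_salaries = rng.normal(50_000, 8_000, size=1_000)   # private attribute

noise = rng.normal(0, 10_000, size=true_salaries.size)  # publicly known noise model
released = true_salaries + noise                        # what the data miner sees

print("true mean:     ", round(true_salaries.mean(), 1))
print("estimated mean:", round(released.mean(), 1))     # close, despite masking
print("per-record error example:", round(abs(released[0] - true_salaries[0]), 1))
```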

Keywords: data mining, distributed data mining, privacy protection, privacy preserving

Procedia PDF Downloads 506
24884 Presenting a Model Based on Artificial Neural Networks to Predict the Execution Time of Design Projects

Authors: Hamed Zolfaghari, Mojtaba Kord

Abstract:

After the feasibility study, the design phase starts, and the remaining phases are highly dependent on it. Forecasting the duration of the design phase would therefore save a great deal of time. This study provides a fast and accurate machine learning (ML) and optimization framework, which allows a quick duration estimate of the project design phase, hence improving the operational efficiency and competitiveness of a design and construction company. Three data sets, covering three years of daily time spent on different design projects, are used to train and validate the ML models across multiple projects. The study concluded that an Artificial Neural Network (ANN) achieved an accuracy of 0.94.
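A minimal sketch of such an ANN regressor is shown below. The features (project size, number of disciplines, number of revisions) and the synthetic training data are assumptions for illustration; the paper's actual data sets and network configuration are not reproduced.

```python
# Minimal sketch of an ANN regressor for design-phase duration.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(100, 5000, n),     # hypothetical project size (m^2)
    rng.integers(1, 6, n),         # hypothetical number of disciplines
    rng.integers(0, 10, n),        # hypothetical number of revisions
])
# Synthetic "true" duration in days plus noise.
y = 0.02 * X[:, 0] + 8 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out projects:", round(r2_score(y_te, model.predict(X_te)), 3))
```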

Keywords: time estimation, machine learning, Artificial neural network, project design phase

Procedia PDF Downloads 72
24883 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, active geological exploration and development of the subsoil of the Kaliningrad region shelf are under way. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from open deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, its viscosity, density, and fractional composition as accurately as possible. For the work considered here, gas chromatography is one of the most productive methods, allowing a significant amount of initial data to be generated rapidly. The article examines aspects of applying the gas chromatography method to determine the chemical characteristics of hydrocarbons from the Kaliningrad shelf fields, as well as a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets were applied, which makes it possible to evaluate the identity of the deposits, to refine the amount of reserves and to make a number of assumptions about the genesis of the hydrocarbons under analysis.
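The correlation-regression step can be illustrated with a short sketch comparing one chemical parameter measured for shelf samples against onshore reference values. All numbers are made-up placeholders, not Kaliningrad field data.

```python
# Minimal sketch of the correlation-regression comparison of offshore vs
# onshore hydrocarbon parameters.

import numpy as np
from scipy import stats

# Hypothetical paired measurements, e.g. density (g/cm^3) of matched fractions.
onshore = np.array([0.815, 0.822, 0.831, 0.840, 0.848, 0.856])
offshore = np.array([0.818, 0.824, 0.834, 0.841, 0.851, 0.857])

r, p = stats.pearsonr(onshore, offshore)
fit = stats.linregress(onshore, offshore)

print(f"Pearson r = {r:.3f} (p = {p:.4f})")
print(f"offshore ~ {fit.slope:.3f} * onshore + {fit.intercept:.3f}")
# A slope near 1 and a high r would support the identity/kinship of the deposits.
```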

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 144
24882 The Right to Data Portability and Its Influence on the Development of Digital Services

Authors: Roman Bieda

Abstract:

The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used and machine-readable format, and to transmit these data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also facilitate changing the provider of services (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.

Keywords: data portability, digital market, GDPR, personal data

Procedia PDF Downloads 458
24881 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aimed to resolve this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than do existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
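The pipeline described above, window-level RR features, PCA, then an LVQ classifier, can be sketched as follows. The synthetic RR windows, the four feature choices and the bare-bones LVQ1 update are assumptions for illustration; they do not reproduce the authors' tuned model or the MIT-BIH recordings.

```python
# Minimal sketch: RR-interval features -> PCA -> LVQ1 classifier.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def rr_features(rr):
    """Four simple features of a 1-min RR window (s): mean, SD, RMSSD, pNN50-like."""
    diff = np.diff(rr)
    return [rr.mean(), rr.std(), np.sqrt(np.mean(diff**2)), np.mean(np.abs(diff) > 0.05)]

# Synthetic windows: normal sinus rhythm (regular) vs AF (irregular).
nsr = [rr_features(rng.normal(0.8, 0.03, 60)) for _ in range(100)]
af  = [rr_features(rng.normal(0.7, 0.15, 60)) for _ in range(100)]
X = np.array(nsr + af)
y = np.array([0] * 100 + [1] * 100)

Xp = PCA(n_components=2).fit_transform(X)   # feature-reduction step

# LVQ1: one prototype per class, nudged toward same-class samples, away otherwise.
protos = np.array([Xp[y == c].mean(axis=0) for c in (0, 1)])
labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(Xp, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))
        step = lr * (xi - protos[k])
        protos[k] += step if labels[k] == yi else -step

pred = labels[np.argmin(np.linalg.norm(Xp[:, None] - protos[None], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```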

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 253
24880 Thermoelectric Cooler as a Heat Transfer Device for Thermal Conductivity Test

Authors: Abdul Murad Zainal Abidin, Azahar Mohd, Nor Idayu Arifin, Siti Nor Azila Khalid, Mohd Julzaha Zahari Mohamad Yusof

Abstract:

A thermoelectric cooler (TEC) is an electronic component that uses the Peltier effect to create a temperature difference by transferring heat between two electrical junctions of two different types of materials. A TEC can also be used for heating, by reversing the electric current flow, and even for power generation. A heat flow meter (HFM) is an instrument for measuring the thermal conductivity of building materials. During the test, water is used as the heat transfer medium to cool the HFM. The existing re-circulating coolers on the market are very costly, and the alternative is to use piped tap water to extract heat from the HFM. However, the tap water temperature is not low enough to enable heat transfer to take place. The operating temperature for the isothermal plates in the HFM is 40°C within a range of ±0.02°C. When the temperature exceeds the operating range, the HFM stops working, and the test cannot be conducted. The aim of the research is to develop a low-cost but energy-efficient TEC prototype that enables heat transfer without compromising the function of the HFM. The objectives of the research are a) to identify the potential of the TEC as a cooling device by evaluating its cooling rate and b) to determine the amount of water saved using the TEC compared to normal tap water. Four (4) Peltier sets were used, with two (2) sets used as pre-coolers. The cooling water is re-circulated from the reservoir into the HFM using a water pump. The thermal conductivity readings, the water flow rate, and the power consumption were measured while the HFM was operating. The measured data show an average cooling temperature decrease (ΔTave) of 2.42°C and an average cooling rate of 0.031°C/min. The water savings accrued from using the TEC are projected to be 8,332.8 litres/year with the application of water re-circulation. The results suggest the prototype has achieved the required objectives. Further research will include comparing the cooling rate of the TEC prototype against conventional tap water and optimizing its design and performance in terms of size and portability. Possible applications of the prototype could also be expanded to portable storage for medicine and beverages.
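The two headline figures above are simple arithmetic, sketched below. The elapsed time, the assumed tap flow rate and the annual test hours are placeholders chosen only to show the form of the calculation; the paper reports 0.031°C/min and 8,332.8 litres/year.

```python
# Minimal arithmetic sketch of the cooling rate and the annual water saving.
# The inputs below are assumptions for illustration, not the measured values.

# Cooling rate from two temperature readings of the reservoir water.
t_start_c, t_end_c = 27.10, 24.68      # deg C, hypothetical readings
elapsed_min = 78.0                     # minutes between readings (assumed)
cooling_rate = (t_start_c - t_end_c) / elapsed_min
print(f"average cooling rate: {cooling_rate:.3f} degC/min")

# Water saved = tap flow that would otherwise run to drain x yearly HFM run time.
tap_flow_l_per_min = 2.0               # assumed once-through tap flow
hfm_hours_per_year = 70.0              # assumed annual thermal-conductivity testing
saved_litres = tap_flow_l_per_min * hfm_hours_per_year * 60
print(f"estimated water saving: {saved_litres:,.1f} litres/year")
```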

Keywords: energy efficiency, thermoelectric cooling, pre-cooling device, heat flow meter, sustainable technology, thermal conductivity

Procedia PDF Downloads 144
24879 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme values in an observation can occur due to unusual circumstances during the observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The parameter estimates of the Gumbel distribution obtained with the maximum likelihood method (ML) are difficult to determine exactly, so an approximate solution is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method used for unconstrained nonlinear function optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating both derivatives in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function. In this method, we need the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimation of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The analysis indicates that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall that occurred decreased.
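In practice, the BFGS step can be delegated to an off-the-shelf optimizer applied to the negative Gumbel log-likelihood, as in the sketch below. The simulated block-maxima sample stands in for the Purworejo rainfall data, which is not reproduced here.

```python
# Minimal sketch: maximum-likelihood fitting of the two-parameter Gumbel
# distribution with the BFGS quasi-Newton optimizer.

import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)
annual_max_rain = stats.gumbel_r.rvs(loc=80, scale=15, size=40, random_state=rng)

def neg_log_likelihood(params, x):
    mu, log_scale = params                 # optimize log(scale) to keep it positive
    beta = np.exp(log_scale)
    z = (x - mu) / beta
    # Gumbel log-density: -log(beta) - z - exp(-z), summed over the sample.
    return -np.sum(-np.log(beta) - z - np.exp(-z))

x0 = np.array([annual_max_rain.mean(), np.log(annual_max_rain.std())])
res = optimize.minimize(neg_log_likelihood, x0, args=(annual_max_rain,), method="BFGS")

mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"BFGS estimates: mu = {mu_hat:.2f}, beta = {beta_hat:.2f}")
print("check vs scipy's built-in fit:", stats.gumbel_r.fit(annual_max_rain))
```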

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 312
24878 Energy Efficient Clustering with Adaptive Particle Swarm Optimization

Authors: Kumar Shashvat, Arshpreet Kaur, Rajesh Kumar, Raman Chadha

Abstract:

Wireless sensor networks have the principal characteristic of restricted energy, with the limitation that the energy of the nodes cannot be replenished. To increase the lifetime in this scenario, the WSN route for data transmission is chosen such that the energy used along the selected route is negligible. For such an energy-efficient network, a sound infrastructure is needed because it affects the network lifespan. Clustering is a technique in which nodes are grouped into disjoint and non-overlapping sets. In this technique, data are collected at the cluster head. In this paper, an Adaptive-PSO algorithm is proposed which forms energy-aware clusters by minimizing the cost of locating the cluster head. The main concern is the suitability of the swarms, addressed by adjusting the learning parameters of PSO. Particle Swarm Optimization converges quickly at the beginning of the search, but over the course of time it becomes stable and may be trapped in local optima. In the suggested network model, swarms are given the intelligence of spiders, which makes them capable of avoiding premature convergence and also helps them escape from local optima. Comparative analysis with traditional PSO shows that the new algorithm considerably enhances performance when multi-dimensional functions are taken into consideration.
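A bare-bones PSO for the cluster-head placement cost can be sketched as follows. The node layout, the cost function and the decaying inertia weight (standing in for the paper's spider-inspired adaptive tuning) are assumptions for illustration.

```python
# Minimal PSO sketch: place a cluster head so that an energy proxy
# (total node-to-head distance plus head-to-sink distance) is minimal.

import numpy as np

rng = np.random.default_rng(3)
nodes = rng.uniform(0, 100, size=(30, 2))      # sensor node coordinates (m)
sink = np.array([50.0, 120.0])                 # base-station location

def cost(head):
    """Energy proxy: distances from all nodes to the head, plus head to sink."""
    return np.linalg.norm(nodes - head, axis=1).sum() + np.linalg.norm(head - sink)

n_particles, n_iter = 20, 100
pos = rng.uniform(0, 100, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter                # inertia decays as the search stabilises
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cluster-head location:", np.round(gbest, 2), "cost:", round(cost(gbest), 1))
```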

Keywords: particle swarm optimization, adaptive PSO (A-PSO), comparison between PSO and A-PSO, energy efficient clustering

Procedia PDF Downloads 230
24877 A New Spell-Out Mechanism

Authors: Yusra Yahya

Abstract:

In this paper, a new spell-out mechanism is developed and defended. This mechanism builds on the role of phase heads as both the loci of spell-out features and the triggers of transfer, via either Phase Impenetrability Condition 1 (PIC1) and/or Phase Impenetrability Condition 2 (PIC2). The assumption here is that phase heads, mainly v*, can regulate the spell-out process by deciding both the type of spell-out that applies and the timing of the spell-out that is relevant. This paper also proposes a new form of the constraint Wrap, call it Wrap-XP’, which is assumed to apply to IP as a functional maximal projection. This extension is shown to follow as a natural result once we assume the new theory of phases and multiple spell-out. Moreover, it is proposed in this work that some forms of XP movement are not motivated by an EPP feature of a strong phase head, mainly v*, but are rather motivated by a last-resort strategy to accomplish the spell-out instruction of this phase head.

Keywords: linguistics, syntax, phonology, phase theory, optimality theory

Procedia PDF Downloads 501
24876 Dosimetric Comparison among Different Head and Neck Radiotherapy Techniques Using PRESAGE™ Dosimeter

Authors: Jalil ur Rehman, Ramesh C. Tailor, Muhammad Isa Khan, Jahnzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

Purpose: The purpose of this analysis was to investigate the dose distributions of different techniques (3D-CRT, IMRT and VMAT) for head and neck cancer using a 3-dimensional dosimeter called the PRESAGE™ dosimeter. Materials and Methods: Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck anthropomorphic phantom, with both the RPC standard insert and the PRESAGE™ insert, were acquired separately with a Philips CT scanner, and both CT scans were exported via DICOM to the Pinnacle version 9.4 treatment planning system (TPS). Each plan was delivered twice to the RPC phantom, first containing the RPC standard insert with TLD and film dosimeters and then containing the PRESAGE™ insert with the 3-D dosimeter (PRESAGE™), using a Varian TrueBeam linear accelerator. After irradiation, the standard insert, including point dose measurements (TLD) and planar Gafchromic® EBT film measurements, was read using the RPC standard procedure. The 3D dose distribution from PRESAGE™ was read out with the Duke Midsized Optical Scanner dedicated to the RPC (DMOS-RPC). Dose volume histograms (DVH) and mean and maximal doses for organs at risk were calculated and compared among the head and neck techniques. The prescription dose was the same for all head and neck radiotherapy techniques, 6.60 Gy/fraction. Beam profile comparison and gamma analysis were used to quantify agreement among the film measurement, the PRESAGE™ measurement and the calculated dose distribution. Quality assurance of all plans was performed using the ArcCHECK method. Results: VMAT delivered the lowest mean and maximum doses to organs at risk (spinal cord, parotid) compared with IMRT and 3D-CRT. This dose distribution was verified by absolute dose measurements using the thermoluminescent dosimeter (TLD) system. The central axial, sagittal and coronal planes were evaluated using 2D gamma map criteria (±5%/3 mm); the results were 99.82% (axial), 99.78% (sagittal) and 98.38% (coronal) for the VMAT plan, and the agreement between PRESAGE™ and Pinnacle was found to be better than for the IMRT and 3D-CRT plans, excluding a 7 mm rim at the edge of the dosimeter. Profiles showed good agreement between film, PRESAGE™ and Pinnacle for all plans, and 3D gamma analysis was performed for the PTV and OARs, with VMAT and 3D-CRT showing better agreement than IMRT. Conclusion: VMAT delivered lower mean and maximal doses to organs at risk and better PTV coverage during head and neck radiotherapy. The TLD, EBT film and PRESAGE™ dosimeter results suggest that VMAT is better for the treatment of head and neck cancer than IMRT and 3D-CRT.
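The gamma analysis quoted above reduces, point by point, to the standard gamma index; a 1D sketch with the 5%/3 mm criteria is given below. The two dose profiles are synthetic stand-ins, not the measured head-and-neck data.

```python
# Minimal 1D sketch of gamma analysis (5% dose / 3 mm distance-to-agreement).

import numpy as np

x = np.arange(0, 100, 1.0)                       # position (mm)
planned  = 6.6 * np.exp(-((x - 50) / 20) ** 2)   # TPS dose profile (Gy)
measured = planned * (1 + 0.02 * np.sin(x / 5))  # measurement with small deviations

dose_tol = 0.05 * planned.max()                  # 5% of maximum dose
dta_tol = 3.0                                    # 3 mm distance-to-agreement

def gamma_1d(x_ref, d_ref, x_eval, d_eval):
    """Gamma index at each reference point, searching over the evaluated profile."""
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_tol
        dd = (d_eval - dr) / dose_tol
        gammas[i] = np.sqrt(dist ** 2 + dd ** 2).min()
    return gammas

g = gamma_1d(x, measured, x, planned)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.2f}%")
```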

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD, PRESAGE™

Procedia PDF Downloads 374
24875 Damage Identification Using Experimental Modal Analysis

Authors: Niladri Sekhar Barma, Satish Dhandole

Abstract:

Damage identification, in the context of safety, has nowadays become a fundamental research interest in the fields of mechanical, civil, and aerospace engineering structures. The following research aims to identify damage in a mechanical beam structure, quantify the severity or extent of the damage in terms of loss of stiffness, and obtain an updated analytical Finite Element (FE) model. An FE model is used for analysis, and the location of damage for single and multiple damage cases is identified numerically using the modal strain energy method and the mode shape curvature method. Experimental data have been acquired with the help of an accelerometer. The Fast Fourier Transform (FFT) algorithm is applied to the measured signal, and subsequently, post-processing is done in MEscopeVES software. The two sets of data, the numerical FE model and the experimental results, are compared to locate the damage accurately. The extent of the damage is identified via modal frequencies using a mixed numerical-experimental technique. Mode shape comparison is performed using the Modal Assurance Criterion (MAC). The analytical FE model is adjusted by the direct method of model updating. The same study has been extended to some real-life structures such as plate and GARTEUR structures.
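The MAC comparison mentioned above is a one-line formula, MAC(i, j) = |φᵢᵀφⱼ|² / ((φᵢᵀφᵢ)(φⱼᵀφⱼ)), sketched below on two illustrative mode-shape sets (not the beam's measured modes).

```python
# Minimal sketch of the Modal Assurance Criterion (MAC) between two mode-shape sets.

import numpy as np

rng = np.random.default_rng(0)
# Columns are mode shapes: analytical (FE) and "experimental" (FE + small noise).
phi_fe = np.array([[0.31, 0.59], [0.59, 0.95], [0.81, 0.59], [0.95, -0.31]])
phi_exp = phi_fe + rng.normal(0, 0.02, phi_fe.shape)

def mac(a, b):
    num = np.abs(a.T @ b) ** 2
    den = np.outer(np.einsum("ij,ij->j", a, a), np.einsum("ij,ij->j", b, b))
    return num / den

m = mac(phi_fe, phi_exp)
print(np.round(m, 3))   # diagonal near 1 => matched modes; off-diagonal near 0
```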

Keywords: damage identification, damage quantification, damage detection using modal analysis, structural damage identification

Procedia PDF Downloads 97
24874 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are used comparatively because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most sets of simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence interval widths, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
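For orientation, the Newton-Raphson idea is sketched below on a complete (uncensored) simulated Rayleigh sample: the scale is profiled in closed form and the location is updated with Newton steps on the score. The censoring terms of the paper's progressive type-II likelihood are deliberately omitted to keep the sketch short.

```python
# Minimal sketch of Newton-Raphson MLE for the two-parameter Rayleigh
# distribution, f(x) = ((x - mu)/lambda^2) * exp(-(x - mu)^2 / (2 lambda^2)),
# on an uncensored simulated sample (censoring terms omitted).

import numpy as np

rng = np.random.default_rng(11)
mu_true, lam_true = 5.0, 2.0
x = mu_true + lam_true * np.sqrt(-2 * np.log(rng.random(200)))   # Rayleigh sample

n = x.size
mu = x.min() - 0.5          # start strictly below the sample minimum
for _ in range(50):
    lam2 = np.sum((x - mu) ** 2) / (2 * n)                   # closed-form scale given mu
    g = -np.sum(1.0 / (x - mu)) + np.sum(x - mu) / lam2      # d logL / d mu
    h = -np.sum(1.0 / (x - mu) ** 2) - n / lam2              # d^2 logL / d mu^2
    mu = min(mu - g / h, x.min() - 1e-6)                     # Newton step, keep mu < min(x)

print(f"estimates: mu = {mu:.3f}, lambda = {np.sqrt(lam2):.3f}")
print(f"true:      mu = {mu_true},  lambda = {lam_true}")
```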

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 149
24873 Response of Selected Echocardiographic Features to Aerobic Training in Obese Hypertensive Males

Authors: Abeer Ahmed Abdelhameed

Abstract:

The aim of this study was to investigate the effect of aerobic exercise on LV parameters, lipid profile, and anthropometric measurements in hypertensive middle-aged male subjects. Thirty obese patients were recruited for the study from the outpatient clinic of the National Heart Institute, Egypt. Their ages ranged from 40 to 60 years. All participants underwent an aerobic training program including regular aerobic sub-maximal exercise in the form of treadmill walking and abdominal exercises 3 times/week for four months; the exercises were individually tailored for each participant depending on the results of a cardiopulmonary exercise test. The results showed no significant difference in either LVPWT or LVSWT from pre-test to post-test values in all subjects after 4 months, with a significant reduction in WHR, systolic blood pressure, TAG and LDL records. The results also revealed a significant increase in HDL, EF, LVEDD and FS records for all participants. The significant improvement in ventricular function, in the form of ejection fraction, in the electrical group more than the exercise group after 4 months at the end of the study may be due to the beneficial effect of faradic stimulation in the lipolysis of stored adipose tissue, stimulation of lean body mass and muscles, and/or a thermal effect that improves vascularization.

Keywords: left ventricular parameters, aerobic training, electrical stimulation, lipid profile

Procedia PDF Downloads 238
24872 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models

Authors: Reza Bazargan lari, Mohammad H. Fattahi

Abstract:

Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for pre-processing practical data sets. The predictability enhancement of the river flow time series is assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
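The wavelet de-noising pre-processing step can be sketched with PyWavelets on a synthetic daily flow series, as below. The wavelet family, decomposition level and universal soft threshold are common defaults assumed for illustration, not the settings used in the paper.

```python
# Minimal sketch of wavelet soft-threshold de-noising of a river-flow series.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(730)
clean = 50 + 20 * np.sin(2 * np.pi * t / 365)        # seasonal flow pattern
flow = clean + rng.normal(0, 5, t.size)              # noisy observations

coeffs = pywt.wavedec(flow, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(flow.size))         # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: flow.size]

print("residual std before de-noising:", round(np.std(flow - clean), 2))
print("residual std after de-noising: ", round(np.std(denoised - clean), 2))
```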

Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN

Procedia PDF Downloads 355
24871 Chronic and Sub-Acute Lumbosacral Radiculopathies Behave Differently to Repeated Back Extension Exercises

Authors: Sami Alabdulwahab

Abstract:

Background: Repeated back extension exercises (RBEEs) are among the management options for symptoms associated with lumbosacral radiculopathy (LSR). RBEEs have been reported to cause changes in the distribution and intensity of radicular symptoms through possible compression/decompression of the compromised nerve root. Purpose: The purpose of this study was to investigate the effects of RBEEs on the neurophysiology of the compromised nerve root and on standing mobility and pain intensity in patients with sub-acute and chronic LSR. Methods: A total of 40 patients with unilateral sub-acute/chronic lumbosacral radiculopathy voluntarily participated in the study; the patients performed 3 sets of 10 RBEEs in the prone position with 1 min of rest between the sets. The soleus H-reflex, standing mobility and pain intensity were recorded before and after the RBEEs. Results: The results of the study showed that the RBEEs significantly improved the H-reflex, standing mobility and pain intensity in patients with sub-acute LSR (p<0.01); there was no significant improvement in the patients with chronic LSR (p<0.61). Conclusion: RBEEs in the prone position are recommended for improving the neurophysiological function of the compromised nerve root and standing mobility in patients with sub-acute LSR. Implication: Sub-acute and chronic LSR responded differently to RBEEs. Sub-acute LSR appears to involve flexible and movable disc structures, which could be managed with RBEEs.

Keywords: h-reflex, back extension, lumbosacral radiculopathy, pain

Procedia PDF Downloads 462
24870 Experiences of Online Opportunities and Risks: Examining Internet Use and Digital Literacy of Young People in Nigeria

Authors: Isah Yahaya Aliyu

Abstract:

Research on Internet use has often approached beneficial uses of the Internet (online opportunities) as separate from the risky encounters (online risks) of young people online. However, empirical evidence from diverse contexts appears to increasingly support the fusion of the two sets of online activities. Hence, the current research investigates the correlation of Internet use (IU) and digital literacy (DL) with online opportunities (OP) and risks (OR), using data from a Nigerian context, where there appears to be a paucity of research and literature integrating opportunities and risks in the same study. A web-based data collection method was used to administer a survey to 335 undergraduate students in Northeastern Nigeria. Underpinned by the Livingstone and Helsper model, the findings are largely consistent with the existing literature: IU and DL influence OP (R2 = 0.791, SE = 0.265, F-stat = 626.566, p-value < .001), and IU and DL equally influence OR (R2 = 0.343, SE = 0.465, F-stat = 86.671, p-value < .001). OP and OR were found to correlate strongly and positively (r = .667, n = 335, p < 0.01). This study has provided buttressing evidence from a Nigerian context of the fusion of benefits and risks of the Internet among young people. It has also upheld the argument for improved literacy as a strategy for minimizing risks/harm rather than restricting use. Other theoretical and policy implications of the findings have been discussed in line with local and global debates about the Internet and its attendant effects.

Keywords: digital, internet, literacy, opportunities, risks

Procedia PDF Downloads 71
24869 Group Consensus of Hesitant Fuzzy Linguistic Variables for Decision-Making Problem

Authors: Chen T. Chen, Hui L. Cheng

Abstract:

Due to differences in knowledge, experience and expertise, experts usually provide different opinions in the group decision-making process. Therefore, reaching a group consensus among the opinions of experts is an important issue in the group multiple-criteria decision-making (GMCDM) process. Because the subjective opinions of experts are always fuzzy and uncertain, it is difficult to use crisp values to describe their real opinions. It is reasonable for experts to use linguistic variables to express their opinions. Hesitant fuzzy sets extend the concept of fuzzy sets, and by using them experts can flexibly describe their subjective opinions. In order to aggregate the hesitant fuzzy linguistic variables of all experts effectively, an adjustment method based on a distance function is presented in this paper. Based on this opinion-adjustment method, the paper presents an effective approach for adjusting the hesitant fuzzy linguistic variables of all experts to reach a group consensus. A new hesitant linguistic GMCDM method is then presented based on the group consensus of hesitant fuzzy linguistic variables. Finally, an example is implemented to illustrate the computational process and enhance the practical value of the proposed model.
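A distance function of the general kind this adjustment relies on can be sketched for hesitant fuzzy linguistic term sets given as index lists on a 7-term scale, as below. The exact distance measure and length-equalization rule used by the authors are not reproduced; repeating the largest term is just one common convention assumed here.

```python
# Minimal sketch of a normalised distance between hesitant fuzzy linguistic
# term sets (HFLTS) on a 7-term scale s0..s6.

GRANULARITY = 7   # linguistic scale s0 (very poor) ... s6 (very good)

def distance(h1, h2, g=GRANULARITY):
    """Normalised distance between two HFLTS given as lists of term indices."""
    a, b = sorted(h1), sorted(h2)
    m = max(len(a), len(b))
    a += [a[-1]] * (m - len(a))          # extend the shorter set (one common rule)
    b += [b[-1]] * (m - len(b))
    return sum(abs(i - j) for i, j in zip(a, b)) / (m * (g - 1))

expert_1 = [3, 4]       # "between medium and good"
expert_2 = [5, 6]       # "at least very good"
expert_3 = [3, 4, 5]

# Pairwise distances feed the consensus measure: the further an expert is from
# the group, the more that expert's evaluation is adjusted.
print(distance(expert_1, expert_2))
print(distance(expert_1, expert_3))
```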

Keywords: group multi-criteria decision-making, linguistic variables, hesitant fuzzy linguistic variables, distance function, group consensus

Procedia PDF Downloads 138
24868 Recent Advances in Data Warehouse

Authors: Fahad Hanash Alzahrani

Abstract:

This paper describes some recent advances in the quickly developing area of data storage and processing based on Data Warehouses and Data Mining techniques, covering the software, hardware, data mining algorithms and visualisation techniques that share common features across specific implementation problems and tasks.

Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing

Procedia PDF Downloads 384
24867 Constructing a Physics Guided Machine Learning Neural Network to Predict Tonal Noise Emitted by a Propeller

Authors: Arthur D. Wiedemann, Christopher Fuller, Kyle A. Pascioni

Abstract:

With the introduction of electric motors, small unmanned aerial vehicle designers have to consider trade-offs between acoustic noise and thrust generated. Currently, there are few low-computational tools available for predicting acoustic noise emitted by a propeller into the far-field. Artificial neural networks offer a highly non-linear and adaptive model for predicting isolated and interactive tonal noise. But neural networks require large data sets, exceeding practical considerations in modeling experimental results. A methodology known as physics guided machine learning has been applied in this study to reduce the required data set to train the network. After building and evaluating several neural networks, the best model is investigated to determine how the network successfully predicts the acoustic waveform. Lastly, a post-network transfer function is developed to remove discontinuity from the predicted waveform. Overall, methodologies from physics guided machine learning show a notable improvement in prediction performance, but additional loss functions are necessary for constructing predictive networks on small datasets.
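The physics-guided idea, a data-misfit loss augmented with a penalty encoding prior physical knowledge, can be sketched in a few lines. The tiny linear-in-log model, the synthetic tip-speed/SPL data and the monotonicity penalty below are assumptions for illustration; they are not the authors' network, loss terms or propeller measurements.

```python
# Minimal sketch of a physics-guided loss: data misfit plus a penalty that
# discourages a non-physical negative slope of tonal SPL with tip speed.

import numpy as np

rng = np.random.default_rng(0)
tip_speed = rng.uniform(50, 150, 25)                            # m/s, small data set
spl = 40 + 30 * np.log10(tip_speed) + rng.normal(0, 1.5, 25)    # synthetic tonal SPL (dB)

z = np.log10(tip_speed)
zc = z - z.mean()                 # centred regressor for stable gradient descent
w = np.zeros(2)                   # model: SPL ~ w0 + w1 * zc
lam, lr = 100.0, 0.5

for step in range(5000):
    resid = (w[0] + w[1] * zc) - spl
    g0 = 2 * resid.mean()                       # data-misfit gradient wrt w0
    g1 = 2 * (resid * zc).mean()                # data-misfit gradient wrt w1
    g1 += -2 * lam * max(0.0, -w[1])            # gradient of lam * max(0, -w1)^2
    w -= lr * np.array([g0, g1])

# The penalty only activates if sparse/noisy data pull the slope negative.
print(f"fitted slope per decade of tip speed: {w[1]:.1f} dB (physically non-negative)")
```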

Keywords: aeroacoustics, machine learning, propeller, rotor, neural network, physics guided machine learning

Procedia PDF Downloads 203
24866 The Effect of 8 Weeks Aerobic Training and Nitro-L-Arginine-Methyl Ester (L-NAME) on Plasma apelin in Male’s Rats

Authors: Abbassi Daloii Asieh, Yazdani Hoda

Abstract:

Background and Objective: Evidence supports the presence of systemic inflammation in obesity and insulin resistance. Apelin, which is secreted by adipose tissue, plays an important role in the inflammation process and appears to act as an anti-inflammatory cytokine. The aim of this study was to examine the effect of eight weeks of aerobic training and nitro-L-arginine methyl ester (L-NAME) on plasma apelin in male rats. Materials and Methods: For this purpose, 24 male Wistar rats aged 20 months were randomly assigned to four groups: control, training, training plus L-NAME, and L-NAME. The training intervention was eight weeks of aerobic exercise (5 times/week) at 75-80% of maximal oxygen consumption. All rats were killed 72 hours after the last exercise session; blood samples were collected and plasma was stored. Data were analyzed by one-way ANOVA and the Tukey test. A p value less than 0.05 was considered statistically significant. Results: The results showed that after eight weeks of endurance training, plasma apelin did not change significantly compared to the control group. The results also showed no significant difference in plasma apelin between the groups (P > 0.05). In addition, the results showed no significant difference between the insulin and glucose levels of the four groups (P > 0.05). Conclusion: It seems that plasma apelin levels in male rats are not affected by aerobic exercise. On the other hand, nitric oxide inhibitors can reduce plasma apelin levels.

Keywords: aerobic training, L-NAME, plasma Apelin, male’s rats

Procedia PDF Downloads 428
24865 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption for a desktop personal computer. Hence, they are usually incapable of imaging the three-dimensional (3D) processes/variables in the subsurface of reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitation of personal computers. Presently, high-performance or integrating software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapsed large-scale data surveys from geophysical methods. Using the supercomputing capability and parallel computation algorithm, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for sub-surface imaging. E4D-MP provides capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering related efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 141
24864 Asymmetric of the Segregation-Enhanced Brazil Nut Effect

Authors: Panupat Chaiworn, Soraya lama

Abstract:

We study the motion of particles in cylinders which are subjected to a sinusoidal vertical vibration. We measure the rising time of a large intruder from the bottom of the container to the free surface of the bed particles and find that the rising time as a function of intruder density increases to a maximum and then decreases monotonically. The result qualitatively accords with previous findings from experiments that varied the relative humidity of the bed particles, in which the convection of the bed particles in the container was found to move slowly and a minimal, rather than maximal, rising time of the intruder was found in the small-density region. Our experimental results suggest that the topology of the container plays an important role in the Brazil nut effect.

Keywords: granular particles, Brazil nut effect, cylinder container, vertical vibration, convection

Procedia PDF Downloads 512
24863 Measuring Energy Efficiency Performance of Mena Countries

Authors: Azam Mohammadbagheri, Bahram Fathi

Abstract:

DEA has become a very popular method of performance measurement, but it still suffers from some shortcomings. One of these shortcomings is the issue of having multiple optimal weight solutions for efficient DMUs. Cross-efficiency evaluation, as an extension of DEA, has been proposed to avoid this problem. Lam (2010) also proposed a mixed-integer linear programming formulation based on linear discriminant analysis and the super-efficiency method (MILP model) to avoid multiple optimal weight solutions. In this study, we modified the MILP model to determine more suitable weight sets and also evaluated the energy efficiency of MENA countries as an application of the proposed model.
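For reference, the classical CCR multiplier model that cross-efficiency and the MILP variant build on is a small linear program per decision-making unit, sketched below with scipy. The input/output figures for the five "countries" are made-up placeholders, not MENA energy statistics.

```python
# Minimal sketch of the CCR multiplier model underlying DEA, one LP per DMU.

import numpy as np
from scipy.optimize import linprog

# inputs: energy use, labour; output: GDP (all in arbitrary units)
X = np.array([[40, 10], [55, 14], [30, 9], [70, 20], [45, 12]], float)  # inputs
Y = np.array([[100], [120], [80], [150], [105]], float)                 # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # variables: u (s output weights) then v (m input weights); maximise u.Y_o
    c = np.concatenate([-Y[o], np.zeros(m)])            # linprog minimises
    A_ub = np.hstack([Y, -X])                           # u.Y_j - v.X_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v.X_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(1e-6, None)] * (s + m), method="highs")
    print(f"DMU {o}: efficiency = {-res.fun:.3f}")
```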

Keywords: data envelopment analysis, discriminant analysis, cross efficiency, MILP model

Procedia PDF Downloads 673
24862 Pulsed Laser Single Event Transients in 0.18 μM Partially-Depleted Silicon-On-Insulator Device

Authors: MeiBo, ZhaoXing, LuoLei, YuQingkui, TangMin, HanZhengsheng

Abstract:

Single Event Transients (SETs) were investigated in 0.18 μm PDSOI transistors and a 100-stage CMOS inverter chain using a pulsed laser. The effects of different laser energies and device biases on the SET waveform were characterized experimentally, as well as the generation and propagation of SETs in the inverter chain. In this paper, the effects of the type of struck transistor and the struck location on SETs were investigated. The results showed that when NMOSFETs from the 100th to the 2nd stage were irradiated, the SET pulse width measured at the output terminal increased from 287.4 ps to 472.9 ps; and when PMOSFETs from the 99th to the 1st stage were irradiated, the SET pulse width likewise increased from 287.4 ps to 472.9 ps. When the struck location was close to the output of the chain, the SET pulse was narrow; however, when the struck node was close to the input, the SET pulse was broadened. SET pulses were progressively broadened when propagating along the inverter chain. The SET pulse broadening is independent of the type of the struck transistor. Analysis shows that the history effect, which induces threshold voltage hysteresis in PDSOI, is the reason for the pulse broadening. The positive pulse observed on the oscilloscope, contrary to the expected results, is due to the charging and discharging of the capacitor.

Keywords: single event transients, pulse laser, partially-depleted silicon-on-insulator, propagation-induced pulse broadening effect

Procedia PDF Downloads 399
24861 FE Analysis of the Notch Effect on the Behavior of Repaired Crack with Bonded Composite Patch in Aircraft Structures

Authors: Faycal Benyahia, Abdelmohsen Albedah, Bel Abbes Bachir Bouiadjra

Abstract:

In this paper, finite element analysis is applied to study the performance of bonded composite reinforcement or repair for reducing the stress concentration at a semi-circular lateral notch and for repairing cracks emanating from this kind of notch. The effects of the adhesive properties on the variation of the stress intensity factor at the crack tip are highlighted. The obtained results show that the stress concentration factor at the notch tip is reduced by about 30% and that the maximal reduction of the stress intensity factor is about 80%. The adhesive properties must be optimized in order to increase the performance of the patch repair or reinforcement.

Keywords: bonded repair, notch, crack, adhesive, composite

Procedia PDF Downloads 378