Search results for: Hybrid methods
395 Anthropometric Correlates of Balance Performance in Non-Institutionalized Elderly
Authors: Okafor UAC, Ibeabuchi MN, Omidina JO, Igwesi-Chidobe CN, Akinbo SRA
Abstract:
Purpose: The fear of falling is a major concern among the elderly. Sixty-five percent of individuals older than 60 years of age experience loss of balance often on a daily basis. Therefore, balance assessment in the elderly deserves special attention due to its importance in functional mobility and safety. This study aimed at assessing balance performance and comparing some anthropometric parameters among a Nigerian non-institutionalized elderly population.
Methods: Sixty-one elderly subjects (31 males and 30 females) participated in this study. Their ages ranged between 62 and 84 years. Ability to maintain balance was assessed using the Functional Reach Test (FRT) and the Sharpened Romberg Test (SRT). Anthropometric data including age, weight, height, arm length, leg length, bi-acromial breadth, foot length and trunk length were also collected. Analysis was done using Pearson’s product moment correlation coefficient and the independent t-test, with the level of significance set at p<0.05.
Results: A significant age-related relationship was observed between balance performance and bi-acromial breadth among the elderly population. Gender and visual input also had a significant influence on balance performance. The other anthropometric variables (age, weight, height, arm length, leg length, foot length and trunk length) showed no significant relationship with balance performance among this elderly sample.
Conclusion: Only specific anthropometric variables may affect balance performance among the healthy elderly. The study further highlights the need for routine assessment of both static and dynamic balance to detect and appropriately manage aging-related diseases which could affect balance in the elderly.
Keywords: Balance Performance, Anthropometry, Non-institutionalized Elderly.
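A minimal sketch of the statistical tests named in the Methods above: Pearson's product-moment correlation between an anthropometric measure and a balance score, and an independent t-test between two groups, with significance at p < 0.05. The variable names and generated values are illustrative placeholders, not the study's data.

```python
# Minimal sketch of the statistical tests named in the abstract above.
# Data values are hypothetical placeholders, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
biacromial_breadth = rng.normal(38.0, 2.5, size=61)                          # cm, illustrative
functional_reach = 20.0 + 0.3 * biacromial_breadth + rng.normal(0, 2, 61)    # cm, illustrative

# Pearson's product-moment correlation coefficient
r, p_corr = stats.pearsonr(biacromial_breadth, functional_reach)

# Independent t-test comparing two groups (e.g., male vs. female FRT scores)
frt_male, frt_female = functional_reach[:31], functional_reach[31:]
t, p_ttest = stats.ttest_ind(frt_male, frt_female)

alpha = 0.05
print(f"Pearson r = {r:.2f} (significant: {p_corr < alpha})")
print(f"t = {t:.2f} (significant: {p_ttest < alpha})")
```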
394 Customer Involvement in the Development of New Sustainable Products: A Review of the Literature
Authors: Natalia Moreira, Trevor Wood-Harper
Abstract:
The acceptance of sustainable products by the final consumer is still one of the challenges of the industry, which constantly seeks alternative approaches to be successfully accepted in the global market. A large set of methods and approaches has been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution towards the development of new products became clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systematic analysis of the literature on product development, eco-design and consumer involvement, a set of alternatives regarding consumer involvement towards the development of sustainable products and how these approaches could help improve the sustainable industry’s establishment in the general market. Still being developed in the course of the author’s PhD, the initial findings of the research show that understanding the benefits of sustainable behaviour leads to a more conscious acquisition and eventually to the implementation of sustainable change in the consumer. Thus, this paper is the initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing the existing literature and critically analysing it, this paper concludes that consumer involvement is strategic to improve the general understanding of sustainability and its features. The use of consumers and communities has been studied since the early 90s in order to exemplify uses and to guarantee a fast comprehension. The analysis also includes the importance of this approach for the increase of innovation and groundbreaking developments, thus requiring further research and practical implementation in order to better understand the implications and limitations of this methodology.
Keywords: Consumer involvement, Products development, Sustainability.
393 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.
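A minimal PyTorch sketch of the conditional GAN setup described above: a generator that maps noise plus a condition vector to a fixed-length time-series window, and a discriminator that scores (window, condition) pairs. The layer sizes, window length and conditioning scheme are illustrative assumptions, not the paper's model.

```python
# Minimal Conditional GAN sketch for fixed-length time-series windows.
# Architecture sizes and conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM, COND_DIM = 30, 16, 4   # window length, latent size, condition size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, SEQ_LEN),
        )
    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN + COND_DIM, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),
        )
    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch, cond_batch):
    """One alternating update: discriminator on real vs. fake, then generator."""
    n = real_batch.size(0)
    z = torch.randn(n, NOISE_DIM)
    fake = G(z, cond_batch)

    # Discriminator update: label real windows 1, generated windows 0
    d_loss = bce(D(real_batch, cond_batch), torch.ones(n, 1)) + \
             bce(D(fake.detach(), cond_batch), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fakes as real
    g_loss = bce(D(fake, cond_batch), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```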
392 The Determination of Stress Experienced by Nursing Undergraduate Students during Their Education
Authors: Gülden Küçükakça, Şefika Dilek Güven, Rahşan Kolutek, Seçil Taylan
Abstract:
Objective: Nursing students face stress factors affecting academic performance and quality of life from the first moments of their educational life. Stress causes health problems in students such as physical, psycho-social, and behavioral disorders and might damage the formation of professional identity by decreasing the efficiency of education. In addition to determining the stress experienced by nursing students during their education, this study aimed to help review theoretical and clinical education settings for bringing the stress of nursing students to a positive level and to raise awareness among educators concerning their own professional behaviors. Methods: The study was conducted with 315 students who were studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University in the academic year of 2015-2016 and agreed to participate in the study. A “Personal Information Form” prepared by the researchers upon the literature review and the “Nursing Education Stress Scale (NESS)” were used in this study. Data were assessed with analysis of variance and correlation analysis. Results: The mean NESS Scale score of the nursing students was estimated to be 66.46±16.08 points. Conclusions: As a result of this study, the stress level experienced by nursing undergraduate students during their education was determined to be high. In accordance with this result, it can be recommended to determine the sources of stress experienced by nursing undergraduate students during their education and to develop approaches to eliminate these stress sources.
Keywords: Stress, nursing education, nursing student, nursing education stress.
391 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method
Authors: F. C. Amadi, G. C. Enyi, G. Nasr
Abstract:
Relative permeabilities are practical factors that are used to correct the single-phase Darcy’s law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeability and capillary pressures are used. These parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core flooding experiments on shale core samples are expensive and time consuming before the various flow assumptions, for instance Darcy’s law, are achieved. This makes the application of core flooding simulations imperative, in which various analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively and at a relative pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study consisted of three steps. In the first step, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This research approach saves cost and time and is very reliable for the computation of relative permeabilities and capillary pressures at steady or unsteady state, in drainage or imbibition processes in the oil and gas industry, when compared to other methods.
Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.
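As a worked example of the correlation-based estimation discussed above, the sketch below evaluates a Corey-type relative permeability correlation, one of the standard correlation families fitted in SCAL simulators. The endpoints and exponents are illustrative assumptions, not the study's fitted values.

```python
# Sketch of a Corey-type relative permeability correlation; parameters are illustrative.
import numpy as np

def corey_krw_kro(sw, swc=0.2, sor=0.25, krw_end=0.3, kro_end=0.8, nw=3.0, no=2.0):
    """Water/oil relative permeabilities vs. water saturation (Corey form)."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)   # normalized saturation
    krw = krw_end * swn ** nw
    kro = kro_end * (1.0 - swn) ** no
    return krw, kro

sw = np.linspace(0.2, 0.75, 12)
krw, kro = corey_krw_kro(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```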
390 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be said that this is a prototype of computing that helps in saving the cost and time of a user. Practically, the use of cloud computing can be found in various fields like education, health, banking, etc. Cloud computing is an internet-dependent technology; thus it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in the cloud computing environment plays a vital role, since to achieve maximum utilization and user satisfaction cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to recreate the task calculations and the distributed scheduling methods. This research work discusses job scheduling for a distributed processing environment; by exploring this issue we find that it works with minimum time and less cost. In this work two load balancing techniques have been employed: ‘Throttled stack adjustment policy’ and ‘Active VM load balancing policy’ with two brokerage services, ‘Advanced Response Time’ and ‘Reconfigure Dynamically’, to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.
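A toy Python illustration of the allocation logic behind the policies compared above: Round Robin cycles through VMs regardless of load, while a throttled policy only assigns a request to a VM below a fixed concurrency limit. The VM names and capacity are assumptions; this is a simplified sketch, not CloudSim's implementation.

```python
# Toy comparison of Round Robin vs. throttled VM allocation; values are assumptions.
from itertools import cycle

VMS = ["vm0", "vm1", "vm2"]
CAPACITY = 2                     # max concurrent requests per VM (assumed)
active = {vm: 0 for vm in VMS}   # current load per VM

rr = cycle(VMS)
def round_robin_allocate():
    # Cycles through VMs in order, ignoring their current load
    return next(rr)

def throttled_allocate():
    # Returns the first VM that is below capacity, or None if all are busy
    for vm, load in active.items():
        if load < CAPACITY:
            return vm
    return None                  # all VMs busy: the request would be queued

for i in range(5):
    vm = throttled_allocate()
    if vm is not None:
        active[vm] += 1
    print(f"request {i}: round_robin -> {round_robin_allocate()}, throttled -> {vm}")
```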
389 Evaluation of Gingival Hyperplasia Caused by Medications
Authors: Ilma Robo, Saimir Heta, Greta Plaka, Vera Ostreni
Abstract:
Purpose: Drug gingival hyperplasia is an uncommon pathology encountered during routine work in dental units. The purpose of this paper is to present the clinical appearance of gingival hyperplasia caused by medications. There are three classes of medications known to cause hyperplasia, and the clinical cases encountered and included in this study have been compared with data from the literature. Materials and Methods: The study was conducted on a total of 311 patients, of whom 182 patients who met the inclusion criteria were included in our study. After each patient's history was recorded and it was confirmed that the patients were aware of their chronic illness and were undergoing treatment with drugs associated with gingival hypertrophy, a clinical examination of the oral cavity was performed, with assessment by vertical and horizontal evaluation according to the periodontal indexes. Results: From the data collected during the study, it was observed that 97% of patients with gingival hyperplasia were treated with nifedipine. In 84% of patients treated with the selected medicines, the gingival hyperplasia in the oral cavity had been present after exposure for a period of more than 1 year and 1 month. According to the GOI, about 21% of patients are in the first rank of this index, 52% in the second rank, 24% in the third rank and 3% in the fourth rank. According to the horizontal growth index of gingival hyperplasia, grade 1 included about 61% of patients and grade 2 included about 39% of patients with gingival hyperplasia. The bacterial index divides patients by degree: grade 0 – 8.2%, grade 1 – 32.4%, grade 2 – 14% and grade 3 – 45.1%. Conclusions: The highest percentage of gingival hyperplasia caused by drugs is due to dosing with nifedipine, applied for systemic treatment for a duration of more than 1 year.
Keywords: Drug gingival hyperplasia, horizontal growth index, vertical growth index.
388 Trunk and Gluteus-Medius Muscles’ Fatigability during Occupational Standing in Clinical Instructors with Low Back Pain
Authors: Eman A. Embaby, Amira A. A. Abdallah
Abstract:
Background: Occupational standing is associated with low back pain (LBP) development. Yet, trunk and gluteus-medius muscles’ fatigability has not been extensively studied during occupational standing. This study examined and correlated the rectus abdominis (RA), erector-spinae (ES), external oblique (EO), and gluteus-medius (GM) muscles’ fatigability on both sides while standing in a confined area for 30 min. Methods: Median frequency EMG data were collected from 15 female clinical instructors with chronic LBP (group A) and 15 asymptomatic controls (group B) (mean age 29.53±2.4 vs 29.07±2.4 years, weight 63.6±7 vs 60±7.8 kg, and height 162.73±4 vs 162.8±6 cm respectively) using a spectrum analysis program. Data were collected in the first and last 5 min of the standing task. Results: Using a mixed three-way ANOVA, group A showed significantly (p<0.05) lower frequencies for the right and left ES and right GM in the last 5 min, and significantly higher frequencies for the left RA in the first and last 5 min, than group B. In addition, the left ES and right EO, ES and GM in group B showed significantly higher frequencies, and the left ES in group A showed significantly lower frequencies, in the last 5 min compared with the first. Moreover, the right RA showed significantly higher frequencies than the left in the last 5 min in group B. Finally, there were significant (p<0.05) correlations among the median frequencies of the four tested muscles on the same side and between both sides in both groups. Discussion/Conclusions: Clinical instructors with LBP are more liable to have higher trunk and gluteus-medius muscle fatigue than asymptomatic individuals. Thus, endurance training for these muscles should be included in the rehabilitation of such patients.
Keywords: EMG, Fatigability, Gluteus-medius, LBP, Standing, Trunk.
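A minimal sketch of how a median frequency value, the fatigue measure analyzed above, can be computed from a raw EMG window via a Welch power spectral density estimate. The synthetic signal and sampling rate are illustrative assumptions, not the study's recordings.

```python
# Median frequency of an EMG window from a Welch PSD; signal and fs are illustrative.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
emg = rng.normal(0, 1, t.size) * np.sin(2 * np.pi * 80 * t)   # toy EMG-like signal

f, pxx = welch(emg, fs=fs, nperseg=1024)

# Median frequency: the frequency that splits the spectral power into two equal halves
cum_power = np.cumsum(pxx)
median_freq = f[np.searchsorted(cum_power, cum_power[-1] / 2.0)]
print(f"Median frequency: {median_freq:.1f} Hz")
```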
387 Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition
Authors: H K Lakshminarayana, J S Bhat, H M Mahesh
Abstract:
A new estimator for the evolutionary spectrum (ES) based on the short time Fourier transform (STFT) and the modified group delay function (MGDF) by signal decomposition (SD) is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window with a reduction in the Gibbs ripple. The present work overcomes the magnitude distortion observed in multi-component non-stationary signals with STFT and MGDF estimation of the ES using SD. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal-to-noise ratio by removing associated noise. The performance of the present method is illustrated for cross chirp and frequency shift keying (FSK) signals, which indicates that its performance is better than STFT-MGDF (STFT-GD) alone. Further, its noise immunity is better than that of the STFT. The SD-based methods, however, cannot bring out the frequency transition path from band to band clearly, as there will be a gap in the contour plot at the transition. The PRFB-based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
Keywords: Evolutionary Spectrum, Modified Group Delay, Discrete Cosine Transform, Harmonic Wavelet Transform, Perfect Reconstruction Filter Banks, Short Time Fourier Transform.
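A minimal sketch of the STFT stage of the estimator described above, applied to a two-component crossing-chirp test signal of the kind used in the evaluation. The modified group delay and signal decomposition stages are not reproduced here; the window length and sampling rate are assumptions.

```python
# STFT time-frequency energy of a cross-chirp test signal; parameters are assumptions.
import numpy as np
from scipy.signal import stft, chirp

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = chirp(t, f0=50, f1=400, t1=2) + chirp(t, f0=400, f1=50, t1=2)  # crossing chirps

f, tt, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Zxx) ** 2    # time-frequency energy distribution
print(spectrogram.shape)          # (frequency bins, time frames)
```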
386 An Acerbate Psychotics Symptoms, Social Support, Stressful Life Events, Medication Use Self-Efficacy Impact on Social Dysfunction: A Cross Sectional Self-Rated Study of Persons with Schizophrenia Patient and Misusing Methamphetamines
Authors: Ek-Uma Imkome, Jintana Yunibhand, Waraporn Chaiyawat
Abstract:
Background: Persons with schizophrenia who misuse methamphetamines suffer from social dysfunction that impacts their quality of life. Knowledge of the factors related to social dysfunction will guide effective intervention. Objectives: To determine the direct effect, indirect effect and total effect of acerbated psychotic symptoms, social support, stressful life events and medication use self-efficacy on social dysfunction in Thai schizophrenic patients with methamphetamine misuse. Methods: Data were collected from schizophrenic and methamphetamine-misusing patients by self-report. A linear structural relationship model was used to test the hypothesized path model. Results: The hypothesized model was found to fit the empirical data and explained 54% of the variance of the psychotic symptoms (χ² = 114.35, df = 92, p-value = 0.05, χ²/df = 1.24, GFI = 0.96, AGFI = 0.92, CFI = 1.00, NFI = 0.99, NNFI = 0.99, RMSEA = 0.02). The highest total effect on social dysfunction was that of psychotic symptoms (0.67, p<0.05). Medication use self-efficacy had a direct effect on psychotic symptoms (-0.25, p<0.01), and social support had a direct effect on medication use self-efficacy (0.36, p<0.01). Conclusions: Psychotic symptoms and stressful life events were the significant factors with a direct influence on social dysfunction. Therefore, interventions that are designed to manage these factors are crucial in order to enhance social functioning in this population.
Keywords: Psychotic symptoms, methamphetamine, schizophrenia, stressful life events, social dysfunction, social support, medication use self-efficacy.
385 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds
Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid
Abstract:
A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a brutal localized force or simply by a class of sliding displacements on the bathymetry. This deformation in the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements in an unstructured triangular mesh and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The numerical coupled method is highly efficient, accurate, well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into the problems at minimal computational cost.
Keywords: Dam-break flows, deformable beds, finite element method, finite volume method, linear elasticity, Shallow water equations.
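A compact sketch of the finite volume ingredient named above: a first-order Rusanov (local Lax-Friedrichs) flux for the 1D shallow water equations over a flat, rigid bed, applied to one explicit update of an idealized dam-break state. The coupled two-layer and elastic-bed aspects of the paper are not reproduced; the grid size, time step and initial depths are assumptions.

```python
# First-order Rusanov flux for the 1D shallow water equations; setup is illustrative.
import numpy as np

g = 9.81

def swe_flux(h, hu):
    """Physical flux of the 1D shallow water equations, U = (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

def rusanov_flux(UL, UR):
    """Numerical flux between left/right cell states using the maximum wave speed."""
    hL, huL = UL
    hR, huR = UR
    smax = max(abs(huL / hL) + np.sqrt(g * hL), abs(huR / hR) + np.sqrt(g * hR))
    return 0.5 * (swe_flux(hL, huL) + swe_flux(hR, huR)) - 0.5 * smax * (UR - UL)

# One explicit update of an idealized dam-break state (deep water left, shallow right)
nx, dx, dt = 100, 1.0, 0.01
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
hu = np.zeros(nx)
U = np.stack([h, hu])
F = np.array([rusanov_flux(U[:, i], U[:, i + 1]) for i in range(nx - 1)]).T
U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
print(U[0, 45:55])   # water depth near the dam after one step
```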
384 Comparative Study Using Weka for Red Blood Cells Classification
Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cells and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the amount of hemoglobin is lower than normal and is referred to as “anemia”. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite consisting of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.
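A small sketch of the kind of classifier comparison described above, using scikit-learn analogues of the WEKA algorithms (an RBF-kernel SVM and K-Nearest Neighbors; WEKA's RBF network has no direct scikit-learn counterpart) on synthetic stand-in features rather than the RBC image features of the study.

```python
# Classifier comparison sketch with scikit-learn; features are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic "geometrical + textural" feature table with a binary normal/abnormal label
X, y = make_classification(n_samples=300, n_features=10, n_informative=6, random_state=0)

for name, clf in [("SVM (RBF kernel)", SVC(kernel="rbf", gamma="scale")),
                  ("K-Nearest Neighbors", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: mean 10-fold accuracy = {acc:.2%}")
```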
383 Enhance Indoor Environment in Buildings and Its Effect on Improving Occupant's Health
Authors: Imad M. Assali
Abstract:
Recently, the world’s main problem has been global warming and climate change, affecting both outdoor and indoor environments, especially air quality (AQ), as a result of the vast migration of people from rural areas to urban areas. Cities have therefore become more crowded and denser due to an irregular population increase, and the accompanying urbanization has caused many problems for the environment, such as increasing land prices, changes in lifestyle, and new buildings that are not adapted to the climate, producing uncomfortable and unhealthy indoor building conditions. Interior environments are the places that create the most intimate relationship with the user. Consequently, the indoor environment quality (IEQ) of buildings has become uncomfortable and unhealthy for its occupants. The symptoms commonly associated with a poor indoor environment include itchiness, headache, fatigue, and respiratory complaints such as cough and congestion; the symptoms tend to improve over time or even disappear when people are away from the building. Therefore, designing a healthy indoor environment that fulfills human needs is the main concern for architects and interior designers. This research explores how occupant expectations and environmental attitudes may influence occupant health and satisfaction within the context of the indoor environment. In doing so, it reviews and contributes to the methods and tools used to evaluate only the indoor environment quality (IEQ) components of building performance. Its main aim is to review the literature on indoor human comfort. This is followed by a review of previous papers published on human comfort. Finally, this paper provides possible approaches at the design level for healthy buildings.
Keywords: Sustainable building, indoor environment quality (IEQ), occupant's health, active system, sick building syndrome (SBS).
382 Reliability-Based Maintenance Management Methodology to Minimise Life Cycle Cost of Water Supply Networks
Authors: Mojtaba Mahmoodian, Joshua Phelan, Mehdi Shahparvari
Abstract:
With a large percentage of countries’ total infrastructure expenditure attributed to water network maintenance, it is essential to optimise maintenance strategies to rehabilitate or replace underground pipes before failure occurs. The aim of this paper is to provide water utility managers with a maintenance management approach for underground water pipes, subject to external loading and material corrosion, to give the lowest life cycle cost over a predetermined time period. This reliability-based maintenance management methodology details the optimal years for intervention, the ideal number of maintenance activities to perform before replacement and specifies feasible renewal options and intervention prioritisation to minimise the life cycle cost. The study was then extended to include feasible renewal methods by determining the structural condition index and potential for soil loss, then obtaining the failure impact rating to assist in prioritising pipe replacement. A case study on optimisation of maintenance plans for the Melbourne water pipe network is considered in this paper to evaluate the practicality of the proposed methodology. The results confirm that the suggested methodology can provide water utility managers with a reliable systematic approach to determining optimum maintenance plans for pipe networks.
Keywords: Water pipe networks, maintenance management, reliability analysis, optimum maintenance plan.
381 Effect of Muscle Energy Technique on Anterior Pelvic Tilt in Lumbar Spondylosis Patients
Authors: Enas Elsayed Abutaleb, Mohamed Taher Eldesoky, Shahenda Abd El Rasol
Abstract:
Background: Muscle Energy Techniques (MET) have been widely used by manual therapists over the past years, but limited research has validated their use and there is limited evidence to substantiate the theories used to explain their effects. Objective: To investigate the effect of the Muscle Energy Technique (MET) on anterior pelvic tilt in patients with lumbar spondylosis. Design: Randomized controlled trial. Subjects: Thirty patients with anterior pelvic tilt of both sexes were involved, aged between 35 and 50 years old, and they were divided into MET and control groups with 15 patients in each. Methods: All patients received 3 sessions/week for 4 weeks, where the study group received MET, ultrasound (US) and infrared (IR), and the control group received US and IR only. The pelvic angle was measured by palpation meter, pain severity by the visual analogue scale and functional disabilities by the Oswestry disability index. Results: Both groups showed significant improvement in all measured variables. The MET group was significantly better than the control group in pelvic angle, pain severity, and functional disability, as the p-values were 0.001, 0.0001 and 0.0001 respectively. Conclusion and implication: The study group achieved greater improvement in all measured variables than the control group, which implies that the application of MET in combination with US and IR was more effective in improving the pelvic tilt angle, pain severity and functional disabilities than using electrotherapy only.
Keywords: Anterior pelvic tilt, lumbar spondylosis, muscle energy technique exercise, palpation meter.
380 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
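A minimal sketch of an automatic thresholding step of the kind the approach above builds on: Otsu's method applied to a fluorescence channel to produce a binary biofilm mask, followed by removal of small spurious objects. The synthetic image, threshold choice and size cutoff are assumptions; the paper's full pipeline (spurious-signal subtraction, substratum removal) is not reproduced.

```python
# Otsu thresholding of a synthetic fluorescence image into a binary biofilm mask.
import numpy as np
from skimage import filters, morphology

rng = np.random.default_rng(2)
image = rng.normal(0.1, 0.02, (256, 256))
image[80:180, 60:200] += 0.5          # synthetic bright "biofilm" region

threshold = filters.threshold_otsu(image)
mask = image > threshold
mask = morphology.remove_small_objects(mask, min_size=64)   # drop spurious specks
print(f"Otsu threshold = {threshold:.3f}, biofilm pixels = {mask.sum()}")
```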
379 Vibration Analysis of Gas Turbine SIEMENS 162MW - V94.2 Related to Iran Power Plant Industry in Fars Province
Authors: Omid A. Zargar
Abstract:
Vibration analysis of the most critical equipment is considered one of the most challenging activities in preventive maintenance. Utilities are the heart of the process in big industrial plants like petrochemical zones. Vibration analysis methods and condition monitoring systems for these kinds of equipment have developed greatly in recent years. On the other hand, there are many operating factors, like inlet and outlet pressures and temperatures, that should be monitored. In this paper, some of the most effective concepts and techniques related to gas turbine vibration analysis are discussed. In addition, a gas turbine SIEMENS 162MW - V94.2 vibration case history related to the Iranian power industry in Fars province is explained. The vibration monitoring system and machinery technical specification are introduced. Besides, absolute and relative vibration trends, turbine and compressor orbits, the Fast Fourier transform (FFT) of absolute vibrations, vibration modal analysis, turbine and compressor start-up and shut-down conditions, Bode diagrams for relative vibrations, Nyquist diagrams and waterfall or three-dimensional FFT diagrams in start-up and trip conditions are discussed with the relevant graphs. Furthermore, Split Resonance in gas turbines is discussed in detail. Moreover, some updated vibration monitoring systems, blade manufacturing techniques and modern damping mechanisms are discussed in this paper.
Keywords: Gas turbine, turbine compressor, vibration data collector, utility, condition monitoring, non-contact probe, Relative Vibration, Absolute Vibration, Split Resonance, Time Wave Form (TWF), Fast Fourier transform (FFT).
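A minimal sketch of the basic FFT step behind the spectra and waterfall plots discussed above: converting a time waveform (TWF) into an amplitude spectrum and reading off the dominant running-speed component. The sampling rate, shaft speed and amplitudes are illustrative assumptions, not measurements from the V94.2 unit.

```python
# Amplitude spectrum of a toy vibration time waveform; parameters are illustrative.
import numpy as np

fs = 5000.0                               # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
running_speed_hz = 50.0                   # 3000 rpm shaft speed (assumed)
signal = 2.0 * np.sin(2 * np.pi * running_speed_hz * t) \
       + 0.5 * np.sin(2 * np.pi * 2 * running_speed_hz * t)   # 1X and 2X components

spectrum = np.abs(np.fft.rfft(signal)) * 2 / signal.size
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"Dominant vibration component at {peak:.1f} Hz (1X running speed)")
```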
378 Forecasting Stock Price Manipulation in Capital Market
Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie
Abstract:
The aim of this article is to extend and develop econometric and network structure based methods which are able to distinguish price manipulation on the Tehran Stock Exchange. The principal goal of the present study is to offer a model for approximating price manipulation on the Tehran Stock Exchange. In order to do so, by applying a separation method, a sample consisting of 397 companies accepted at the Tehran Stock Exchange was selected, and information related to their prices and volumes of trades during the years 2001 until 2009 was collected; then, by performing the runs test, skewness test and duration correlation test, the selected companies were divided into two sets of manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and volume of trades in the manipulated companies, the date of the start of price manipulation was specified, and in this way, using the logit model, an artificial neural network and multiple discriminant analysis, together with information related to the size of the company, clarity of information, P/E ratio and liquidity of the stock one year prior to price manipulation, models for forecasting price manipulation of the stocks of companies present on the Tehran Stock Exchange were designed. In the end, the forecasting power of the models was studied by using the data of the test set. Whereas the forecasting power of the logit model for the test set was 92.1%, that of the artificial neural network was 94.1% and that of the multiple discriminant analysis model was 90.2%; therefore, all three aforesaid models have high power to forecast price manipulation and there is no considerable difference among the forecasting powers of these three models.
Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity.
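A small sketch of the logit stage of the forecasting models described above: a logistic regression classifying firms as manipulated or non-manipulated from the four predictors named (company size, clarity of information, P/E ratio, stock liquidity). The data are synthetic placeholders, not Tehran Stock Exchange records, and the holdout split is an assumption.

```python
# Logit (logistic regression) classifier sketch on synthetic placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 397
X = np.column_stack([rng.normal(size=n),     # firm size (standardized)
                     rng.normal(size=n),     # information clarity proxy
                     rng.normal(size=n),     # P/E ratio
                     rng.normal(size=n)])    # stock liquidity
y = (X @ np.array([0.8, -0.5, 0.3, 0.6]) + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
logit = LogisticRegression().fit(X_train, y_train)
print(f"Hold-out classification accuracy: {logit.score(X_test, y_test):.1%}")
```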
377 Real-time Haptic Modeling and Simulation for Prosthetic Insertion
Authors: Catherine A. Todd, Fazel Naghdy
Abstract:
In this work a surgical simulator is produced which enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data as published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles are acquired from the insertion studies conducted in the work to validate the haptic model. The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements as well as for training specialists in optimal force administration, using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with focus on simulator design, development, optimization and validation. The techniques may be transferrable to other medical applications that involve prosthetic device insertions where user vision is obstructed.
Keywords: Haptic modeling, medical device insertion, real-time visualization of prosthetic implantation, surgical simulation.
376 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.
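A minimal sketch of the classification setup described above: XGBoost handles NaN entries natively, so records with missing measurements can be classified in a four-class problem without imputation, evaluated with 10-fold cross validation. The feature matrix and labels below are synthetic placeholders, not ADNI data, and the hyperparameters are assumptions.

```python
# XGBoost multiclass classification with missing values; data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 20))
X[rng.random(X.shape) < 0.28] = np.nan        # ~28% missing values, as in the study
y = rng.integers(0, 4, size=400)              # 4 classes standing in for CN, EMCI, LMCI, AD

model = XGBClassifier(n_estimators=200, max_depth=4)  # NaNs handled natively by XGBoost
scores = cross_val_score(model, X, y, cv=10)          # 10-fold cross validation
print(f"Mean accuracy: {scores.mean():.2%}")
```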
375 Co-Disposal of Coal Ash with Mine Tailings in Surface Paste Disposal Practices: A Gold Mining Case Study
Authors: M. L. Dinis, M. C. Vila, A. Fiúza, A. Futuro, C. Nunes
Abstract:
The present paper describes the study of paste tailings prepared in the laboratory using gold tailings produced in a Finnish gold mine, with the incorporation of coal ash. Natural leaching tests were conducted with the original materials (tailings, fly and bottom ashes) and also with paste mixtures that were prepared with different percentages of tailings and ashes. After leaching, the solid wastes were physically and chemically characterized and the results were compared to those selected as blank – the unleached samples. The tailings and the coal ash, as well as the prepared mixtures, were characterized, in addition to the textural parameters, by the following measurements: grain size distribution, chemical composition and pH. The mixtures were also tested in order to characterize their mechanical behavior by measuring the flexural strength, the compressive strength and the consistency. The original tailing samples presented an alkaline pH because during their processing they had previously been submitted to pressure oxidation with destruction of the sulfides. Therefore, it was not possible to ascertain the effect of the coal ashes on the acid mine drainage. However, it was possible to verify that the paste reactivity was affected mostly by the bottom ash and that the tailings blended with bottom ash present lower mechanical strength than when blended with a combination of fly and bottom ash. Surface paste disposal offers an attractive alternative to traditional methods, in addition to the environmental benefits of incorporating large-volume wastes (e.g. bottom ash). However, a comprehensive characterization of the paste mixtures is crucial to optimize paste design in order to enhance engineering and environmental properties.
Keywords: Coal ash, gold tailings, paste, surface disposal.
374 A Numerical Strategy to Design Maneuverable Micro-Biomedical Swimming Robots Based on Biomimetic Flagellar Propulsion
Authors: Arash Taheri, Meysam Mohammadi-Amin, Seyed Hossein Moosavy
Abstract:
Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A new type of propulsion developed recently uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. There is a lack of efficient methods for designing this type of propulsion system. The goal of this paper is to overcome this lack, and to this end a numerical strategy is proposed to design multi-flagella propulsion systems. The strategy is based on the implementation of the regularized stokeslet and rotlet theory, RFT theory and the new approach of “local corrected velocity”. The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the robot’s net forces and moments are considered. Then a multi-layer perceptron artificial neural network is designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration and useful demonstrative results are obtained.
Keywords: Artificial Neural Network, Biomimetic Microrobots, Flagellar Propulsion, Swimming Robots.
373 Case Study of the Roma Tomato Distribution Chain: A Dynamic Interface for an Agricultural Enterprise in Mexico
Authors: Ernesto A. Lagarda-Leyva, Manuel A. Valenzuela L., José G. Oshima C., Arnulfo A. Naranjo-Flores
Abstract:
From August to December of 2016, a diagnostic and strategic planning study was carried out on the supply chain of the company Agropecuaria GABO S.A. de C.V. The final product of the study was the development of the strategic plan and a project portfolio to meet the demands of the three links in the supply chain of the Roma tomato exported annually to the United States of America. In this project, the strategic objective of ensuring the proper handling of the product was selected, and one of the goals associated with this was the employment of quantitative methods to support decision making. Considering the antecedents, the objective of this case study was to develop a model to analyze the behavioral dynamics in the distribution chain, from the logistics of storage and shipment of Roma tomato in 81-case pallets (11.5 kg per case), to the two pre-cooling rooms and eventual loading onto transports, seeking to reduce the bottleneck and the associated costs by means of a dynamic interface. The methodology used was that of system dynamics, considering four phases that were adapted to the purpose of the study: 1) the conceptualization phase; 2) the formulation phase; 3) the evaluation phase; and 4) the communication phase. The main practical conclusions lead to the possibility of reducing both the bottlenecks in the cooling rooms and the costs by simulating scenarios and modifying certain policies. Furthermore, the creation of the dynamic interface between the model and the stakeholders was achieved by generating interaction with buttons and simple instructions that allow the user to make modifications and observe diverse behaviors.
Keywords: Agrilogistics, distribution, scenarios, system dynamics.
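A toy stock-and-flow sketch in the spirit of the system dynamics model described above: pallets arrive from the packing line, accumulate in two pre-cooling rooms of fixed capacity, and are drawn down by truck loading, with overflow standing in for the bottleneck. The rates and capacities are assumptions, not the company's figures.

```python
# Toy stock-and-flow simulation of two pre-cooling rooms; all rates are assumptions.
ROOM_CAPACITY = 400          # pallets per room (assumed)
arrival_rate = 90            # pallets arriving per hour (assumed)
loading_rate = 70            # pallets loaded onto transport per hour (assumed)

stock = [0, 0]               # pallets currently in room 1 and room 2
delayed = 0                  # pallets that could not enter either room (bottleneck)

for hour in range(1, 25):
    incoming = arrival_rate
    for i in range(2):                       # fill rooms in order
        placed = min(ROOM_CAPACITY - stock[i], incoming)
        stock[i] += placed
        incoming -= placed
    delayed += incoming                      # overflow indicates a bottleneck
    outflow = min(loading_rate, sum(stock))  # trucks draw from total stored pallets
    for i in range(2):                       # unload from each room in order
        take = min(stock[i], outflow)
        stock[i] -= take
        outflow -= take

print(f"Stock after 24 h: {stock}, pallets delayed by the bottleneck: {delayed}")
```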
372 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is supposed to be the most accurate method to predict the seismic demand of structures. On the other hand, the computational time this method requires to achieve a result is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of the seismic analysis of structures makes the optimization algorithms more practical. Apparently, the invented approximate methods produce some amount of error in comparison with exact time history analysis, but the recently proposed methods, namely the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS), drastically reduce the computational time by combining the peak responses in each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied towards the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated by calculation of the modal displacement of the basic structure (in which the modal displacement has been calculated). Shear steel sampled structures are selected as case studies. The error of applying the introduced method is calculated by comparison of the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (in view of the time of peak ground acceleration).
Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.
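A tiny worked example of the SRSS combination mentioned above: peak modal responses are combined as the square root of the sum of their squares to approximate the peak total response. The modal peak values are illustrative numbers only.

```python
# SRSS combination of peak modal responses; the modal values are illustrative.
import numpy as np

peak_modal_displacements = np.array([0.12, 0.05, 0.02])   # m, per mode (illustrative)
srss_estimate = np.sqrt(np.sum(peak_modal_displacements ** 2))
print(f"SRSS estimate of peak displacement: {srss_estimate:.3f} m")
```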
371 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps
Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li
Abstract:
The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used to detect DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace expands FlowDroid in understanding DL-based apps and detecting security issues therein.
Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.
370 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error and the peak signal-to-noise ratio. The method of image compression that will be analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least “important” information. In JPEG, both chroma components are downsampled simultaneously, but in this paper we will compare the results when the compression is done by downsampling a single chroma component. We will demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled as compared to downsampling the chrominance red, but that the peak signal-to-noise ratio is higher when the chrominance red is downsampled as compared to downsampling the chrominance blue. In particular we will use the hats.jpg image as a demonstration of JPEG compression using a low-pass filter and demonstrate that the image is compressed with barely any visual differences with both methods.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.
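A minimal sketch of the comparison described above: convert an RGB image to YCbCr, downsample only one chroma channel (Cb or Cr) by a factor of two, upsample it back, and measure the PSNR against the original. The random test image stands in for hats.jpg, and the DCT/quantization stages of full JPEG compression are not reproduced.

```python
# PSNR comparison when downsampling only Cb vs. only Cr; the test image is random.
import numpy as np
from PIL import Image

def psnr(original, reconstructed):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rgb = Image.fromarray(np.random.default_rng(5).integers(0, 256, (256, 256, 3), dtype=np.uint8))

def downsample_one_chroma(img, channel):          # channel 1 = Cb, 2 = Cr
    y, cb, cr = img.convert("YCbCr").split()
    chans = [y, cb, cr]
    small = chans[channel].resize((img.width // 2, img.height // 2))
    chans[channel] = small.resize((img.width, img.height))
    return Image.merge("YCbCr", chans).convert("RGB")

for name, ch in [("Cb downsampled", 1), ("Cr downsampled", 2)]:
    out = downsample_one_chroma(rgb, ch)
    print(f"{name}: PSNR = {psnr(np.array(rgb), np.array(out)):.2f} dB")
```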
369 Effect of Cold, Warm or Contrast Therapy on Controlling Knee Osteoarthritis Associated Problems
Authors: Amal E. Shehata, Manal E. Fareed
Abstract:
Osteoarthritis (OA) is the most prevalent and by far the most common debilitating form of arthritis, and can be defined as a degenerative condition affecting the synovial joint. Patients suffering from osteoarthritis often complain of a dull aching pain on movement. Physical agents such as heat or cold therapy can fight the painful process when correctly indicated and used. Aim: This study was carried out to compare the effect of cold, warm and contrast therapy on controlling knee osteoarthritis associated problems. Setting: The study was carried out in the orthopedic outpatient clinics of Menoufia University and Teaching Hospitals, Egypt. Sample: A convenient sample of 60 adult patients with unilateral knee osteoarthritis. Tools: Three tools were utilized to collect the data. Tool I: an interviewing questionnaire, comprising three parts covering sociodemographic data, medical data and adverse effects of the treatment protocol. Tool II: the Knee Injury and Osteoarthritis Outcome Score (KOOS), consisting of five main parts. Tool III: a 0-10 numeric pain rating scale. Results: The total knee symptoms score decreased from moderate symptoms pre-intervention to mild symptoms after the warm and contrast methods of therapy, but contrast therapy had a more significant effect in reducing the knee symptoms and pain than the other methods. Conclusions: All three methods of therapy resulted in improvement in all knee symptoms and pain, but the most appropriate protocol of treatment to relieve symptoms and pain was contrast therapy.
Keywords: Knee Osteoarthritis, Cold, Warm and Contrast Therapy.
368 Evaluation of Chiller Power Consumption Using Grey Prediction
Authors: Tien-Shun Chan, Yung-Chung Chang, Cheng-Yu Chu, Wen-Hui Chen, Yuan-Lin Chen, Shun-Chong Wang, Chang-Chun Wang
Abstract:
98% of the energy needed in Taiwan has been imported. The prices of petroleum and electricity have been increasing. In addition, facility capacity, the amount of electricity generated, the amount of electricity consumed and the number of Taiwan Power Company customers have continued to increase. For these reasons, energy conservation has become an important topic. In the past, linear regression was used to establish the power consumption models for chillers. In this study, grey prediction is used to evaluate the power consumption of a chiller so as to lower the total power consumption at peak load (so that the relevant power providers do not need to keep on increasing their power generation capacity and facility capacity). In grey prediction, only a few numerical values (at least four) are needed to establish the power consumption models for chillers. If the part load ratio (PLR), the temperatures of the supply chilled water and return chilled water, and the temperatures of the supply cooling water and return cooling water are taken into consideration, quite accurate results (with accuracy close to 99% for short-term predictions) may be obtained. Through such methods, we can predict whether the power consumption at peak load will exceed the contract power capacity signed between the corresponding entity and Taiwan Power Company. If the power consumption at peak load exceeds the power demand, the temperature of the supply chilled water may be adjusted so as to reduce the PLR and hence lower the power consumption.
Keywords: Grey system theory, grey prediction, chiller.
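A minimal sketch of the GM(1,1) grey prediction model that underlies the approach above: only a short sequence of observations (at least four) is needed to fit the model and forecast the next values. The power readings below are illustrative numbers, not plant data.

```python
# GM(1,1) grey prediction sketch; the power readings are illustrative numbers only.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) model to the sequence x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background (mean) sequence
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # developing coefficient, grey input
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse accumulation
    return x0_hat[len(x0):]

# At least four observations suffice, e.g. hourly chiller power readings in kW (illustrative)
history = [820, 846, 875, 901, 930]
print(gm11_forecast(history, steps=2))
```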
367 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the size of the networks becomes increasingly large due to the demands of practical applications, which poses a significant challenge to constructing a high-performance implementation of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low power consumption of hardware devices. Therefore, it is particularly critical to choose a moderate computing platform for the hardware acceleration of CNNs. This article aims to survey the recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator based on FPGA under different devices and network models are overviewed, and the versions for Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we give a discussion on different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research. We also give a prospect for the future development of FPGA-based accelerators.
Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.
366 Inquiry on the Improvement Teaching Quality in the Classroom with Meta-Teaching Skills
Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin
Abstract:
When teachers reflect on and evaluate whether their teaching methods actually have an impact on students’ learning, they will adjust their practices accordingly. This inevitably improves their students’ learning and performance. The meta-teaching approach can invigorate and create a passion for teaching. It thus helps to increase commitment to and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Teachers’ metacognitive thinking includes the use of metacognitive knowledge, which consists of different types of knowledge: declarative, procedural and conditional. The ability of the teachers to plan, monitor and evaluate the teaching process can also be determined. This study was conducted on 377 graduate teachers in the Klang Valley, Malaysia. The stratified sampling method was selected for the purpose of this study. The metacognitive teaching inventory, consisting of 24 items, is called InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed that the mean level is high for two components of metacognitive knowledge, declarative knowledge (mean = 4.16) and conditional knowledge (mean = 4.11), whereas the mean of procedural knowledge is 4.00 (moderately high). Similarly, the levels of knowledge in monitoring (mean = 4.11) and evaluating (mean = 4.00) indicate high scores, while planning (mean = 4.00) is moderately high among teachers. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teachers’ teaching in the classroom. Thus, the researcher recommends that further studies focus on training programs for teachers on metacognitive skills and also on developing creative thinking among teachers.
Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching and regulation of cognitive.