Search results for: teaching methods.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4433

383 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (representing the bare earth) and non-ground points (representing buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is a challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter uses a weight function to allocate a weight to each point of the data. Furthermore, unlike most existing methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) is 0.35 m.
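As a rough illustration of the spline-based filtering idea described above, the sketch below fits a smoothing spline to a one-dimensional synthetic profile, iteratively down-weights returns that sit well above the fitted surface, and reports the RMSE against the reference terrain. The weight rule, threshold and data are hypothetical placeholders, not the weight function used in the paper.

```python
# Minimal 1D sketch of spline-based ground filtering (hypothetical data and
# weighting rule; the paper's actual weight function is not reproduced here).
import numpy as np
from scipy.interpolate import UnivariateSpline

def filter_ground(x, z, iterations=5, threshold=0.5):
    """Iteratively fit a smoothing spline and down-weight points above it."""
    w = np.ones_like(z)
    for _ in range(iterations):
        spline = UnivariateSpline(x, z, w=w, s=len(z))
        residual = z - spline(x)
        # Points far above the fitted surface are likely vegetation returns.
        w = np.where(residual > threshold, 0.01, 1.0)
    return spline

def rmse(predicted, reference):
    return np.sqrt(np.mean((predicted - reference) ** 2))

# Example: synthetic profile with canopy returns added on top of the terrain.
x = np.linspace(0, 100, 500)
terrain = 0.05 * x + np.sin(x / 10.0)
z = terrain + np.where(np.random.rand(500) < 0.3, np.random.uniform(2, 15, 500), 0)
dtm = filter_ground(x, z)
print("RMSE vs. reference terrain:", rmse(dtm(x), terrain))
```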

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.

382 An In-depth Experimental Study of Wax Deposition in Pipelines

Authors: M. L. Arias, J. D’Adamo, M. N. Novosad, P. A. Raffo, H. P. Burbridge, G. O. Artana

Abstract:

Shale oils are highly paraffinic and, consequently, can create wax deposits that foul pipelines during transportation. Several factors must be considered when designing pipelines or treatment programs that prevent wax deposition, including the chemical species in crude oils, flow rates, pipe diameters and temperature. This paper describes the wax deposition study carried out within the framework of the YPF Tecnología S.A. (Y-TEC) flow assurance projects, as part of the process to achieve a better understanding of wax deposition issues. Laboratory experiments were performed on a medium-size wax deposition loop, 1 inch in diameter and 15 meters long, equipped with a solid detector system, an online microscope to visualize crystals, and temperature and pressure sensors along the loop pipe. A baseline test was performed with diesel with no added paraffin or additive content. Tests were undertaken with different temperatures of the circulating and cooling fluids at different flow conditions. Then, a solution formed by incorporating a paraffin into the diesel was considered, and tests varying flow rate and cooling rate were run again. Viscosity, density, WAT (Wax Appearance Temperature) with DSC (Differential Scanning Calorimetry), pour point and cold finger measurements were carried out to determine the physical properties of the working fluids. The results obtained in the loop were analyzed through momentum balance and heat transfer models. To determine possible paraffin deposition scenarios, the temperature and pressure output signals of the loop were studied and compared with static laboratory WAT methods.

Keywords: Paraffin deposition, wax, oil pipelines, experimental pipe loop.

381 The Analysis of Deceptive and Truthful Speech: A Computational Linguistic Based Method

Authors: Seham El Kareh, Miramar Etman

Abstract:

Recently, detecting liars and extracting features which distinguish them from truth-tellers have been the focus of a wide range of disciplines. To the authors' best knowledge, most of the work has been done on facial expressions and body gestures, but only a few works have addressed the language used by liars and truth-tellers. This paper sheds light on four axes. The first axis copes with building an audio corpus of deceptive and truthful speech for Egyptian Arabic speakers. The second axis focuses on examining the human perception of lies and demonstrating the need for computational linguistic methods to extract features which characterize truthful and deceptive speech. The third axis is concerned with building a linguistic analysis program that can extract from the corpus the inter- and intra-linguistic cues for deceptive and truthful speech. The program built here is based on selected categories from the Linguistic Inquiry and Word Count program. Our results demonstrate that, when lying, Egyptian Arabic speakers preferred to use first-person pronouns and the present tense rather than the past tense, and their lies lacked second-person pronouns; when telling the truth, they preferred to use verbs related to motion and nouns related to time. The results also show that a larger dataset is needed to prove the significance of words related to emotions and numbers.
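The sketch below illustrates the kind of LIWC-style cue counting described above: token frequencies for a few linguistic categories, normalised by transcript length. The category word lists are hypothetical English placeholders rather than the Egyptian Arabic lexicon used in the study.

```python
# Illustrative sketch of extracting LIWC-style cue counts from a transcript.
# The category word lists below are hypothetical English placeholders, not the
# Egyptian Arabic lexicon used in the paper.
from collections import Counter

CATEGORIES = {
    "first_person": {"i", "me", "my", "mine", "we", "our"},
    "second_person": {"you", "your", "yours"},
    "motion_verbs": {"go", "went", "walk", "move", "run"},
    "time_nouns": {"day", "hour", "morning", "yesterday", "today"},
}

def cue_counts(transcript: str) -> dict:
    tokens = transcript.lower().split()
    counts = Counter()
    for token in tokens:
        for category, words in CATEGORIES.items():
            if token in words:
                counts[category] += 1
    # Normalise by transcript length so long and short statements are comparable.
    total = max(len(tokens), 1)
    return {c: counts[c] / total for c in CATEGORIES}

print(cue_counts("I went to the market yesterday and you saw me there"))
```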

Keywords: Egyptian Arabic corpus, computational analysis, deceptive features, forensic linguistics, human perception, truthful features.

380 Anthropometric Correlates of Balance Performance in Non-Institutionalized Elderly

Authors: Okafor UAC, Ibeabuchimn, Omidina JO, Igwesi-Chidobe CN, Akinbo SRA

Abstract:

Purpose: The fear of falling is a major concern among the elderly. Sixty-five percent of individuals older than 60 years of age experience loss of balance, often on a daily basis. Therefore, balance assessment in the elderly deserves special attention due to its importance in functional mobility and safety. This study aimed at assessing balance performance and comparing some anthropometric parameters among a Nigerian non-institutionalized elderly population.

Methods: Sixty-one elderly subjects (31 males and 30 females) participated in this study. Their ages ranged between 62 and 84 years. Ability to maintain balance was assessed using the Functional Reach Test (FRT) and the Sharpened Romberg Test (SRT). Anthropometric data including age, weight, height, arm length, leg length, bi-acromial breadth, foot length and trunk length were also collected. Analysis was done using Pearson's product moment correlation coefficient and the independent t-test, with the level of significance set at p<0.05.

Results: A significant age-related relationship was observed between balance performance and bi-acromial breadth among the elderly population. Gender and visual input also had a significant influence on balance performance. The other anthropometric variables (age, weight, height, arm length, leg length, foot length and trunk length) showed no significant relationship with balance performance in this elderly sample.

Conclusion: Only specific anthropometric variables may affect balance performance among the healthy elderly. The study further highlights the need for routine assessment of both static and dynamic balance to detect and appropriately manage aging-related diseases which could affect balance in the elderly.

Keywords: Balance Performance, Anthropometry, Non-institutionalized Elderly.

379 Customer Involvement in the Development of New Sustainable Products: A Review of the Literature

Authors: Natalia Moreira, Trevor Wood-Harper

Abstract:

The acceptance of sustainable products by the final consumer is still one of the challenges of the industry, which constantly seeks alternative approaches to be successfully accepted in the global market. A large set of methods and approaches has been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution towards the development of new products became clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systematic analysis of the literature on product development, eco-design and consumer involvement, a set of alternatives regarding consumer involvement in the development of sustainable products and how these approaches could help improve the sustainable industry's establishment in the general market. Still being developed in the course of the author's PhD, the initial findings of the research show that understanding the benefits of sustainable behaviour leads to more conscious acquisition and eventually to the implementation of sustainable change in the consumer. This paper is thus the initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing the existing literature and critically analysing it, this paper concludes that consumer involvement is strategic to improve the general understanding of sustainability and its features. The use of consumers and communities has been studied since the early 90s in order to exemplify uses and guarantee fast comprehension. The analysis also includes the importance of this approach for increasing innovation and groundbreaking developments, thus requiring further research and practical implementation in order to better understand the implications and limitations of this methodology.

Keywords: Consumer involvement, Products development, Sustainability.

378 Time Series Simulation by Conditional Generative Adversarial Net

Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto

Abstract:

Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
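As a small illustration of the risk-measure step mentioned above, the following sketch computes VaR and Expected Shortfall from a set of simulated P&L scenarios; a heavy-tailed random sample stands in for the output of a trained CGAN generator.

```python
# Sketch of computing VaR and Expected Shortfall from simulated P&L scenarios,
# e.g. returns sampled from a trained CGAN generator (here replaced by a
# hypothetical heavy-tailed sample for illustration).
import numpy as np

def var_es(pnl_scenarios, alpha=0.99):
    """VaR and ES at confidence level alpha, losses reported as positive numbers."""
    losses = -np.asarray(pnl_scenarios)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(0)
simulated_pnl = rng.standard_t(df=4, size=100_000) * 0.01  # stand-in for CGAN output
var99, es99 = var_es(simulated_pnl, alpha=0.99)
print(f"99% VaR: {var99:.4f}, 99% ES: {es99:.4f}")
```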

Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.

377 The Determination of Stress Experienced by Nursing Undergraduate Students during Their Education

Authors: Gülden Küçükakça, Şefika Dilek Güven, Rahşan Kolutek, Seçil Taylan

Abstract:

Objective: Nursing students face stress factors affecting academic performance and quality of life from the first moments of their educational life. Stress causes health problems in students, such as physical, psycho-social, and behavioral disorders, and might damage the formation of professional identity by decreasing the efficiency of education. In addition to determining the stress experienced by nursing students during their education, this study aimed to help review theoretical and clinical education settings so that the stress of nursing students can be brought to a positive level, and to raise the awareness of educators concerning their own professional behaviors. Methods: The study was conducted with 315 students studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University in the academic year of 2015-2016 who agreed to participate in the study. A "Personal Information Form" prepared by the researchers upon literature review and the "Nursing Education Stress Scale (NESS)" were used in this study. Data were assessed with analysis of variance and correlation analysis. Results: The mean NESS Scale score of the nursing students was estimated to be 66.46±16.08 points. Conclusions: As a result of this study, the stress level experienced by nursing undergraduate students during their education was determined to be high. In accordance with this result, it can be recommended to determine the sources of stress experienced by nursing undergraduate students during their education and to develop approaches to eliminate these stress sources.

Keywords: Stress, nursing education, nursing student, nursing education stress.

376 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors that are used to correct the single-phase Darcy's law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeability and capillary pressures are used. These parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core flooding experiments on shale core samples are expensive and time consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes it imperative to apply core flooding simulations, in which various analyses of relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively and at a reasonable pace. This paper presents a Sendra software simulation of core flooding to estimate relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. In the first step, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable in the computation of relative permeabilities and capillary pressures at steady or unsteady state and for drainage or imbibition processes in the oil and gas industry when compared to other methods.
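For readers unfamiliar with the correlations mentioned above, the sketch below evaluates a Corey-type relative permeability correlation, one of the standard forms such simulators can fit; the end points and exponents are illustrative assumptions, not the values obtained for the Marcellus sample.

```python
# Illustrative Corey-type relative permeability correlation (one of the standard
# forms a simulator such as Sendra can fit; exponents and end points below are
# hypothetical, not the Marcellus values from the study).
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.15, krw_max=0.4, kro_max=0.9, nw=3.0, no=2.0):
    """Water/oil relative permeabilities as a function of water saturation sw."""
    swe = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalised saturation
    krw = krw_max * swe ** nw
    kro = kro_max * (1.0 - swe) ** no
    return krw, kro

sw = np.linspace(0.2, 0.85, 6)
for s, (krw, kro) in zip(sw, zip(*corey_relperm(sw))):
    print(f"Sw={s:.2f}  krw={krw:.3f}  kro={kro:.3f}")
```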

Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.

375 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing can be defined as one of the prominent technologies that let a user change, configure and access services online. It can be regarded as a computing paradigm that helps save the cost and time of a user; in practice, cloud computing is used in various fields such as education, health, banking, etc. Cloud computing is an internet-dependent technology, and thus it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in the cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to simulate the task calculations and the distributed scheduling methods. This research work discusses job scheduling for the distributed processing environment and, by exploring this issue, finds that it works with minimum time and lower cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
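As a point of reference for the Round Robin baseline mentioned above, the sketch below assigns tasks to virtual machines in cyclic order and accumulates per-VM processing time; the task lengths and VM speeds are hypothetical values, not CloudSim parameters.

```python
# Minimal sketch of the Round Robin baseline: tasks are assigned to virtual
# machines in cyclic order and per-VM processing time is accumulated.
# VM speeds and task lengths are hypothetical values, not CloudSim parameters.
from itertools import cycle

def round_robin_schedule(task_lengths, vm_speeds):
    """Return per-VM completion times after cyclic assignment of tasks."""
    finish = [0.0] * len(vm_speeds)
    vm_ids = cycle(range(len(vm_speeds)))
    for length in task_lengths:
        vm = next(vm_ids)
        finish[vm] += length / vm_speeds[vm]   # processing time on that VM
    return finish

tasks = [400, 250, 900, 120, 610, 330]          # task lengths (e.g. MI)
vms = [1000, 500]                               # VM speeds (e.g. MIPS)
per_vm = round_robin_schedule(tasks, vms)
print("Per-VM completion time:", per_vm)
print("Makespan:", max(per_vm))
```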

Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.

374 Evaluation of Gingival Hyperplasia Caused by Medications

Authors: Ilma Robo, Saimir Heta, Greta Plaka, Vera Ostreni

Abstract:

Purpose: Drug-induced gingival hyperplasia is an uncommon pathology encountered during routine work in dental units. The purpose of this paper is to present the clinical appearance of gingival hyperplasia caused by medications. There are three classes of medications that cause hyperplasia, and the clinical cases encountered and included in this study have been compared with data from the literature. Materials and Methods: The study was conducted on a total of 311 patients, of whom 182 met the inclusion criteria and were included in our study. After each patient's history was recorded and it was established that the patients were aware of their chronic illness and were undergoing treatment with drugs associated with gingival hyperplasia, a clinical examination of the oral cavity was performed, with assessment by vertical and horizontal evaluation according to the periodontal indexes. Results: From the data collected during the study, it was observed that 97% of patients with gingival hyperplasia are treated with nifedipine. In 84% of patients treated with the selected medicines and presenting gingival hyperplasia in the oral cavity, the exposure had lasted for a period of more than 1 year and 1 month. According to the GOI, about 21% of patients are in the first rank of this index, 52% in the second rank, 24% in the third rank and 3% in the fourth rank. According to the horizontal growth index of gingival hyperplasia, grade 1 included about 61% of patients and grade 2 included about 39% of patients with gingival hyperplasia. The bacterial index divides patients by degree: grade 0 - 8.2%, grade 1 - 32.4%, grade 2 - 14% and grade 3 - 45.1%. Conclusions: The highest percentage of gingival hyperplasia caused by drugs is due to nifedipine, with dosing and application for systemic treatment for more than 1 year.

Keywords: Drug gingival hyperplasia, horizontal growth index, vertical growth index.

373 Trunk and Gluteus-Medius Muscles’ Fatigability during Occupational Standing in Clinical Instructors with Low Back Pain

Authors: Eman A. Embaby, Amira A. A. Abdallah

Abstract:

Background: Occupational standing is associated with low back pain (LBP) development. Yet, trunk and gluteus-medius muscles' fatigability has not been extensively studied during occupational standing. This study examined and correlated the fatigability of the rectus abdominus (RA), erector-spinae (ES), external oblique (EO), and gluteus-medius (GM) muscles on both sides while standing in a confined area for 30 min. Methods: Median frequency EMG data were collected from 15 female clinical instructors with chronic LBP (group A) and 15 asymptomatic controls (group B) (mean age 29.53±2.4 vs 29.07±2.4 years, weight 63.6±7 vs 60±7.8 kg, and height 162.73±4 vs 162.8±6 cm, respectively) using a spectrum analysis program. Data were collected in the first and last 5 min of the standing task. Results: Using a mixed three-way ANOVA, group A showed significantly (p<0.05) lower frequencies for the right and left ES and right GM in the last 5 min and significantly higher frequencies for the left RA in the first and last 5 min than group B. In addition, the left ES and the right EO, ES and GM in group B showed significantly higher frequencies, and the left ES in group A showed significantly lower frequencies, in the last 5 min compared with the first. Moreover, the right RA showed significantly higher frequencies than the left in the last 5 min in group B. Finally, there were significant (p<0.05) correlations among the median frequencies of the four tested muscles on the same side and between both sides in both groups. Discussion/Conclusions: Clinical instructors with LBP are more liable to higher trunk and gluteus-medius muscle fatigue than asymptomatic individuals. Thus, endurance training for these muscles should be included in the rehabilitation of such patients.
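For clarity on the measure used above, the sketch below computes the EMG median frequency, i.e. the frequency that divides the power spectrum into two halves of equal power, from a Welch estimate; the sampling rate and signal are placeholders for the recorded EMG windows.

```python
# Sketch of computing the EMG median frequency: the frequency that splits the
# power spectrum into two halves of equal power. The sampling rate and the
# synthetic signal are placeholders for the recorded EMG windows.
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    freqs, psd = welch(emg, fs=fs, nperseg=1024)
    cumulative = np.cumsum(psd)
    idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
    return freqs[idx]

fs = 1000.0                                    # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)
emg = np.random.randn(t.size) * np.sin(2 * np.pi * 80 * t)  # toy surrogate signal
print("Median frequency (Hz):", median_frequency(emg, fs))
```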

Keywords: EMG, Fatigability, Gluteus-medius, LBP, Standing, Trunk.

372 Improved Estimation of Evolutionary Spectrum based on Short Time Fourier Transforms and Modified Magnitude Group Delay by Signal Decomposition

Authors: H K Lakshminarayana, J S Bhat, H M Mahesh

Abstract:

A new estimator for the evolutionary spectrum (ES) based on the short time Fourier transform (STFT) and the modified group delay function (MGDF) with signal decomposition (SD) is proposed. The STFT, due to its built-in averaging, suppresses the cross terms, and the MGDF preserves the frequency resolution of the rectangular window while reducing the Gibbs ripple. The present work overcomes the magnitude distortion observed in multi-component non-stationary signals with STFT and MGDF estimation of the ES by using SD. The SD is achieved either through the discrete cosine transform based harmonic wavelet transform (DCTHWT) or through perfect reconstruction filter banks (PRFB). The MGDF also improves the signal to noise ratio by removing associated noise. The performance of the present method is illustrated for cross chirp and frequency shift keying (FSK) signals, which indicates that its performance is better than STFT-MGDF (STFT-GD) alone. Furthermore, its noise immunity is better than that of the STFT. The SD based methods, however, cannot bring out the frequency transition path from band to band clearly, as there will be a gap in the contour plot at the transition. The PRFB based STFT-SD shows better performance than the DCTHWT decomposition method for STFT-GD.
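The sketch below illustrates only the STFT stage on a two-component cross-chirp test signal; the modified group delay and signal decomposition stages of the proposed estimator are not reproduced.

```python
# Sketch of the STFT stage for a two-component cross-chirp test signal; the
# modified group delay and signal decomposition stages of the paper are not
# reproduced here.
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# Two crossing chirps: one rising, one falling in frequency.
x = np.sin(2 * np.pi * (50 + 100 * t) * t) + np.sin(2 * np.pi * (450 - 100 * t) * t)

f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Z) ** 2               # time-frequency energy distribution
peak_track = f[np.argmax(spectrogram, axis=0)]
print("Dominant frequency per frame (Hz):", np.round(peak_track[:10], 1))
```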

Keywords: Evolutionary Spectrum, Modified Group Delay, Discrete Cosine Transform, Harmonic Wavelet Transform, Perfect Reconstruction Filter Banks, Short Time Fourier Transform.

371 An Acerbate Psychotics Symptoms, Social Support, Stressful Life Events, Medication Use Self-Efficacy Impact on Social Dysfunction: A Cross Sectional Self-Rated Study of Persons with Schizophrenia Patient and Misusing Methamphetamines

Authors: Ek-Uma Imkome, Jintana Yunibhand, Waraporn Chaiyawat

Abstract:

Background: Persons with schizophrenia who misuse methamphetamines suffer from social dysfunction that impacts their quality of life. Knowledge of the factors related to social dysfunction will guide effective intervention. Objectives: To determine the direct effect, indirect effect and total effect of exacerbated psychotic symptoms, social support, stressful life events and medication use self-efficacy on social dysfunction in Thai patients with schizophrenia and methamphetamine misuse. Methods: Data were collected from patients with schizophrenia and methamphetamine misuse by self-report. A linear structural relationship was used to test the hypothesized path model. Results: The hypothesized model was found to fit the empirical data and explained 54% of the variance of the psychotic symptoms (X2 = 114.35, df = 92, p-value = 0.05, X2/df = 1.24, GFI = 0.96, AGFI = 0.92, CFI = 1.00, NFI = 0.99, NNFI = 0.99, RMSEA = 0.02). The highest total effect on social dysfunction was that of psychotic symptoms (0.67, p<0.05). Medication use self-efficacy had a direct effect on psychotic symptoms (-0.25, p<0.01), and social support had a direct effect on medication use self-efficacy (0.36, p<0.01). Conclusions: Psychotic symptoms and stressful life events were the significant factors with a direct influence on social dysfunction. Therefore, interventions designed to manage these factors are crucial in order to enhance social functioning in this population.

Keywords: Psychotic symptoms, methamphetamine, schizophrenia, stressful life events, social dysfunction, social support, medication use self-efficacy.

370 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds

Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid

Abstract:

A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a brutal localized force or simply by a class of sliding displacements on the bathymetry. This deformation in the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The numerical coupled method is highly efficient, accurate, well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into the problem at minimal computational cost.
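To make the flux reconstruction concrete, the sketch below evaluates the Rusanov (local Lax-Friedrichs) flux for the one-dimensional single-layer shallow water equations at a dam-break interface; the paper's two-layer, finite-element-coupled formulation is considerably more involved.

```python
# Minimal sketch of the Rusanov (local Lax-Friedrichs) numerical flux for the
# one-dimensional single-layer shallow water equations; the paper's two-layer,
# FEM-coupled formulation is more involved.
import numpy as np

G = 9.81  # gravitational acceleration

def physical_flux(U):
    h, hu = U
    u = hu / h
    return np.array([hu, hu * u + 0.5 * G * h ** 2])

def rusanov_flux(UL, UR):
    """Interface flux between left state UL=(h, hu) and right state UR."""
    uL, uR = UL[1] / UL[0], UR[1] / UR[0]
    smax = max(abs(uL) + np.sqrt(G * UL[0]), abs(uR) + np.sqrt(G * UR[0]))
    return 0.5 * (physical_flux(UL) + physical_flux(UR)) - 0.5 * smax * (UR - UL)

left = np.array([2.0, 0.0])    # dam-break: deep water at rest on the left
right = np.array([1.0, 0.0])   # shallower water at rest on the right
print("Rusanov interface flux:", rusanov_flux(left, right))
```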

Keywords: Dam-break flows, deformable beds, finite element method, finite volume method, linear elasticity, Shallow water equations.

369 Comparative Study Using Weka for Red Blood Cells Classification

Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cells and are the most intensively studied in cell biology. The lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
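The comparison in the paper is carried out in WEKA; the sketch below shows an analogous comparison in scikit-learn with an RBF-kernel SVM and a K-Nearest Neighbors classifier on a synthetic stand-in for the geometric/textural feature vectors (the RBF neural network has no direct scikit-learn equivalent and is omitted).

```python
# Analogous comparison in scikit-learn (the paper uses WEKA): an RBF-kernel SVM
# and a K-Nearest Neighbors classifier on hypothetical geometric/textural
# feature vectors. The RBF neural network from the study has no direct
# scikit-learn equivalent and is omitted here.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in feature matrix: rows = cell images, columns = extracted features.
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=2, random_state=0)

for name, clf in [("SVM (RBF kernel)", SVC(kernel="rbf", gamma="scale")),
                  ("K-Nearest Neighbors", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```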

Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.

368 Enhance Indoor Environment in Buildings and Its Effect on Improving Occupant's Health

Authors: Imad M. Assali

Abstract:

Recently, the world's main problems have been global warming and climate change, which affect both outdoor and indoor environments, especially air quality (AQ), as a result of the vast migration of people from rural areas to urban areas. Cities have therefore become more crowded and denser through an irregular population increase, and the accompanying urbanization has caused many problems for the environment, such as increasing land prices and changes in lifestyle, while new buildings are not adapted to the climate, producing uncomfortable and unhealthy indoor building conditions. Interior environments are the places that create the most intimate relationship with the user; consequently, the indoor environment quality (IEQ) of buildings has become uncomfortable and unhealthy for their occupants. The symptoms commonly associated with a poor indoor environment include itchiness, headache, fatigue, and respiratory complaints such as cough and congestion; these symptoms tend to improve over time or even disappear when people are away from the building. Therefore, designing a healthy indoor environment to fulfill human needs is the main concern for architects and interior designers. This research explores how occupant expectations and environmental attitudes may influence occupant health and satisfaction within the context of the indoor environment. In doing so, it reviews and contributes to the methods and tools used to evaluate the indoor environment quality (IEQ) components of building performance. Its main aim is to review the literature on indoor human comfort. This is followed by a review of previous papers published on human comfort. Finally, this paper provides possible approaches at the design level for healthy buildings.

Keywords: Sustainable building, indoor environment quality (IEQ), occupant's health, active system, sick building syndrome (SBS).

367 Reliability-Based Maintenance Management Methodology to Minimise Life Cycle Cost of Water Supply Networks

Authors: Mojtaba Mahmoodian, Joshua Phelan, Mehdi Shahparvari

Abstract:

With a large percentage of countries’ total infrastructure expenditure attributed to water network maintenance, it is essential to optimise maintenance strategies to rehabilitate or replace underground pipes before failure occurs. The aim of this paper is to provide water utility managers with a maintenance management approach for underground water pipes, subject to external loading and material corrosion, to give the lowest life cycle cost over a predetermined time period. This reliability-based maintenance management methodology details the optimal years for intervention, the ideal number of maintenance activities to perform before replacement and specifies feasible renewal options and intervention prioritisation to minimise the life cycle cost. The study was then extended to include feasible renewal methods by determining the structural condition index and potential for soil loss, then obtaining the failure impact rating to assist in prioritising pipe replacement. A case study on optimisation of maintenance plans for the Melbourne water pipe network is considered in this paper to evaluate the practicality of the proposed methodology. The results confirm that the suggested methodology can provide water utility managers with a reliable systematic approach to determining optimum maintenance plans for pipe networks.

Keywords: Water pipe networks, maintenance management, reliability analysis, optimum maintenance plan.

366 Effect of Muscle Energy Technique on Anterior Pelvic Tilt in Lumbar Spondylosis Patients

Authors: Enas Elsayed Abutaleb, Mohamed Taher Eldesoky, Shahenda Abd El Rasol

Abstract:

Background: Muscle Energy Techniques (MET) have been widely used by manual therapists over the past years, but limited research has validated their use, and there is limited evidence to substantiate the theories used to explain their effects. Objective: To investigate the effect of the Muscle Energy Technique (MET) on anterior pelvic tilt in patients with lumbar spondylosis. Design: Randomized controlled trial. Subjects: Thirty patients with anterior pelvic tilt of both sexes were involved, aged between 35 and 50 years, and they were divided into MET and control groups with 15 patients in each. Methods: All patients received 3 sessions/week for 4 weeks, where the study group received MET, ultrasound (US) and infrared (IR), and the control group received US and IR only. The pelvic angle was measured by a palpation meter, pain severity by the visual analogue scale, and functional disabilities by the Oswestry disability index. Results: Both groups showed significant improvement in all measured variables. The MET group was significantly better than the control group in pelvic angle, pain severity, and functional disability, with p-values of 0.001, 0.0001 and 0.0001, respectively. Conclusion and implication: The study group achieved greater improvement in all measured variables than the control group, which implies that the application of MET in combination with US and IR was more effective in improving the pelvic tilt angle, pain severity and functional disabilities than using electrotherapy only.

Keywords: Anterior pelvic tilt, lumbar spondylosis, muscle energy technique exercise, palpation meter.

365 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing the subtraction of spurious signals and non-biological fluorescent substrata. The method is a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
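As a minimal illustration of automatic biofilm-region detection by thresholding, the sketch below applies Otsu's method, one common automatic thresholding choice, to a toy counterstain image and removes small specks; the paper's additional handling of spurious signals and non-biological substrata is not reproduced.

```python
# Sketch of automatic biofilm-region detection by global thresholding of a
# counterstained FISH image. Otsu's method is used here as one common choice;
# the paper's additional steps (removing spurious signal and non-biological
# substrata) are not reproduced.
import numpy as np
from skimage import filters, morphology

def detect_biofilm(image):
    """Return a boolean mask of candidate biofilm pixels."""
    threshold = filters.threshold_otsu(image)
    mask = image > threshold
    # Drop small disconnected specks that are unlikely to be biofilm regions.
    return morphology.remove_small_objects(mask, min_size=64)

# Toy image: bright rectangular "biofilm" on a dim noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, size=(256, 256))
img[80:180, 60:200] += 0.6
mask = detect_biofilm(img)
print("Biofilm area fraction:", mask.mean())
print("Mean intensity inside biofilm:", img[mask].mean())
```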

Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.

364 Vibration Analysis of Gas Turbine SIEMENS 162MW - V94.2 Related to Iran Power Plant Industry in Fars Province

Authors: Omid A. Zargar

Abstract:

Vibration analysis of the most critical equipment is considered one of the most challenging activities in preventive maintenance. Utilities are the heart of the process in big industrial plants such as petrochemical zones. Vibration analysis methods and condition monitoring systems for this kind of equipment have developed considerably in recent years. On the other hand, there are many operating factors, such as inlet and outlet pressures and temperatures, that should be monitored. In this paper, some of the most effective concepts and techniques related to gas turbine vibration analysis are discussed. In addition, a vibration case history of a gas turbine SIEMENS 162MW - V94.2 related to the Iranian power industry in Fars province is explained. The vibration monitoring system and the machinery technical specification are introduced. Besides, absolute and relative vibration trends, turbine and compressor orbits, the Fast Fourier transform (FFT) of absolute vibrations, vibration modal analysis, turbine and compressor start-up and shut-down conditions, Bode diagrams for relative vibrations, Nyquist diagrams and waterfall or three-dimensional FFT diagrams in start-up and trip conditions are discussed with the relevant graphs. Furthermore, split resonance in gas turbines is discussed in detail. Moreover, some updated vibration monitoring systems, blade manufacturing techniques and modern damping mechanisms are discussed in this paper.

Keywords: Gas turbine, turbine compressor, vibration data collector, utility, condition monitoring, non-contact probe, Relative Vibration, Absolute Vibration, Split Resonance, Time Wave Form (TWF), Fast Fourier transform (FFT).

363 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods which are able to detect price manipulation on the Tehran stock exchange. The principal goal of the present study is to offer a model for identifying price manipulation on the Tehran stock exchange. To do so, using a separation method, a sample consisting of 397 companies listed on the Tehran stock exchange was selected, and information related to their prices and trading volumes during the years 2001 to 2009 was collected; then, by performing the runs test, skewness test and duration correlation test, the selected companies were divided into two sets of manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and the trading volume of the manipulated companies, the date at which price manipulation started was specified. In this way, using the logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, the P/E ratio and stock liquidity one year prior to price manipulation, models for forecasting price manipulation of the stocks of companies listed on the Tehran stock exchange were designed. Finally, the forecasting power of the models was studied using the data of the test set. Whereas the forecasting power of the logit model for the test set was 92.1%, that of the artificial neural network was 94.1% and that of the multiple discriminant analysis model was 90.2%; therefore, all three of the aforementioned models have high power to forecast price manipulation, and there is no considerable difference among the forecasting powers of these three models.
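The sketch below illustrates the logit step of the forecasting models described above: a logistic regression on the four firm-level features named in the abstract (company size, information clarity, P/E ratio and stock liquidity). The data are synthetic placeholders, not the Tehran stock exchange sample.

```python
# Sketch of the logit forecasting step: a logistic regression on the four
# firm-level features named in the abstract (company size, information clarity,
# P/E ratio, stock liquidity). The data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 397
X = np.column_stack([
    rng.normal(size=n),   # company size (standardised)
    rng.normal(size=n),   # information clarity proxy
    rng.normal(size=n),   # P/E ratio (standardised)
    rng.normal(size=n),   # stock liquidity
])
y = (X @ np.array([0.8, -1.2, 0.5, -0.9]) + rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
logit = LogisticRegression().fit(X_train, y_train)
print("Hold-out forecasting accuracy:", logit.score(X_test, y_test))
```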

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity

362 Real-time Haptic Modeling and Simulation for Prosthetic Insertion

Authors: Catherine A. Todd, Fazel Naghdy

Abstract:

In this work a surgical simulator is produced which enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data as published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during an insertion. Output force profiles are acquired from the insertion studies conducted in the work to validate the haptic model. The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with focus on simulator design, development, optimization and validation. The techniques may be transferrable to other medical applications that involve prosthetic device insertions where user vision is obstructed.

Keywords: Haptic modeling, medical device insertion, real-time visualization of prosthetic implantation, surgical simulation.

361 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN) which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, NC, EMCI, and LMCI. Using 10-fold cross validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and recall of 80.51%, supporting the more natural and promising multiclass classification.
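The sketch below shows the general shape of such an experiment: an XGBoost multiclass classifier trained on feature vectors containing missing values (which XGBoost routes along learned default branches) and evaluated with 10-fold cross-validation. The features and four-class labels are synthetic stand-ins for the ADNI measurements, so the printed accuracy is not meaningful.

```python
# Sketch of multiclass classification with XGBoost in the presence of missing
# values (NaNs are routed along learned default branches), evaluated with
# 10-fold cross-validation. Features and the four-class labels are synthetic
# stand-ins for the ADNI measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1631, 20))
X[rng.random(X.shape) < 0.28] = np.nan          # ~28% missing, as in the study
y = rng.integers(0, 4, size=1631)               # classes: CN, EMCI, LMCI, AD

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```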

Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.

360 Co-Disposal of Coal Ash with Mine Tailings in Surface Paste Disposal Practices: A Gold Mining Case Study

Authors: M. L. Dinis, M. C. Vila, A. Fiúza, A. Futuro, C. Nunes

Abstract:

The present paper describes the study of paste tailings prepared in the laboratory using gold tailings, produced in a Finnish gold mine, with the incorporation of coal ash. Natural leaching tests were conducted with the original materials (tailings, fly and bottom ashes) and also with paste mixtures that were prepared with different percentages of tailings and ashes. After leaching, the solid wastes were physically and chemically characterized and the results were compared to those selected as blank – the unleached samples. The tailings and the coal ash, as well as the prepared mixtures, were characterized, in addition to the textural parameters, by the following measurements: grain size distribution, chemical composition and pH. Mixtures were also tested in order to characterize their mechanical behavior by measuring the flexural strength, the compressive strength and the consistency. The original tailing samples presented an alkaline pH because, during their processing, they had previously been submitted to pressure oxidation with destruction of the sulfides. Therefore, it was not possible to ascertain the effect of the coal ashes on acid mine drainage. However, it was possible to verify that the paste reactivity was affected mostly by the bottom ash and that the tailings blended with bottom ash present lower mechanical strength than when blended with a combination of fly and bottom ash. Surface paste disposal offers an attractive alternative to traditional methods, in addition to the environmental benefits of incorporating large-volume wastes (e.g. bottom ash). However, a comprehensive characterization of the paste mixtures is crucial to optimize paste design in order to enhance engineering and environmental properties.

Keywords: Coal ash, gold tailings, paste, surface disposal.

359 A Numerical Strategy to Design Maneuverable Micro-Biomedical Swimming Robots Based on Biomimetic Flagellar Propulsion

Authors: Arash Taheri, Meysam Mohammadi-Amin, Seyed Hossein Moosavy

Abstract:

Medical applications are among the most impactful areas of microrobotics. The ultimate goal of medical microrobots is to reach currently inaccessible areas of the human body and carry out a host of complex operations such as minimally invasive surgery (MIS), highly localized drug delivery, and screening for diseases at their very early stages. Miniature, safe and efficient propulsion systems hold the key to maturing this technology, but they pose significant challenges. A new type of propulsion developed recently uses a multi-flagella architecture inspired by the motility mechanism of prokaryotic microorganisms. There is a lack of efficient methods for designing this type of propulsion system. The goal of this paper is to overcome this lack; to this end, a numerical strategy is proposed to design multi-flagella propulsion systems. The strategy is based on the implementation of the regularized stokeslet and rotlet theory, resistive force theory (RFT) and a new approach of "local corrected velocity". The effects of the shape parameters and angular velocities of each flagellum on the overall flow field and on the robot's net forces and moments are considered. Then a multi-layer perceptron artificial neural network is designed and employed to adjust the angular velocities of the motors for propulsion control. The proposed method is applied successfully to a sample configuration, and useful demonstrative results are obtained.

Keywords: Artificial Neural Network, Biomimetic Microrobots, Flagellar Propulsion, Swimming Robots.

358 Case Study of the Roma Tomato Distribution Chain: A Dynamic Interface for an Agricultural Enterprise in Mexico

Authors: Ernesto A. Lagarda-Leyva, Manuel A. Valenzuela L., José G. Oshima C., Arnulfo A. Naranjo-Flores

Abstract:

From August to December of 2016, a diagnostic and strategic planning study was carried out on the supply chain of the company Agropecuaria GABO S.A. de C.V. The final product of the study was the development of the strategic plan and a project portfolio to meet the demands of the three links in the supply chain of the Roma tomato exported annually to the United States of America. In this project, the strategic objective of ensuring the proper handling of the product was selected and one of the goals associated with this was the employment of quantitative methods to support decision making. Considering the antecedents, the objective of this case study was to develop a model to analyze the behavioral dynamics in the distribution chain, from the logistics of storage and shipment of Roma tomato in 81-case pallets (11.5 kg per case), to the two pre-cooling rooms and eventual loading onto transports, seeking to reduce the bottleneck and the associated costs by means of a dynamic interface. The methodology used was that of system dynamics, considering four phases that were adapted to the purpose of the study: 1) the conceptualization phase; 2) the formulation phase; 3) the evaluation phase; and 4) the communication phase. The main practical conclusions lead to the possibility of reducing both the bottlenecks in the cooling rooms and the costs by simulating scenarios and modifying certain policies. Furthermore, the creation of the dynamic interface between the model and the stakeholders was achieved by generating interaction with buttons and simple instructions that allow making modifications and observing diverse behaviors.

Keywords: Agrilogistics, distribution, scenarios, system dynamics.

357 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is supposed to be the most accurate method to predict the seismic demand of structures. On the other hand, the computational time required by this method to achieve a result is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of the seismic analysis of structures makes the optimization algorithms more practical. Approximate methods inevitably produce some amount of error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated by calculation of the modal displacements of the basic structure (in which the modal displacement has been calculated). Shear steel structures are selected as case studies. The error of applying the introduced method is calculated by comparison of the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (classified in view of the time of peak ground acceleration).
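For reference, the sketch below combines peak modal responses with the SRSS rule and with the CQC rule using the equal-damping form of the modal correlation coefficient; the frequencies, damping ratio and peak modal responses are illustrative numbers only.

```python
# Sketch of combining peak modal responses with SRSS and with the CQC rule
# (equal modal damping form of the correlation coefficient). Frequencies,
# damping and peak modal responses are illustrative numbers only.
import numpy as np

def srss(peaks):
    peaks = np.asarray(peaks)
    return np.sqrt(np.sum(peaks ** 2))

def cqc(peaks, omegas, zeta=0.05):
    peaks, omegas = np.asarray(peaks), np.asarray(omegas)
    beta = omegas[None, :] / omegas[:, None]          # frequency ratios w_j / w_i
    rho = (8 * zeta ** 2 * (1 + beta) * beta ** 1.5) / (
        (1 - beta ** 2) ** 2 + 4 * zeta ** 2 * beta * (1 + beta) ** 2)
    return np.sqrt(peaks @ rho @ peaks)

peak_modal_disp = [0.12, 0.05, 0.02]      # peak response in each mode (m)
omega = [6.0, 15.0, 27.0]                 # modal circular frequencies (rad/s)
print("SRSS estimate:", srss(peak_modal_disp))
print("CQC estimate: ", cqc(peak_modal_disp, omega))
```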

Keywords: Time history dynamic analysis, basic modal displacement, earthquake induced demands, shear steel structures.

356 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used to detect DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conduct an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and find that 26.0% of the apps suffer from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace expands FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

355 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and through applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal to noise ratio. The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both chroma components are normally downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled compared to downsampling the chrominance red in JPEG compression, but that the peak signal to noise ratio is higher when the chrominance red is downsampled compared to downsampling the chrominance blue. In particular, we use hats.jpg as a demonstration of JPEG compression using a low pass filter and show that the image is compressed with barely any visual differences with both methods.
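The two evaluation metrics used in the comparison can be computed as in the sketch below: PSNR between the original and reconstructed images, and the ratio of original to compressed file sizes; the toy image and byte counts are placeholders.

```python
# Sketch of the two evaluation metrics used in the comparison: peak signal to
# noise ratio (PSNR) between the original and reconstructed image, and the
# compression ratio between original and compressed file sizes.
import numpy as np

def psnr(original, reconstructed, max_value=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

# Toy example with a synthetic image and a slightly perturbed reconstruction.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
rec = np.clip(img + rng.normal(0, 3, img.shape), 0, 255).astype(np.uint8)
print("PSNR (dB):", round(psnr(img, rec), 2))
print("Compression ratio:", compression_ratio(12288, 2048))
```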

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

354 Evaluation of Chiller Power Consumption Using Grey Prediction

Authors: Tien-Shun Chan, Yung-Chung Chang, Cheng-Yu Chu, Wen-Hui Chen, Yuan-Lin Chen, Shun-Chong Wang, Chang-Chun Wang

Abstract:

98% of the energy needed in Taiwan is imported. The prices of petroleum and electricity have been increasing. In addition, facility capacity, the amount of electricity generated, the amount of electricity consumed and the number of Taiwan Power Company customers have continued to increase. For these reasons, energy conservation has become an important topic. In the past, linear regression was used to establish power consumption models for chillers. In this study, grey prediction is used to evaluate the power consumption of a chiller so as to lower the total power consumption at peak load (so that the relevant power providers do not need to keep increasing their power generation capacity and facility capacity). In grey prediction, only a few numerical values (at least four) are needed to establish the power consumption models for chillers. If the PLR, the temperatures of the supply and return chilled water, and the temperatures of the supply and return cooling water are taken into consideration, quite accurate results (with accuracy close to 99% for short-term predictions) may be obtained. Through such methods, we can predict whether the power consumption at peak load will exceed the contract power capacity signed between the corresponding entity and Taiwan Power Company. If the power consumption at peak load exceeds the power demand, the temperature of the supply chilled water may be adjusted so as to reduce the PLR and hence lower the power consumption.
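As an illustration of the grey prediction step, the sketch below fits a GM(1,1) model, which needs only a handful of historical values (at least four), and forecasts the next points; the input series is a hypothetical stand-in for measured chiller power consumption.

```python
# Sketch of the GM(1,1) grey prediction model: at least four historical values
# of chiller power consumption are enough to fit the model. The input series
# below is an illustrative placeholder.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast the next `steps` values."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # developing/grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # accumulated forecast
    x0_hat = np.diff(x1_hat, prepend=0.0)               # de-accumulate
    return x0_hat[len(x0):]

power_kw = [812, 826, 841, 855, 870]   # hypothetical hourly chiller demand
print("Next-step forecast (kW):", gm11_forecast(power_kw, steps=2))
```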

Keywords: Grey system theory, grey prediction, chiller.
