Search results for: data acquisition.
650 Modeling Reaction Time in Car-Following Behaviour Based on Human Factors
Authors: Atif Mehmood, Said M. Easa
Abstract:
This paper develops driver reaction-time models for car-following analysis based on human factors. The reaction time was classified as brake-reaction time (BRT) and acceleration/deceleration reaction time (ADRT). The BRT occurs when the lead vehicle is braking and its brake light is on, while the ADRT occurs when the driver reacts to adjust his/her speed using the gas pedal only. The study evaluates the effect of driver characteristics and traffic kinematic conditions on the driver reaction time in a car-following environment. The kinematic conditions introduced urgency and expectancy based on the braking behaviour of the lead vehicle at different speeds and spacings. The kinematic conditions were used for evaluating the BRT and are classified as normal, surprised, and stationary. Data were collected on a driving simulator integrated into a real car and included the BRT and ADRT (as dependent variables) and driver’s age, gender, driving experience, driving intensity (driving hours per week), vehicle speed, and spacing (as independent variables). The results showed that there was a significant difference in the BRT under the normal, surprised, and stationary scenarios and supported the hypothesis that both urgency and expectancy had significant effects on BRT. Driver’s age, gender, speed, and spacing were found to be significant variables for the BRT in all scenarios. The results also showed that driver’s age and gender were significant variables for the ADRT. The research presented in this paper is part of a larger project to develop a driver-sensitive in-vehicle rear-end collision warning system.
Keywords: Brake reaction time, car-following, human factors, modeling.
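As an illustration of the kind of reaction-time model described above, the sketch below fits an ordinary least squares regression of BRT on driver and kinematic variables in Python. The linear model form, the column names and the sample values are illustrative assumptions; the abstract does not specify the model form or report the data.

import pandas as pd
import statsmodels.api as sm

# Hypothetical observations: BRT (s) with driver age, gender (0 = male, 1 = female),
# vehicle speed (km/h) and spacing to the lead vehicle (m).
data = pd.DataFrame({
    "brt":     [0.82, 1.10, 0.95, 1.35, 0.78, 1.20],
    "age":     [22, 45, 31, 60, 27, 52],
    "gender":  [0, 1, 0, 1, 1, 0],
    "speed":   [60, 80, 70, 50, 90, 65],
    "spacing": [25, 40, 30, 20, 45, 35],
})

X = sm.add_constant(data[["age", "gender", "speed", "spacing"]])
model = sm.OLS(data["brt"], X).fit()
print(model.params)    # fitted coefficients
print(model.pvalues)   # significance of each variable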
649 The Determination of Stress Experienced by Nursing Undergraduate Students during Their Education
Authors: Gülden Küçükakça, Şefika Dilek Güven, Rahşan Kolutek, Seçil Taylan
Abstract:
Objective: Nursing students face stress factors affecting academic performance and quality of life from the first moments of their educational life. Stress causes health problems in students such as physical, psycho-social, and behavioral disorders and might damage the formation of professional identity by decreasing the efficiency of education. In addition to determining the stress experienced by nursing students during their education, this study aimed to help review theoretical and clinical education settings for bringing the stress of nursing students to a positive level and to raise awareness among educators concerning their own professional behaviors. Methods: The study was conducted with 315 students studying at the nursing department of Semra and Vefa Küçük Health High School, Nevşehir Hacı Bektaş Veli University in the academic year of 2015-2016 who agreed to participate in the study. A “Personal Information Form” prepared by the researchers upon the literature review and the “Nursing Education Stress Scale (NESS)” were used in this study. Data were assessed with analysis of variance and correlation analysis. Results: The mean NESS Scale score of the nursing students was estimated to be 66.46±16.08 points. Conclusions: As a result of this study, the stress level experienced by nursing undergraduate students during their education was determined to be high. In accordance with this result, it can be recommended to determine the sources of stress experienced by nursing undergraduate students during their education and to develop approaches to eliminate these stress sources.
Keywords: Stress, nursing education, nursing student, nursing education stress.
648 Mathematics Anxiety among Male and Female Students
Authors: Wern Lin Yeo, Choo Kim Tan, Sook Ling Lew
Abstract:
The purpose of this study is to determine the relationship of anxiety level between male and female undergraduates at a private university in Malaysia. A convenience sampling method was used, in which the students were selected based on the grouping assigned by the faculty. A total of 214 undergraduates who had registered for the probability courses participated in this study. The Mathematics Anxiety Rating Scale (MARS) was the instrument used to determine students’ anxiety level towards probability. Reliability and validity of the instrument were checked before the major study was conducted. In the major study, students were given a briefing about the study. Participation was voluntary, and students were given a consent form to indicate whether they agreed to participate. A duration of two weeks was given for students to complete the online questionnaire. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) to determine the level of anxiety. There were three anxiety levels, i.e., low, average and high. Students’ anxiety level was determined by comparing their scores with the mean and standard deviation: scores more than one standard deviation below the mean indicated low anxiety, scores within one standard deviation of the mean indicated average anxiety, and scores more than one standard deviation above the mean indicated high anxiety. Results showed that both genders had an average anxiety level. Across the low, average and high anxiety levels, the frequency of males was found to be higher than that of females. Accordingly, the mean value obtained for males (M = 3.62) was higher than that for females (M = 3.42). For the difference in anxiety level between the genders to be significant, the p-value should be less than .05. The p-value obtained in this study was .117, which is greater than .05. Thus, there was no significant difference in anxiety level between the genders. In other words, there was no relationship between anxiety level and gender.
Keywords: Anxiety level, gender, mathematics anxiety, probability and statistics.
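A minimal sketch of the scoring rule described above: scores are labelled low, average or high relative to the sample mean and one standard deviation. The scores below are made up for illustration.

import numpy as np

scores = np.array([2.1, 3.0, 3.4, 3.6, 3.9, 4.5, 2.8, 3.3])  # hypothetical MARS scores
mean, sd = scores.mean(), scores.std(ddof=1)

def anxiety_level(score, mean, sd):
    # classify a score relative to the sample mean and one standard deviation
    if score < mean - sd:
        return "low"
    if score > mean + sd:
        return "high"
    return "average"

for s in scores:
    print(s, anxiety_level(s, mean, sd))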
647 Investigating the Effectiveness of a 3D Printed Composite Mold
Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg
Abstract:
In composite manufacturing, the fabrication of tooling and tooling maintenance contributes to a large portion of the total cost. However, as the applications of composite materials continue to increase, there is also a growing demand for more tooling. The demand for more tooling places heavy emphasis on the industry’s ability to fabricate high quality tools while maintaining the tool’s cost effectiveness. One of the popular techniques of tool fabrication currently being developed utilizes additive manufacturing technology known as 3D printing. The popularity of 3D printing is due to 3D printing’s ability to maintain low material waste, low cost, and quick fabrication time. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite mold. A steel valve cover from an aircraft reciprocating engine was modeled utilizing 3D scanning and computer-aided design (CAD) to create a 3D printed composite mold. The mold was used to fabricate carbon fiber versions of the aircraft reciprocating engine valve cover. The carbon fiber valve covers were evaluated for dimensional accuracy and quality while the 3D printed composite mold was evaluated for durability and dimensional stability. The data collected from this study provided valuable information in the understanding of 3D printed composite molds, potential improvements for the molds, and considerations for future tooling design.
Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.
646 MinRoot and CMesh: Interconnection Architectures for Network-on-Chip Systems
Authors: Mohammad Ali Jabraeil Jamali, Ahmad Khademzadeh
Abstract:
The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce the area with minimal degradation of the latency of the system. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, Butterfly-Fat Tree (BFT) and SPIN. It is shown that the Cluster Mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption with only negligible area overhead and complexity over existing architectures. In fact, compared to the basic NoC topologies, the CMesh and MinRoot schemes provide substantial savings in area as well, because they require fewer routers. The simulation results show that the CMesh and MinRoot networks outperform MESH, BFT and SPIN in the main performance metrics.
Keywords: MinRoot, CMesh, NoC, Topology, Performance Evaluation
645 Seismic Vulnerability Assessment of Masonry Buildings in Seismic Prone Regions: The Case of Annaba City, Algeria
Authors: Allaeddine Athmani, Abdelhacine Gouasmia, Tiago Ferreira, Romeu Vicente
Abstract:
Seismic vulnerability assessment of masonry buildings is a fundamental issue even for moderate to low seismic hazard regions. This fact is even more important when dealing with old structures such as those located in Annaba city (Algeria), the majority of which date back to the French colonial era from 1830. This category of buildings is at high risk due to their highly degraded state, heterogeneous materials and intrusive modifications to structural and non-structural elements. Furthermore, they usually shelter a dense population, which is exposed to such risk. In order to undertake suitable seismic risk mitigation strategies and a reinforcement process for such structures, it is essential to estimate their seismic resistance capacity at a large scale. In this sense, two seismic vulnerability index and damage estimation methods have been adapted and applied to a pilot-scale building area located in the moderate seismic hazard region of Annaba city: the first one based on the EMS-98 building typologies, and the second one derived from the Italian GNDT approach. To perform this task, the authors took advantage of an existing data survey previously performed for other purposes. The results obtained from the application of the two methods were integrated and compared using a geographic information system (GIS) tool, with the ultimate goal of supporting the city council of Annaba in the implementation of risk mitigation and emergency planning strategies.
Keywords: Annaba city, EMS-98 concept, GNDT method, old city center, seismic vulnerability index, unreinforced masonry buildings.
644 Comparative Study Using Weka for Red Blood Cells Classification
Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBC) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as “anemia”. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source tool that consists of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape. The second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We have provided an evaluation based on applying these classification methods to our RBC image dataset, which was obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for the Support Vector Machine, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.
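For readers unfamiliar with this kind of workflow, the sketch below compares two of the listed classifiers (an RBF-kernel SVM and K-Nearest Neighbors) by cross-validation, written with scikit-learn rather than WEKA; WEKA's RBF network has no direct scikit-learn analogue and is omitted. The feature matrix is random placeholder data, not the Serdang Hospital image features.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))       # stand-in for geometrical + textural features per cell
y = rng.integers(0, 2, size=200)    # 0 = normal, 1 = abnormal (anemic)

for name, clf in [("SVM (RBF kernel)", SVC(kernel="rbf")),
                  ("K-Nearest Neighbors", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(name, "mean accuracy:", round(acc, 2))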
643 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes, which are capable of protecting the data onboard the satellites. This paper is aimed towards detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of the Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and the Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version of the Hamming Code has proved to be fast, with a checking time of 5.669 nanoseconds, a relatively higher code rate and a lower bit overhead compared to the other versions, and it can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.
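To make the parity-bit idea concrete, the sketch below builds a 16-bit extended Hamming (SEC-DED) codeword from 11 data bits in Python: parity bits sit at positions 1, 2, 4 and 8, and an overall parity bit at position 0 distinguishes single from double errors. This is a generic illustration under those assumptions, not the authors' MATLAB implementation.

def hamming_secded_encode(data_bits):
    # place 11 data bits into positions 1..15 that are not powers of two
    assert len(data_bits) == 11
    code = [0] * 16
    data = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):
            code[pos] = next(data)
    # each Hamming parity bit covers the positions whose index has that bit set
    for p in (1, 2, 4, 8):
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[0] = sum(code) % 2          # overall parity for double-error detection
    return code

def check(code):
    # returns (syndrome, overall parity); the syndrome points at a single flipped bit
    s = sum(p for p in (1, 2, 4, 8)
            if sum(code[i] for i in range(1, 16) if i & p) % 2)
    return s, sum(code) % 2

word = hamming_secded_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
word[6] ^= 1                          # inject a single-bit error
s, overall = check(word)
if overall == 1:                      # odd overall parity: correctable single error
    word[s] ^= 1
elif s != 0:                          # even overall parity, non-zero syndrome: double error
    print("double-bit error detected")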
642 Evaluation of Eating Habits among Portuguese University Students: A Preliminary Study
Authors: T. H. Rodrigues, Maria J. Reis Lima, R. P. F. Guiné, E. Teixeira de Lemos
Abstract:
Portuguese diet has been gradually diverging from the basic principles of healthy eating, leading to an unbalanced dietary pattern which, associated with an increasingly sedentary lifestyle, has a negative impact on public health. The main objective of this work was to characterize the dietary habits of university students in Viseu, Portugal. The study consisted of a sample of 80 university students, aged between 18 and 28 years. Anthropometric data (weight (kg) and height (m)) were collected and the Body Mass Index (BMI) was calculated. The dietary habits were assessed through a three-day food record, and the software Medpoint was used to convert food into energy and nutrients. The results showed that students present a normal body mass index. Female university students ate a higher number of daily meals than male students, and the latter skipped breakfast more frequently. The values of average daily intake of energy, macronutrients and calcium were higher in males. The food pattern was characterized by a predominant consumption of meat, cereals, fats and sugar. Dietary intake of dairy products, fruits, vegetables and legumes does not meet the recommendations, revealing inadequate food habits such as a hypoglycemic, hyperprotein and hyperlipidemic diet. Our findings suggest that preventive interventions should focus on promoting healthy eating habits and physical activity in adulthood.
Keywords: Food habits, BMI, fortified foods, nutritional deficiencies, university students.
641 Differences in Stress and Total Deformation Due to Muscle Attachment to the Femur
Authors: Jeong-Woo Seo, Jin-Seung Choi, Dong-Won Kang, Jae-Hyuk Bae, Gye-Rae Tack
Abstract:
To achieve accurate and precise results in finite element analysis (FEA) of bones, it is important to represent the load/boundary conditions as closely as possible to those of the human body, such as the bone properties, the type and force of the muscles, the contact force of the joints, and the location of the muscle attachment. In this study, the differences in Von-Mises stress and total deformation were compared between Case 1, which represents the actual anatomical form of the muscle attachment to the femur when the same muscle force was applied, and Case 2, which gives a simplified representation of the attachment location. An inverse dynamic musculoskeletal model was simulated using data from an actual walking experiment to complement the accuracy of the muscle force, the input value of the FEA. The FEA using the muscle forces calculated through the simulation showed that the maximum Von-Mises stress and the maximum total deformation in Case 2 were underestimated by 8.42% and 6.29%, respectively, compared to Case 1. The torsion energy and bending moment at each location of the femur arise through the stress components. Due to the geometrical/morphological feature of the femur as a long bone, when the stress distribution is wide, as in Case 1, a greater Von-Mises stress and total deformation are expected from the sum of the stress components. More accurate results can be achieved only when the muscle force, the attachment location and the attachment form in the FEA of bones are the same as those in the actual anatomical condition under the various movement conditions of the human body.
Keywords: Musculoskeletal modeling, Finite element analysis, Von-Mises stress, Deformation, Muscle attachment.
640 Behavioral Response of Dogs to Interior Environment: An Exploratory Study on Design Parameters for Designing Dog Boarding Centers in Indian Context
Authors: M. R. Akshaya, Veena Rao
Abstract:
The pet population in India is increasing phenomenally owing to the changes in urban lifestyle, with an increasing number of single professionals, single parents, delayed parenthood, etc. Animal companionship as a means of reducing stress levels, deriving emotional support, and the unconditional love provided by dogs are a few reasons attributed to increasing pet ownership. The consequence is the boom in pet care products and dog care centers catering to the different requirements of rearing pets. Dog care centers, quite popular in tier 1 metros of India, cater to the requirements of dog owners by providing space for the dogs in the absence of the owner. However, it is often reported that the absence of the owner leads to destructive and exploratory behavior issues, the main one being anxiety disorders. In the above context, it becomes imperative for a designer to design dog boarding centers that help in reducing separation anxiety in dogs, keeping in mind the different interior design parameters. An exploratory study with focus group discussions is employed, involving a group of dog owners, behaviorists, proprietors of day care as well as boarding centers, and veterinarians, to understand their perception of the significance of the different interior parameters of color, texture, ventilation, aroma therapy and acoustics as means of reducing the stress levels in dogs sent to the boarding centers. The data collected are organized as thematic networks, thus enabling the listing of the interior design parameters that need to be considered in designing dog boarding centers.
Keywords: Behavioral response, design parameters, dog boarding centers, interior environment.
639 Use of Social Media in PR: A Change of Trend
Authors: Tang Mui Joo, Chan Eang Teng
Abstract:
The use of social media has become more defined. It has been widely used for the purpose of business, and more marketers are now using social media as a tool to enhance their businesses. On the other hand, more and more people are spending their time on mobile apps to engage with social media sites like YouTube, Facebook, Twitter and others. Social media has even become common in Public Relations (PR). It has become the number one platform for creating and sharing content. In view of this, social media has changed the rules in PR, where it brings new challenges and opportunities to the profession. Although corporate websites, chat-rooms, email customer response facilities and electronic news release distribution are now viewed as standard aspects of PR practice, many PR practitioners are still struggling with the impact of new media, even though the implementation of social media is potentially reducing the cost of communication, to the point that PR practitioners are not fully embracing new media, are ill-equipped to do so and have a fear of the technology. Nevertheless, social media has become a new style of communication that is characterized by conversation and community. It has become a platform that allows individuals to interact with one another and build relationships with each other. Therefore, in the business world, consumers are able to interact with those companies that have joined any social media. Based on their experiences with social networking site interactions, they are also exposed to personal interaction while communicating. This paper studies the impact of social media on PR. It explores the potential changes in PR practices in a developing country like Malaysia and reflects on how PR practitioners are actually using social media in the country. The paper is based on two theories in the development of its research foundation: Media Ecology Theory supports the impact of and changes to PR, while Social Penetration Theory reflects on how social media is used among PR practitioners. This research uses a survey of PR practitioners for its data collection. The results have shown that PR professionals value social media more than they actually use it and that the way organizations communicate has changed due to the transformation brought about by social media.
Keywords: New media, social media, PR.
638 Investigating the Viability of Small-Scale Rapid Alloy Prototyping of Interstitial Free Steels
Authors: Talal S. Abdullah, Shahin Mehraban, Geraint Lodwig, Nicholas P. Lavery
Abstract:
The defining property of Interstitial Free (IF) steels is formability, comprehensively measured using the Lankford coefficient (r-value) on uniaxial tensile test data. The contributing factors supporting this feature are grain size, orientation, and elemental additions. The processes that effectively modulate these factors are the casting procedure, hot rolling, and heat treatment. An existing methodology is well practised in the steel industry; however, large-scale production and experimentation consume significant proportions of time, money, and material. Introducing small-scale rapid alloy prototyping (RAP) as an alternative process would considerably reduce the drawbacks relative to standard practices. The aim is to fine-tune the existing fundamental procedures implemented in the industrial plant to adapt them to the RAP route. IF material is remelted in the 80-gram coil induction melting (CIM) glovebox. To produce small grains, maximum deformation must be induced in the cast material during the hot rolling process. The rolled strip must then satisfy the polycrystalline behaviour of the bulk material by displaying a resemblance in microstructure, hardness, and formability to that of the literature and actual plant steel. A successful outcome of this work is that small-scale RAP can achieve target compositions with similar microstructures and statistically consistent mechanical properties, which complements and accelerates the development of novel steel grades.
Keywords: Interstitial free, miniaturized tensile specimen, plastic anisotropy, rapid alloy prototyping.
637 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by including Tube Current Modulation (TCM) in a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those measured with the CT-Expo software. The size specific dose estimation (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom’s effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that, when including CARE Dose 4D, ED was lowered by 48.35% and 51.51% using DLP and CT-Expo, respectively. In addition, ED ranges between 7.01 mSv and 6.6 mSv for the standard protocol, while it ranges between 3.62 mSv and 3.2 mSv with TCM. Similar results are found for SSDE: the dose was higher without TCM (16.25 mGy) and was lower by 48.8% including TCM. The SNR values calculated were significantly different (p=0.03<0.05). The highest one was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study proves the potential of the TCM technique for SSDE and ED reduction while conserving image quality with a high diagnostic reference level for thoracic CT examinations.
Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.
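The two dose quantities above reduce to simple multiplications; the sketch below shows them in Python. The conversion coefficients (k for a chest protocol, f for the effective diameter) are illustrative assumptions; in practice they are taken from published tables rather than from this snippet.

dlp = 220.0        # dose-length product reported by the scanner (mGy*cm), assumed value
ctdi_vol = 6.5     # volume CT dose index (mGy), assumed value

k_chest = 0.014    # assumed ED conversion factor for chest CT (mSv per mGy*cm)
f_size = 1.25      # assumed SSDE conversion factor for the phantom's effective diameter

effective_dose = dlp * k_chest   # ED = DLP x k
ssde = ctdi_vol * f_size         # SSDE = CTDIvol x f

print("ED   =", round(effective_dose, 2), "mSv")
print("SSDE =", round(ssde, 2), "mGy")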
636 Analyzing the Impact of Spatio-Temporal Climate Variations on the Rice Crop Calendar in Pakistan
Authors: Muhammad Imran, Iqra Basit, Mobushir Riaz Khan, Sajid Rasheed Ahmad
Abstract:
The present study investigates the space-time impact of climate change on the rice crop calendar in tropical Gujranwala, Pakistan. The climate change impact was quantified through the climatic variables, whereas the existing calendar of the rice crop was compared with the phenological stages of the crop, depicted through the time series of the Normalized Difference Vegetation Index (NDVI) derived from Landsat data for the decade 2005-2015. Local maxima were applied to the NDVI time series to compute the rice phenological stages. Panel models with fixed and cross-section fixed effects were used to establish the relation between the climatic parameters and the time series of NDVI across villages and across rice growing periods. Results show that the climatic parameters have a significant impact on the rice crop calendar. Moreover, the fixed effect model is a significant improvement over the cross-sectional fixed effect models (R-squared equal to 0.673 vs. 0.0338). We conclude that high inter-annual variability of climatic variables causes high variability of NDVI, and thus, a shift in the rice crop calendar. Moreover, the inter-annual (temporal) variability of the rice crop calendar is high compared to the inter-village (spatial) variability. We suggest that local rice farmers adapt to this change in the rice crop calendar.
Keywords: Landsat NDVI, panel models, temperature, rainfall.
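A minimal sketch of the local-maxima step described above, assuming a per-village NDVI series: peaks in the series stand in for the peak-growth point of each rice season. The values here are synthetic, not Landsat observations.

import numpy as np
from scipy.signal import find_peaks

ndvi = np.array([0.21, 0.25, 0.34, 0.52, 0.68, 0.74, 0.61, 0.43,
                 0.27, 0.24, 0.31, 0.49, 0.66, 0.71, 0.58, 0.40])

# require a minimum prominence so noise is not registered as a phenological peak
peaks, _ = find_peaks(ndvi, prominence=0.2)
print("peak-growth observations at indices:", peaks)
print("peak NDVI values:", ndvi[peaks])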
635 Statistical Assessment of Models for Determination of Soil-Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil is dependent on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have laid a major focus on developing empirical equations for predicting the SWCC, with a large number of empirical models suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models used for each particular soil type for better SWCC estimation. It is expected that better estimation of the SWCC would be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of the SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of the SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions also exhibit compatibility with the samples evaluated in this study, which range from low to high soil water content.
Keywords: Soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil.
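As an illustration of the curve-fitting step, the sketch below fits the Brooks and Corey form of the SWCC to a small set of suction/water-content points with SciPy. The data points and initial guesses are made up; theta_r, theta_s, the air-entry suction psi_b and lambda are the fitted parameters.

import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, theta_r, theta_s, psi_b, lam):
    # volumetric water content as a function of suction psi (same units as psi_b)
    se = np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)
    return theta_r + (theta_s - theta_r) * se

psi = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])         # suction (kPa), synthetic
theta = np.array([0.42, 0.41, 0.33, 0.22, 0.14, 0.09, 0.06])   # water content, synthetic

params, _ = curve_fit(brooks_corey, psi, theta, p0=[0.05, 0.42, 2.0, 0.8], maxfev=10000)
print("theta_r, theta_s, psi_b, lambda =", params)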
634 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing subtraction of spurious signals and non-biological fluorescent substrata. The method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
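A simplified sketch of the thresholding idea, assuming a single counterstained channel: Otsu's threshold separates the biofilm region from the background, and small spurious objects are removed before intensities are extracted. The synthetic image stands in for a FISH micrograph; the authors' pipeline is more elaborate and semi-automated.

import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects

rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, size=(256, 256))
image[80:180, 60:200] += 0.5              # synthetic bright "biofilm" region
image = gaussian(image, sigma=2)          # smooth, mimicking optical blur

mask = image > threshold_otsu(image)      # global Otsu threshold
mask = remove_small_objects(mask, min_size=64)   # drop spurious bright specks

print("biofilm pixels:", int(mask.sum()))
print("mean intensity inside biofilm:", float(image[mask].mean()))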
633 Vibration Analysis of Gas Turbine SIEMENS 162MW - V94.2 Related to Iran Power Plant Industry in Fars Province
Authors: Omid A. Zargar
Abstract:
Vibration analysis of the most critical equipment is considered one of the most challenging activities in preventive maintenance. Utilities are the heart of the process in big industrial plants like petrochemical zones. Vibration analysis methods and condition monitoring systems for these kinds of equipment have developed considerably in recent years. On the other hand, there are many operational factors, like inlet and outlet pressures and temperatures, that should be monitored. In this paper, some of the most effective concepts and techniques related to gas turbine vibration analysis are discussed. In addition, a gas turbine SIEMENS 162MW - V94.2 vibration case history related to the Iranian power industry in Fars province is explained. The vibration monitoring system and machinery technical specification are introduced. Besides, absolute and relative vibration trends, turbine and compressor orbits, the Fast Fourier transform (FFT) of absolute vibrations, vibration modal analysis, turbine and compressor start-up and shut-down conditions, Bode diagrams for relative vibrations, Nyquist diagrams and waterfall or three-dimensional FFT diagrams in start-up and trip conditions are discussed with the relevant graphs. Furthermore, split resonance in gas turbines is discussed in detail. Moreover, some updated vibration monitoring systems, blade manufacturing techniques and modern damping mechanisms are discussed in this paper.
Keywords: Gas turbine, turbine compressor, vibration data collector, utility, condition monitoring, non-contact probe, Relative Vibration, Absolute Vibration, Split Resonance, Time Wave Form (TWF), Fast Fourier transform (FFT).
632 Forecasting Stock Price Manipulation in Capital Market
Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie
Abstract:
The aim of the article is to extend and develop econometric and network-structure-based methods that are able to detect price manipulation in the Tehran Stock Exchange. The principal goal of the present study is to offer a model for detecting price manipulation in the Tehran Stock Exchange. To do so, by applying a separation method, a sample consisting of 397 companies listed on the Tehran Stock Exchange was selected, and information related to their prices and trading volumes during the years 2001 to 2009 was collected; then, by performing the runs test, skewness test and duration correlation test, the selected companies were divided into two sets of manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and trading volume of the manipulated companies, the starting date of price manipulation was identified. Using the logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, clarity of information, P/E ratio and stock liquidity one year prior to price manipulation, models for forecasting price manipulation of stocks of companies listed on the Tehran Stock Exchange were designed. Finally, the forecasting power of the models was studied using the test set data. The forecasting accuracy of the logit model on the test set was 92.1%, that of the artificial neural network was 94.1% and that of the multiple discriminant analysis model was 90.2%; therefore, all three aforesaid models have high power to forecast price manipulation, and there is no considerable difference among their forecasting powers.
Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity
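As a purely illustrative sketch of the logit step described above (not the authors' estimation), the code below fits a logistic regression on four firm-level features — company size, information clarity, P/E ratio and liquidity — and reports test-set accuracy. The feature values are random placeholders, not Tehran Stock Exchange data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(397, 4))        # [size, information clarity, P/E, liquidity], placeholders
y = rng.integers(0, 2, size=397)     # 1 = manipulated, 0 = non-manipulated

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
logit = LogisticRegression().fit(X_train, y_train)
print("test-set accuracy:", round(logit.score(X_test, y_test), 3))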
631 Analytical and Numerical Results for Free Vibration of Laminated Composites Plates
Authors: Mohamed Amine Ben Henni, Taher Hassaine Daouadji, Boussad Abbes, Yu Ming Li, Fazilay Abbes
Abstract:
The reinforcement and repair of concrete structures by bonding composite materials have become relatively common operations. Different types of composite materials can be used: carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP) as well as functionally graded material (FGM). The development of analytical and numerical models describing the mechanical behavior of civil engineering structures reinforced by composite materials is therefore necessary. These models will enable engineers to select, design, and size adequate reinforcements for the various types of damaged structures. This study focuses on the free vibration behavior of orthotropic laminated composite plates using a refined shear deformation theory. In these models, the distribution of transverse shear stresses is considered parabolic, satisfying the zero shear stress condition on the top and bottom surfaces of the plates without using shear correction factors. In this analysis, the equation of motion for simply supported thick laminated rectangular plates is obtained by using Hamilton’s principle. The accuracy of the developed model is demonstrated by comparing our results with solutions derived from other higher-order models and with data found in the literature. Besides, a finite element analysis is used to calculate the natural frequencies of laminated composite plates, and these are compared with those obtained by the analytical approach.
Keywords: Composites materials, laminated composite plate, shear deformation theory of plates, finite element analysis, free vibration.
630 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on the learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the website of the University of California, Irvine (UCI), which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform the best to predict 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
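A minimal sketch of the fixed-length look-back windowing that all of the compared models consume: the previous `window` hourly observations form the input and the value `horizon` steps ahead is the target. The sine series below is a stand-in for the hourly air quality data.

import numpy as np

def make_windows(series, window, horizon):
    # return (inputs, targets) for a look-back of `window` steps and a `horizon`-step-ahead target
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 20, 500))              # placeholder for an hourly signal
X, y = make_windows(series, window=24, horizon=1)     # one-day look-back, 1 hour ahead
print(X.shape, y.shape)                               # (476, 24) (476,)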
629 Prediction of Optimum Cutting Parameters to obtain Desired Surface in Finish Pass end Milling of Aluminium Alloy with Carbide Tool using Artificial Neural Network
Authors: Anjan Kumar Kakati, M. Chandrasekaran, Amitava Mandal, Amit Kumar Singh
Abstract:
The end milling process is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, and the surface roughness of the produced job plays an important role. In general, surface roughness affects wear resistance, ductility, and tensile and fatigue strength for machined parts and cannot be neglected in design. In the present work, an experimental investigation of end milling of an aluminium alloy with a carbide tool is carried out, and the effect of different cutting parameters on the response is studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (i.e., spindle speed, feed, and depth of cut). The MATLAB ANN toolbox, which works on the feed-forward back-propagation algorithm, is used for modeling purposes. A 3-12-1 network structure having the minimum average prediction error was found to be the best network architecture for predicting the surface roughness value. The network predicts surface roughness for unseen data, and the predictions were found to be good. For the desired surface finish of the component to be produced, many different combinations of cutting parameters are available. The optimum cutting parameters for obtaining the desired surface finish, while maximizing tool life, are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in MATLAB®.
Keywords: End milling, Surface roughness, Neural networks.
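An illustrative sketch of a 3-12-1 feed-forward network mapping spindle speed, feed and depth of cut to surface roughness, written here with scikit-learn rather than the MATLAB ANN toolbox used in the paper. The training data are synthetic placeholders, not the experimental measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.uniform([1000, 0.05, 0.2], [4000, 0.30, 2.0], size=(60, 3))   # speed (rpm), feed (mm/rev), depth (mm)
Ra = 8.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 60)          # synthetic roughness values

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)   # 3-12-1 structure
net.fit(scaler.transform(X), Ra)

print("predicted Ra:", net.predict(scaler.transform([[2500, 0.12, 1.0]])))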
628 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red Blood Cells (RBCs) or erythrocytes tend to form chain-like aggregates, called rouleaux, under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break in two-phase flow systems to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find RBC aggregation in a dynamic situation. Simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the velocity of sedimentation are extracted using image processing techniques. Based on the data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
Keywords: Red blood cell, Rouleaux, microfluidics, image processing, population balance modeling.
627 Application of AIMSUN Microscopic Simulation Model in Evaluating Side Friction Impacts on Traffic Stream Performance
Authors: H. Naghawi, M. Abu Shattal, W. Idewu
Abstract:
Side friction factors can be defined as all activities taking place at the side of the road and within the traffic stream which negatively affect traffic stream performance. If the effect of these factors is adequately addressed and managed, traffic stream performance and capacity could be improved. The main objective of this paper is to identify and assess the impact of different side friction factors on the traffic stream performance of a hypothesized urban arterial road. Hypothetical data were assumed mainly because there is no road operating under ideal conditions, with zero side friction, in developing countries. This is important for the creation of the base model, which is needed for comparison purposes. For this purpose, three essential steps were employed. In step one, a hypothetical base model was developed under ideal traffic and geometric conditions. In step two, 18 hypothetical alternative scenarios were developed, including side friction factors such as on-road parking, pedestrian movement, and the presence of trucks in the traffic stream. These scenarios were evaluated for one-, two-, and three-lane configurations and under different traffic volumes ranging from low to high. In step three, the impact of side friction in each scenario on speed-flow models was evaluated using the AIMSUN microscopic traffic simulation software. Generally, a noticeable negative shift in the speed-flow curves from the base conditions was observed for all scenarios. This indicates a negative impact of the side friction factors on free-flow speed and traffic stream average speed, as well as on capacity.
Keywords: AIMSUN, parked vehicles, pedestrians, side friction, traffic performance, trucks.
626 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least “important” information. In JPEG, both color components are downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled compared to downsampling the chrominance red in JPEG compression. However, the peak signal-to-noise ratio is higher when the chrominance red is downsampled compared to downsampling the chrominance blue. In particular, we use hats.jpg as a demonstration of JPEG compression using a low-pass filter and demonstrate that the image is compressed with barely any visual differences with both methods.
Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio, Compression Ratio.
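A rough sketch of the single-chroma comparison described above: an RGB image is converted to YCbCr, either Cb or Cr is 2×2 subsampled and restored, and the PSNR against the original is computed. A random image stands in for hats.jpg, and the DCT/quantization stages of real JPEG are omitted, so the numbers are only indicative.

import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc):
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128, ycc[..., 2] - 128
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def downsample_channel(img_rgb, channel):
    # 2x2 subsample one chroma channel (1 = Cb, 2 = Cr) and restore it by nearest neighbour
    ycc = rgb_to_ycbcr(img_rgb.astype(float))
    sub = ycc[::2, ::2, channel]
    ycc[..., channel] = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)
    return ycbcr_to_rgb(ycc)

image = np.random.default_rng(3).integers(0, 256, size=(128, 128, 3))
print("PSNR with Cb downsampled:", psnr(image, downsample_channel(image, 1)))
print("PSNR with Cr downsampled:", psnr(image, downsample_channel(image, 2)))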
625 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing signals together whose representations are compact in the working domain. In order to optimize transmission resources, each 4 × 4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4 × 4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for image transformation have been evaluated, such as polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR) in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have later been encoded and handled for generating chirps at a target rate of about two chirps per 4 × 4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: Chirp signals, Image multiplexing, Image transformation, Linear canonical transform, Polynomial approximation.
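A compact sketch of one of the block approximations listed above, the SVD: each 4 × 4 block is replaced by its rank-1 approximation, so fewer coefficients need to be transmitted per block, and the resulting error is summarized as PSNR. The image is random placeholder data and the rank-1 choice is an assumption made for illustration.

import numpy as np

def rank1_block_approx(image, block=4):
    # replace every block x block tile with its best rank-1 approximation from the SVD
    out = np.empty_like(image, dtype=float)
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            tile = image[i:i + block, j:j + block].astype(float)
            u, s, vt = np.linalg.svd(tile, full_matrices=False)
            out[i:i + block, j:j + block] = s[0] * np.outer(u[:, 0], vt[0, :])
    return out

img = np.random.default_rng(5).integers(0, 256, size=(64, 64)).astype(float)
approx = rank1_block_approx(img)
mse = np.mean((img - approx) ** 2)
print("PSNR (dB):", 10 * np.log10(255.0 ** 2 / mse))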
624 Quality of Life Assessment across the Cancer Continuum: Understanding the Role of an Exercise Rehabilitation Programme
Authors: Bernat-Carles Serdà Ferrer, Arantza Del Valle Gómez
Abstract:
The Quality of Life (QoL) paradigm is multidimensional, dynamic and modular, and its definition differs across the cancer continuum. The challenge in the interpretation of QoL data in clinical research is that QoL is influenced by psychological phenomena such as adaptation to illness. This research aims to obtain a valid and sensitive assessment of QoL change over the disease continuum, and to evaluate a rehabilitation programme aimed at inverting the observed decrease in QoL when patients return to daily living activities. The sample comprised 66 men. Patients were first assessed to establish a baseline (P1-diagnosis). This was followed by a post-test (P2-discharge) and a then-test measurement (P3-retrospective evaluation), and after returning home patients were randomized into experimental and control groups. The experimental group attended a rehabilitation programme over 24 weeks (P4). Results show that from baseline to post-test, QoL decreased significantly. The recalibration then-test confirmed a low QoL in all periods evaluated. Significant differences between the experimental and control groups prove the positive effect of the Exercise Rehabilitation Programme (ERP) on QoL. Understanding the real dynamics of QoL over time would help to adapt rehabilitation programmes by improving sensitivity and efficacy and provide professionals with a more accurate perception of the impact of treatment and side effects on patients’ QoL. Our results underline the importance of changing the approach adopted by health professionals towards one of watchful waiting on patients’ QoL until their complete recovery in daily life.
Keywords: Prostate cancer, quality of life, rehabilitation programme, response shift.
623 Investigation of Improved Chaotic Signal Tracking by Echo State Neural Networks and Multilayer Perceptron via Training of Extended Kalman Filter Approach
Authors: Farhad Asadi, S. Hossein Sadati
Abstract:
This paper presents the prediction performance of a feedforward Multilayer Perceptron (MLP) and Echo State Networks (ESN) trained with the extended Kalman filter. Feedforward neural networks and ESN are powerful neural networks which can track and predict nonlinear signals. However, their tracking performance depends on the specific signals or data sets, with a risk of instability accompanied by large errors. In this study, we explore this process by applying different network sizes and leaking rates for the prediction of nonlinear or chaotic signals in MLP neural networks. Major problems of ESN training, such as the initialization of the network and the improvement of the prediction performance, are tackled. The influence of the coefficient of the activation function in the hidden layer and other key parameters is investigated through simulation results. The extended Kalman filter is employed in order to improve the sequential and regulation learning rate of the feedforward neural networks. This training approach has vital features in the training of the network when signals have a chaotic or non-stationary sequential pattern. Minimization of the variance in each step of the computation, and hence smoothing of the tracking, was obtained by examining the results, indicating satisfactory tracking characteristics for certain conditions. In addition, the simulation results confirmed the satisfactory performance of both neural networks with modified parameterization in tracking the nonlinear signals.
Keywords: Feedforward neural networks, nonlinear signal prediction, echo state neural networks approach, leaking rates, capacity of neural networks.
622 Genome-Wide Analysis of BES1/BZR1 Gene Family in Five Plant Species
Authors: Jafar Ahmadi, Zhohreh Asiaban, Sedigheh Fabriki Ourang
Abstract:
Brassinosteroids (BRs) regulate cell elongation, vascular differentiation, senescence, and stress responses. BRs signal through the BES1/BZR1 family of transcription factors, which regulate hundreds of target genes involved in this pathway. In this research, a comprehensive genome-wide analysis of the BES1/BZR1 gene family was carried out in Arabidopsis thaliana, Cucumis sativus, Vitis vinifera, Glycine max and Brachypodium distachyon. Specifications of the desired sequences, dot plots and hydropathy plots were analyzed for the protein and genome sequences of the five plant species. The maximum amino acid length was attributed to protein sequence Brdic3g, with 374 aa, and the minimum amino acid length was attributed to protein sequence Gm7g, with 163 aa. The maximum instability index was attributed to protein sequence AT1G19350, equal to 79.99, and the minimum instability index was attributed to protein sequence Gm5g, equal to 33.22. The aliphatic index of these protein sequences ranged from 47.82 to 78.79 in Arabidopsis thaliana, 49.91 to 57.50 in Vitis vinifera, 55.09 to 82.43 in Glycine max, 54.09 to 54.28 in Brachypodium distachyon and 55.36 to 56.83 in Cucumis sativus. Overall, the data obtained from our investigation contribute to a better understanding of the complexity of the BES1/BZR1 gene family and provide the first step towards directing future experimental designs to perform systematic analysis of the functions of the BES1/BZR1 gene family.
Keywords: BES1/BZR1, Brassinosteroids, Phylogenetic analysis, Transcription factor.
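As a small illustration of one of the descriptors reported above, the sketch below computes the aliphatic index from a protein sequence using Ikai's formula, AI = X(Ala) + 2.9·X(Val) + 3.9·(X(Ile) + X(Leu)), where X is the mole percentage of each residue. The example sequence is made up and is not one of the BES1/BZR1 proteins analyzed in the study.

def aliphatic_index(seq):
    # mole percentage of alanine, valine, isoleucine and leucine in the sequence
    seq = seq.upper()
    pct = {aa: 100.0 * seq.count(aa) / len(seq) for aa in "AVIL"}
    return pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"])

example = "MAEVLKAGILSAVTTLLIVAAGKPLE"   # hypothetical sequence
print("Aliphatic index:", round(aliphatic_index(example), 2))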
621 An Approach towards Designing an Energy Efficient Building through Embodied Energy Assessment: A Case of Apartment Building in Composite Climate
Authors: Ambalika Ekka
Abstract:
In today’s world, the growing demand for urban built forms has resulted in the production and consumption of building materials, i.e., embodied energy in building construction, leading to pollution and greenhouse gas (GHG) emissions. Therefore, new buildings offer a unique opportunity to implement more energy-efficient design without compromising the performance of the building. The embodied energy of building materials forms a major contribution to the embodied energy in buildings. This paper presents an approach towards designing an energy-efficient apartment building through embodied energy assessment. The paper discusses the trend of residential development in Rourkela, including three case studies of contemporary houses, followed by architectural elements, number of storeys, predominant material use and plot sizes, using primary data; this results in the identification of the predominant materials used and other characteristics of the urban area. Further, the embodied energy coefficients of the various dominant building materials and of alternative materials manufactured in Indian industry are taken into consideration from secondary sources, i.e., a literature study. The paper analyses the embodied energy by estimating the materials and the operational energy of the proposed building, followed by altering the specifications of the materials for the building components, i.e., walls, flooring, windows, insulation and roof, through the Res Build India software, and the different options are compared with consideration of sustainability parameters. The paper finds that only the autoclaved aerated concrete block option reaches the Energy Performance Index benchmark of 69.35 kWh/m2 yr, saving 4% of operational energy; however, as embodied energy has no particular index, out of all the materials it has the highest embodied energy, 23206202.43 MJ.
Keywords: Energy efficient, embodied energy, energy performance index, building materials.
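A simplified sketch of the embodied energy accounting described above: each material quantity is multiplied by its embodied energy coefficient and summed, and the Energy Performance Index (EPI) is the annual operational energy per unit floor area. All quantities and coefficients below are illustrative assumptions, not the case-study values.

materials = {
    # material: (quantity in kg, embodied energy coefficient in MJ/kg) -- assumed values
    "autoclaved aerated concrete block": (150000, 3.5),
    "cement": (60000, 5.5),
    "steel": (20000, 24.4),
    "glass": (5000, 15.0),
}

embodied_energy_mj = sum(qty * coeff for qty, coeff in materials.values())

annual_operational_kwh = 85000.0               # assumed annual operational energy use
floor_area_m2 = 1200.0                         # assumed built-up area
epi = annual_operational_kwh / floor_area_m2   # kWh/m2 yr

print("Total embodied energy:", round(embodied_energy_mj), "MJ")
print("EPI:", round(epi, 2), "kWh/m2 yr (benchmark cited above: 69.35)")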