Search results for: measuring precision
155 Amino Acid Coated Silver Nanoparticles: A Green Catalyst for Methylene Blue Reduction
Authors: Abhishek Chandra, Man Singh
Abstract:
Highly stable and homogeneously dispersed amino acid coated silver nanoparticles (ANP) of ≈ 10 nm diameter, with an absorption band ranging from 420 to 430 nm, were prepared by adding AgNO3 solution to a solution of Azadirachta indica gum at 373.15 K. The amino acids were selected based on their polarity. The synthesized nanoparticles were characterized by UV-Vis and FTIR spectroscopy, HR-TEM, XRD, SEM and 1H-NMR. The coated nanoparticles were used as a catalyst for the reduction of methylene blue dye in the presence of Sn(II) in aqueous, anionic and cationic micellar media. The rate of reduction of the dye was determined spectrophotometrically by measuring the absorbance at 660 nm and followed the order Kcationic > Kanionic > Kwater. After 12 min and in the absence of the ANP, only 2%, 3% and 6% of the dye reduction was completed in aqueous, anionic and cationic micellar media, respectively, while in the presence of ANP coated by a polar neutral amino acid with a non-polar -R group, the reduction reached 84%, 95% and 98%, respectively. The ANP coated with a polar neutral amino acid having a non-polar -R group increased the rate of reduction of the dye by factors of 94, 3205 and 6370 in aqueous, anionic and cationic micellar media, respectively. The rate of reduction also increased threefold when the micellar medium was changed from anionic to cationic with the ANP coated by a polar neutral amino acid having a non-polar -R group.
Keywords: Silver nanoparticle, surfactant, methylene blue, amino acid.
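The rate constants quoted above come from the decay of the 660 nm absorbance. A minimal sketch of such a fit is shown below, assuming pseudo-first-order kinetics; the time points and absorbance values are invented for illustration and are not the authors' data.

```python
import numpy as np

# Illustrative absorbance-vs-time data at 660 nm (invented values, not the paper's measurements)
t = np.array([0, 2, 4, 6, 8, 10, 12])                      # minutes
A = np.array([1.00, 0.62, 0.38, 0.24, 0.15, 0.09, 0.06])   # methylene blue absorbance

# For a pseudo-first-order reaction, ln(A/A0) = -k*t; a straight-line fit gives k
slope, intercept = np.polyfit(t, np.log(A / A[0]), 1)
k_obs = -slope
print(f"k_obs ≈ {k_obs:.3f} min^-1")
```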
154 Rotor Bearing System Analysis Using the Transfer Matrix Method with Thickness Assumption of Disk and Bearing
Authors: Omid Ghasemalizadeh, Mohammad Reza Mirzaee, Hossein Sadeghi, Mohammad Taghi Ahmadian
Abstract:
There are many ways to find the natural frequencies of a rotating system. One of the most effective, owing to its precision and reliability, is the transfer matrix method. In this method the continuous system is subdivided into elements and the corresponding differential equations can be stated in matrix form. To analyze the shaft considered in this paper, the rotor is divided into several elements along the shaft, each with its own mass and moment of inertia, which makes it possible to define the transfer matrix. Choosing a larger number of elements enlarges the matrix and yields more accurate results. In this paper the dynamics of a rotor-bearing system is analyzed, considering the gyroscopic effect. To increase the accuracy of the model, the thicknesses of the disk and bearings are also taken into account, which leads to a more complicated matrix to be solved. Including these parameters changes the results considerably, and these differences are shown in the results. As stated above, defining the transfer matrix needed to reach the natural frequencies of the system requires introducing elements; for their boundary conditions, the bearings at the ends of the shaft are modeled as equivalent springs and dampers for the discretized system, while a continuous model is used for the shaft. With these considerations, exact results are obtained from the transfer matrix calculations. The results show that increasing the bearing thickness decreases the vibration amplitude, while the stiffness of the shaft and the natural frequencies of the system grow. Consequently, ignoring the influence of bearing and disk thicknesses leads to unrealistic answers.
Keywords: Rotor System, Disk and Bearing Thickness, Transfer Matrix, Amplitude.
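The rotor-bearing analysis above uses large transfer matrices with gyroscopic and thickness terms; the sketch below only illustrates the underlying transfer-matrix idea on the much simpler torsional (Holzer-type) case, propagating a state vector station by station and scanning the frequency for a zero residual at the free end. The inertia and stiffness values are invented, not the paper's model.

```python
import numpy as np

# Invented free-free torsional chain: disk inertias J (kg·m²) and shaft stiffnesses k (N·m/rad)
J = [2.0, 1.5, 1.0]
k = [1.0e5, 0.8e5]

def residual_torque(omega):
    """Transfer-matrix style sweep: propagate twist/torque through point and field matrices."""
    theta, T = 1.0, 0.0
    for i, Ji in enumerate(J):
        T += Ji * omega**2 * theta      # point matrix: inertia adds torque
        if i < len(k):
            theta -= T / k[i]           # field matrix: shaft twists under the running torque
    return T                            # must vanish at a natural frequency

# Scan for sign changes of the residual, then refine each bracket by bisection
omegas = np.linspace(1.0, 800.0, 8000)
res = [residual_torque(w) for w in omegas]
for i in range(len(omegas) - 1):
    if res[i] * res[i + 1] < 0:
        lo, hi = omegas[i], omegas[i + 1]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if residual_torque(lo) * residual_torque(mid) <= 0:
                hi = mid
            else:
                lo = mid
        print(f"natural frequency ≈ {0.5 * (lo + hi):.2f} rad/s")
```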
153 Forecast of Polyethylene Properties in the Gas Phase Polymerization Aided by Neural Network
Authors: Nasrin Bakhshizadeh, Ashkan Forootan
Abstract:
A major problem affecting the quality control of polymer in industrial polymerization is the lack of suitable on-line measurement tools to evaluate polymer properties such as the melt and density indices. The polymerization is ordinarily controlled manually by taking samples, measuring the quality of the polymer in the lab and recording the results. This method is highly time consuming and leads to producing a large number of incompatible products. The online application proposed in this study for estimating melt index and density is a neural network based on the input-output data of the polyethylene production plant. Temperature, the reactor bed level, the mass flow rates of ethylene, hydrogen and butene-1, and the molar concentrations of ethylene, hydrogen and butene-1 are used to establish the neural model. The neural network is trained on actual operational data using back-propagation and Levenberg-Marquardt techniques. The simulated results indicate that the neural network process model, established with three layers (one hidden layer) for forecasting the density and four layers for the melt index, is able to successfully predict these quality properties.
Keywords: Polyethylene, polymerization, density, melt index, neural network.
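A minimal sketch of the kind of data-driven property model described above, using scikit-learn's MLPRegressor with one hidden layer as a stand-in (scikit-learn trains with gradient-based solvers, not Levenberg-Marquardt). The eight process inputs and the target values are randomly generated placeholders, not plant data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder records: [T, bed level, ethylene/H2/butene-1 flows, ethylene/H2/butene-1 molar conc.]
X = rng.normal(size=(500, 8))
melt_index = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)   # synthetic target

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X[:400], melt_index[:400])
print("R² on held-out data:", model.score(X[400:], melt_index[400:]))
```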
152 Accurate Calculation of Free Frequencies of Beams and Rectangular Plates
Authors: R. Lassoued, M. Guenfoud
Abstract:
An accurate procedure to determine the free vibrations of beams and plates is presented. The natural frequencies are exact solutions of the governing vibration equations, which lead to a nonlinear homogeneous system. The linear and bilinear structures considered simulate a bridge. Its dynamic behavior is analyzed using the theory of an orthotropic plate simply supported on two sides and free on the other two. The plate can be excited by a convoy of constant or harmonic loads. Determining the dynamic response of the structures considered requires knowledge of the free frequencies and mode shapes of vibration, and our work is in this context: we develop a self-consistent calculation of the eigenfrequencies. The formulation is based on determining the solution of the differential equations of vibration. The boundary conditions corresponding to the mode shapes lead to a homogeneous system, and determining its non-trivial solutions leads to a nonlinear problem in the eigenfrequencies. We therefore develop a computer code for the determination of the eigenvalues. It is based on a bisection method with interpolation whose precision reaches 10^-12. Moreover, to determine the corresponding modes, the calculation algorithm uses Gaussian elimination with partial pivoting combined with an inverse power procedure. The eigenfrequencies of a plate simply supported along two opposite sides, with the two other sides free, are thus analyzed. The results can be generalized to the case of a beam by regarding it as a plate of small width. We give some examples of treated cases, and the comparison with results presented in the literature is completely satisfactory.
Keywords: Free frequencies, beams, rectangular plates.
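As an illustration of the "bisection with interpolation" root search described above, the sketch below solves the classical cantilever frequency equation cos(βL)cosh(βL) + 1 = 0 with SciPy's brentq (bisection combined with interpolation) to a 10^-12 tolerance. The beam properties are invented example values, and the paper itself treats plates and bridge-like structures, not this simple beam.

```python
import numpy as np
from scipy.optimize import brentq

# Characteristic (frequency) equation of a clamped-free beam: cos(x)*cosh(x) + 1 = 0, x = beta*L
f = lambda x: np.cos(x) * np.cosh(x) + 1.0

# Bracket the roots by scanning for sign changes, then refine with Brent's method
roots = []
xs = np.linspace(0.1, 15.0, 3000)
for a, b in zip(xs[:-1], xs[1:]):
    if f(a) * f(b) < 0:
        roots.append(brentq(f, a, b, xtol=1e-12))

E, I, rho, A, L = 210e9, 8.3e-6, 7850.0, 1e-2, 2.0   # assumed steel beam properties
omega = [(x / L) ** 2 * np.sqrt(E * I / (rho * A)) for x in roots]
print("beta*L:", [round(r, 4) for r in roots])        # ≈ 1.8751, 4.6941, 7.8548, ...
print("natural frequencies (rad/s):", [round(w, 1) for w in omega])
```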
151 Soil Moisture Control System: A Product Development Approach
Authors: Swapneel U. Naphade, Dushyant A. Patil, Satyabodh M. Kulkarni
Abstract:
In this work, we propose the concept and geometrical design of a soil moisture control system (SMCS) module by following the product development approach to develop an inexpensive, easy to use and quick to install product targeted towards agriculture practitioners. The module delivers water to the agricultural land efficiently by sensing the soil moisture and activating the delivery valve. We start by identifying the general needs of the potential customer. Then, based on the customer needs, we establish product specifications and identify important measurable quantities to evaluate our product. Keeping in mind the specifications, we develop various conceptual solutions of the product and select the best solution through concept screening and selection matrices. We then develop the product architecture by integrating the systems into the final product. In the end, the geometric design is done using human factors engineering concepts such as heuristic analysis, task analysis, and human error reduction analysis. The human factors analysis reveals the remedies which should be applied while designing the geometry and software components of the product. We find that, for a power-type grip, a grip diameter of 35 mm gives the best grip in terms of comfort and applied force.
Keywords: Agriculture, human factors, product design, soil moisture control.
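A minimal sketch of the sensing-and-valve logic that the SMCS abstract above describes, written with hypothetical sensor/valve drivers and assumed threshold values; the real module's hardware interface and set-points are not given in the abstract.

```python
import random
import time

DRY_THRESHOLD = 30.0   # % volumetric water content below which irrigation starts (assumed value)
WET_THRESHOLD = 45.0   # % at which irrigation stops (assumed); the gap gives hysteresis

def read_soil_moisture():
    """Stand-in for the real sensor driver: returns a simulated moisture reading in %."""
    return random.uniform(20.0, 50.0)

def set_valve(open_valve):
    """Stand-in for the real delivery-valve driver."""
    print("valve", "OPEN" if open_valve else "CLOSED")

def control_step(moisture, irrigating):
    """One pass of the hysteresis controller; returns the new irrigation state."""
    if not irrigating and moisture < DRY_THRESHOLD:
        set_valve(True)
        return True
    if irrigating and moisture > WET_THRESHOLD:
        set_valve(False)
        return False
    return irrigating

irrigating = False
for _ in range(10):                      # short simulated run instead of an endless loop
    irrigating = control_step(read_soil_moisture(), irrigating)
    time.sleep(0.1)
```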
150 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make detection and segmentation challenging. Although a number of open-source software tools and artificial intelligence (AI) methods exist for analyzing mitochondrial images, the scarcity of people who combine the medical expertise and the AI skills required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using open-source Python and the OpenCV library, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a fluorescence mitochondrial dataset. Ground truth labels generated using Labkit were used to evaluate the performance of our detection and segmentation model using precision, recall and Rand index. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
Keywords: 2D, Binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation.
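A compact sketch of the three-stage pipeline named above (pre-processing, binarization, coarse-to-fine segmentation) using OpenCV. The file name, CLAHE settings and minimum-area threshold are placeholders, not the parameters used in the study.

```python
import cv2
import numpy as np

img = cv2.imread("mitochondria.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# 1) Pre-processing: contrast-limited adaptive histogram equalization (CLAHE) plus denoising
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
pre = cv2.GaussianBlur(clahe.apply(img), (5, 5), 0)

# 2) Binarization: Otsu threshold separates bright mitochondria from the background
_, binary = cv2.threshold(pre, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Coarse-to-fine segmentation: morphological cleanup, then per-object contours
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep objects above a (placeholder) minimum area; shape statistics can serve as descriptors
detections = [c for c in contours if cv2.contourArea(c) > 20]
print("detected objects:", len(detections))
```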
149 Measuring Relative Efficiency of Korean Construction Company using DEA/Window
Authors: Jung-Lo Park, Sung-Sik Kim, Sun-Young Choi, Ju-Hyung Kim, Jae-Jun Kim
Abstract:
The sub-prime mortgage crisis which began in the US is regarded as the most severe economic crisis since the Great Depression in the early 20th century. Hidden problems in the efficient operation of businesses were disclosed all at once, and many financial institutions went bankrupt and filed for court receivership. The collapse of the physical market led to the bankruptcy of manufacturing and construction businesses. This study analyzes the dynamic efficiency of construction businesses during the five years around the global financial crisis. By uncovering the trend and stability of the efficiency of a construction business, this study's objective is to improve the management efficiency of construction businesses in the ever-changing construction market. Variables were selected by analyzing corporate information on the top 20 construction businesses in Korea, which were analyzed for static efficiency in 2008 and dynamic efficiency between 2006 and 2010. Unlike other studies, this study succeeded in deducing the efficiency trend and stability of a construction business over five years by using the DEA/Window model. Using the analysis results, efficient and inefficient companies could be identified. In addition, relative efficiency among DMUs was measured by comparing the relationship between the input and output variables of the construction businesses. This study can be used as a reference to improve management efficiency for companies with low efficiency, based on the efficiency analysis of construction businesses.
Keywords: Construction Company, DEA, DEA/Window, Efficiency Analysis
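An illustrative input-oriented CCR DEA sketch with SciPy's linprog; in a DEA/Window analysis each firm-year inside a window is simply treated as a separate DMU before solving the same program. The input and output matrices below are random placeholders, not the Korean construction data, and the CCR model is shown as one common DEA variant rather than the exact specification of the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = rng.uniform(1, 10, size=(2, 8))   # inputs:  2 measures x 8 DMUs (placeholder data)
Y = rng.uniform(1, 10, size=(3, 8))   # outputs: 3 measures x 8 DMUs (placeholder data)

def ccr_efficiency(j0):
    """Input-oriented CCR envelopment model: min theta s.t. X·lam <= theta*x0, Y·lam >= y0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # decision variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[:, [j0]], X])             # X·lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # -Y·lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```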
148 The Concept of an Agile Enterprise Research Model
Authors: Maja Sajdak
Abstract:
The aim of this paper is to present the concept of an agile enterprise model and to initiate discussion on the research assumptions of the model presented. The implementation of the research project "The agility of enterprises in the process of adapting to the environment and its changes" began in August 2014 and is planned to last three years. The article has the form of a work-in-progress paper which aims to verify and initiate a debate over the proposed research model. In the literature there are very few publications relating to research into agility; it can be concluded that the most controversial issue in this regard is the method of measuring agility. In previous studies the operationalization of agility was often fragmentary, focusing only on selected areas of agility, for example manufacturing, or analysing only selected sectors. As a result, the measures created to date can only be treated as contributory to the development of precise measurement tools. This research project aims to fill a cognitive gap in the literature with regard to the conceptualization and operationalization of an agile company. Thus, the original contribution of the author of this project is the construction of a theoretical model that integrates manufacturing agility (consisting mainly in adaptation to the environment) and strategic agility (based on proactive measures). The author of this research project is primarily interested in the attributes of an agile enterprise which indicate that the company is able to rapidly adapt to changing circumstances and behave proactively.
Keywords: Agile company, acuity, entrepreneurship, flexibility, research model, strategic leadership.
147 Impact of Dynamic Capabilities on Knowledge Management Processes
Authors: Farzad Yavari, Fereydoun Ohadi
Abstract:
Today, with the development and growth of technology and extreme environmental changes, organizations need to identify opportunities and foster creativity and innovation in order to maintain or improve their competitive position. In this regard, the resources and assets of the organization must be coordinated and reviewed in accordance with the orientation of the strategy. One of the competitive advantages of the present age is knowledge management, which equips the organization with up-to-date knowledge, disseminates it among employees and uses it in the development of products and services. Therefore, this research investigates the impact of the components of dynamic capabilities (sense, seize, and reconfiguration) on knowledge management processes (knowledge acquisition, integration and utilization) in the MAPNA Engineering and Construction Company, using a field survey and an applied research method. For this purpose, a questionnaire with 15 questions on the dynamic capability components and 15 questions measuring the knowledge management components was distributed among 46 employees of the knowledge management organization. The validity of the questionnaire was evaluated through content validity and its reliability with Cronbach's alpha coefficient. The Pearson correlation test and the structural equation technique were used to analyze the data. The results indicate a significant positive correlation between the components of dynamic capabilities and knowledge management.
Keywords: Dynamic capabilities, knowledge management, sense capability, seize capability, reconfiguration capability, knowledge acquisition, knowledge integration, knowledge utilization.
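A small sketch of the reliability computation mentioned above (Cronbach's alpha over questionnaire items), applied to randomly generated Likert-style responses; the data and the 15-item grouping are placeholders, not the MAPNA survey.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x questions matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(46, 1))                   # 46 respondents sharing one common factor
responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(46, 15))), 1, 5)
print("Cronbach's alpha ≈", round(cronbach_alpha(responses), 3))
```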
146 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioural Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, a new method is proposed for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of a household's satisfactory state (i.e. satisfaction) is defined in a satisfaction string. The duration of the satisfactory state for a given period is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence, it is impossible to measure or to know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e. corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.
Keywords: Heisenberg Uncertainty Principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality.
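The abstract above does not give its explicit formulas, so the sketch below is only a loose illustration of the idea: each household's satisfaction is a superposition of sinusoidal waves with its own amplitudes and frequencies, "total satisfactory time" is measured as the time spent above a threshold, and inequality across households is then summarized with a Gini coefficient. The Gini choice, the threshold and all parameters are assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 2000)            # observation window (arbitrary units)

def satisfactory_time(amplitudes, freqs, phases, threshold=0.0):
    """Superimpose harmonic satisfaction waves and measure time spent above a threshold."""
    wave = sum(a * np.sin(2 * np.pi * f * t + p) for a, f, p in zip(amplitudes, freqs, phases))
    return np.mean(wave > threshold) * (t[-1] - t[0])

def gini(x):
    """Standard Gini coefficient of a set of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

households = [satisfactory_time(rng.uniform(0.5, 2.0, 3), rng.uniform(0.1, 1.0, 3),
                                rng.uniform(0, 2 * np.pi, 3)) for _ in range(100)]
print("Gini of total satisfactory time ≈", round(gini(households), 3))
```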
145 The Empirical Survey on the Effect of Using Media in Explosive Forming of Tubular Shells
Authors: V. Hadavi, J. Zamani, R. Hosseini
Abstract:
The special and unique advantages of explosive forming have expanded its use in different industries. Given the importance of improving current explosive forming techniques for increasing the efficiency of and control over the procedure, this paper illustrates the effects of air and water as the energy-conveying medium and the differences between them. A large number of explosive forming tests were conducted on two sizes of thin-walled cylindrical shells using air and water as the working medium. Comparative diagrams of the maximum radial deflection of work-pieces of the same size, as a function of the scaled distance, show that for points with the same scaled distance the maximum radial deformation caused by underwater explosive loading is 4 to 5 times greater than the deflection of shells explosively formed in air. The results of this experimental research have also been compared with other studies, which shows that using water as the energy-conveying medium increases the efficiency by up to 4.8 times. The effect of the medium on the failure modes of the shells and on the necking mechanism of the specimen walls under explosive loading is also discussed. Measurements of the tested specimens show that the increase in internal volume is accompanied by necking of the walls, which finally results in radial rupture of the structure.
Keywords: Explosive Forming, Energy Conveying Medium, Tubular Shell
144 Acceleration-Based Motion Model for Visual SLAM
Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. This model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, the constant velocity assumption is often violated in practice. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To provide a more accurate description of the camera pose acceleration, we separate the pose transformation matrix into its rotation matrix and translation vector components, with the rotation matrix represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant, and based on this assumption the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method gives a more accurate initial pose estimate, resulting in improvements of 6.61% and 6.46% in the accuracy of the ORB-SLAM3 system on the two test sequences, respectively.
Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.
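A sketch of the constant-acceleration prediction idea described above, working on the relative motion expressed as a rotation vector and a translation, as in the abstract. It is a paraphrase for illustration rather than the authors' exact formulation, and the example poses are arbitrary.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def to_T(rotvec, t):
    """Build a 4x4 pose from a rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = t
    return T

def relative(T_a, T_b):
    """Relative motion taking frame a to frame b."""
    return np.linalg.inv(T_a) @ T_b

def predict_next(T_km2, T_km1, T_k):
    """Assume the change in relative rotation vector and translation stays constant."""
    d1, d2 = relative(T_km2, T_km1), relative(T_km1, T_k)
    r1, r2 = R.from_matrix(d1[:3, :3]).as_rotvec(), R.from_matrix(d2[:3, :3]).as_rotvec()
    t1, t2 = d1[:3, 3], d2[:3, 3]
    d_pred = to_T(r2 + (r2 - r1), t2 + (t2 - t1))   # extrapolate the inter-frame motion
    return T_k @ d_pred                              # initial guess for the next camera pose

# Arbitrary example trajectory of three camera poses
T0 = to_T([0.00, 0.00, 0.00], [0.00, 0.0, 0.0])
T1 = to_T([0.00, 0.02, 0.00], [0.10, 0.0, 0.0])
T2 = to_T([0.00, 0.05, 0.00], [0.22, 0.0, 0.0])
print(predict_next(T0, T1, T2).round(3))
```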
143 Application of a Similarity Measure for Graphs to Web-based Document Structures
Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian, Max Mühlhauser
Abstract:
Due to the tremendous amount of information provided by the World Wide Web (WWW), developing methods for mining the structure of web-based documents is of considerable interest. In this paper we present a similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as linear integer strings whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. We apply the well-known technique of sequence alignment to a novel and challenging problem: measuring the structural similarity of generalized trees. In other words, we first transform our graphs, considered as high-dimensional objects, into linear structures; then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem to develop an efficient graph similarity measure. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based document structures.
Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.
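A toy sketch of the overall idea above: turn each graph into an integer property string (here simply the number of nodes per BFS level, which is one possible structural property and not necessarily the one used by the authors) and score two graphs by aligning those strings with a standard Needleman-Wunsch-style dynamic program.

```python
from collections import deque

def level_string(adj, root=0):
    """Integer property string: number of nodes at each BFS level of the hierarchy."""
    seen, levels = {root}, {}
    q = deque([(root, 0)])
    while q:
        v, d = q.popleft()
        levels[d] = levels.get(d, 0) + 1
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                q.append((w, d + 1))
    return [levels[d] for d in sorted(levels)]

def align_score(a, b, gap=-1):
    """Needleman-Wunsch alignment of two integer strings; matches are penalized by value difference."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 2 - abs(a[i - 1] - b[j - 1])
            D[i][j] = max(D[i - 1][j - 1] + match, D[i - 1][j] + gap, D[i][j - 1] + gap)
    return D[n][m]

g1 = {0: [1, 2], 1: [3, 4], 2: [5]}          # two small web-like hierarchies (toy examples)
g2 = {0: [1, 2, 3], 1: [4], 2: [5, 6]}
print(level_string(g1), level_string(g2),
      "alignment score:", align_score(level_string(g1), level_string(g2)))
```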
142 Study of Natural Patterns on Digital Image Correlation Using Simulation Method
Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish
Abstract:
Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in experimental mechanics. Compared with physical measuring devices such as strain gauges, which provide very restricted coverage and are expensive to deploy widely, the DIC technique provides full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. Studying the effect of natural patterns on the DIC technique is important because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. The subset size used in DIC affects the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameter (correlation coefficient), high similarity between two subsets can cause the DIC process to fail and make the result less accurate. Images of good and bad quality for DIC methods are presented, and, more importantly, a systematic way is provided to evaluate the quality of images with natural patterns before the measurement devices are installed.
Keywords: Digital image correlation (DIC), Deformation simulation, Natural pattern, Subset size.
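A minimal sketch of the subset matching at the heart of DIC: a reference subset is compared against candidate positions in the deformed image with the zero-normalized cross-correlation (ZNCC) coefficient, and the best match gives the integer-pixel displacement. The images here are synthetic random speckle and the subset/search sizes are arbitrary choices, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((200, 200))                         # synthetic "natural pattern" reference image
deformed = np.roll(ref, shift=(3, 5), axis=(0, 1))   # rigid shift of (3, 5) pixels as the "deformation"

def zncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

subset, search = 31, 10                              # subset size and half search range (arbitrary)
y0 = x0 = 80                                         # top-left corner of the reference subset
template = ref[y0:y0 + subset, x0:x0 + subset]

best, best_c = (0, 0), -1.0
for dy in range(-search, search + 1):
    for dx in range(-search, search + 1):
        window = deformed[y0 + dy:y0 + dy + subset, x0 + dx:x0 + dx + subset]
        c = zncc(template, window)
        if c > best_c:
            best, best_c = (dy, dx), c
print("estimated displacement:", best, "ZNCC:", round(best_c, 3))
```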
141 Carvacrol Attenuates Lung Injury in Rats with Severe Acute Pancreatitis
Authors: Salim Cerig, Fatime Geyikoglu, Pınar Akpulat, Suat Colak, Hasan Turkez, Murat Bakir, Mirkhalil Hosseinigouzdagani, Kubra Koc
Abstract:
This study was designed to evaluate whether carvacrol (CAR) could provide protection against lung injury caused by the development of acute pancreatitis. The rats were randomized into groups to receive (I) no therapy; (II) 50 μg/kg cerulein at 1 h intervals in four intraperitoneal (i.p.) injections; (III) 50, 100 and 200 mg/kg CAR in one i.p. injection; and (IV) cerulein+CAR, with CAR given 2 h after the cerulein injection. Twelve hours later, serum samples were obtained to assess pancreatic function through the lipase and amylase values. The animals were euthanized and lung samples were excised. The specimens were stained with hematoxylin-eosin (H&E), periodic acid–Schiff (PAS), Mallory's trichrome and amyloid stains. Additionally, oxidative DNA damage was determined by measuring increases in 8-hydroxy-deoxyguanosine (8-OH-dG) adducts. The results showed that the serum activities of lipase and amylase in AP rats were significantly reduced after the therapy (p<0.05). We also found that the 100 mg/kg dose of CAR significantly decreased 8-OH-dG levels. Moreover, severe pathological findings in the lung such as necrosis, inflammation, congestion, fibrosis, and thickened alveolar septa were attenuated in the AP+CAR groups when compared with the AP group. Finally, the protective effect on the lung is clear, and CAR is an effective therapy for lung injury caused by AP.
Keywords: Antioxidant activity, carvacrol, experimental acute pancreatitis, lung injury, oxidative DNA damage.
140 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility
Authors: Ali Hamadi Dicko, Nicolas Tong-Yette, Benjamin Gilles, François Faure, Olivier Palombi
Abstract:
A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.
Keywords: Hybrid, modeling, fast simulation, lumbar spine.
139 Preliminary Study of the Phonological Development in Three- and Four-Year-Old Bulgarian Children
Authors: Tsvetomira Braynova, Miglena Simonska
Abstract:
The article presents the results of research on phonological processes in three- and four-year-old children. A test created for the purpose of the study was developed and conducted among 120 children. The study included three areas of research: the level of words (96 words), the level of sentence repetition (10 sentences) and the level of generating the child's own speech from a picture (15 pictures). The test also gives additional information about the articulation errors of the assessed children. The main purpose of the research is to analyze all phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors children make are sound substitution, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis does not prove that the more words a child can repeat in the "repeated speech" part, the more words they can be expected to generate in the "sentence generation" part. The results of this study show that the task of naming a word provides sufficient and representative information to assess the child's phonology.
Keywords: Articulation, phonology, speech, language development.
138 Fuzzy Logic System for Tractive Performance Prediction of an Intelligent Air-Cushion Track Vehicle
Authors: Altab Hossain, Ataur Rahman, A. K. M. Mohiuddin, Yulfian Aminanda
Abstract:
A fuzzy logic system (FLS) is used in this study to predict the tractive performance, in terms of traction force and motion resistance, of an intelligent air-cushion track vehicle operating on swamp peat. The system effectively controls the intelligent air-cushion system by measuring the vehicle traction force (TF), motion resistance (MR), cushion clearance height (CH) and cushion pressure (CP). An ultrasonic displacement sensor, a pull-in solenoid electromagnetic switch, a pressure control sensor, a microcontroller, and a battery pH sensor are incorporated with the fuzzy logic system to investigate the TF, MR, CH, and CP experimentally. In this study, the tractive performance predicted by the FLS is compared with the experimentally measured values. The mean relative errors between actual and predicted values from the FLS model for traction force and total motion resistance are found to be 5.58% and 6.78%, respectively. For all parameters, the relative errors of the predicted values are within the acceptable limits. The goodness of fit of the predictions from the FLS model for TF and MR is found to be 0.90 and 0.98, respectively.
Keywords: Cushion pressure, Fuzzy logic, Motion resistance, Traction force.
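The comparison metrics quoted above (mean relative error and goodness of fit) can be reproduced for any predicted-versus-measured series as below; goodness of fit is computed here as the coefficient of determination R², which is one common choice and an assumption, and the numbers are invented placeholders rather than the vehicle data.

```python
import numpy as np

def mean_relative_error(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / np.abs(actual)) * 100.0

def goodness_of_fit(actual, predicted):
    """Coefficient of determination R^2."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

tf_measured = [2.1, 2.4, 2.9, 3.3, 3.8]     # placeholder traction-force values (kN)
tf_predicted = [2.0, 2.5, 2.8, 3.4, 3.7]    # placeholder FLS predictions (kN)
print("MRE = %.2f %%" % mean_relative_error(tf_measured, tf_predicted))
print("R^2 = %.3f" % goodness_of_fit(tf_measured, tf_predicted))
```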
137 Design, Simulation, and Implementation of a Digital Pulse Oxygen Saturation Measurement System Using the Arduino Microcontroller
Authors: Muhibul Haque Bhuyan, Md. Refat Sarder
Abstract:
If a person can monitor his/her oxygen saturation level intermittently, he/she can identify a deteriorating condition early and seek a doctor's help. This paper reports the design, simulation, and implementation of a low-cost pulse oxygen saturation measurement device based on a reflective photoplethysmography (PPG) system, with an integrated circuit sensor as the fundamental component of this health status checking device. The physiological parameter measured is the blood oxygen saturation level (SpO2) in the peripheral capillaries. The work has been implemented using an Arduino Uno R3 microcontroller along with this sensor integrated circuit (IC). The system was designed in the Proteus environment and then simulated to check its performance, after which the hardware implementation was carried out. We used a clip-type optical sensor to sense the arterial oxygen saturation level from the fingertip of an individual; the signal was then converted into digital data in the microcontroller through its program instructions. The designed system was tested by measuring the SpO2 level of several people of different ages, from 12 to 57 years. In addition, the same people were tested using a standard commercial device. The test results were very satisfactory, as the average percentage error was only 1.59%.
Keywords: Digital pulse oxygen saturation level, oximeter, measurement, design, simulation, implementation, proteus, Arduino Uno microcontroller.
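As background on how a PPG front-end typically arrives at SpO2 (the device above runs on an Arduino; this background sketch is in Python): the classical ratio-of-ratios is combined with a generic linear calibration, SpO2 ≈ 110 − 25·R. Both the formula's coefficients and the synthetic signals are assumptions for illustration, not the calibration implemented in the sensor IC or in the paper.

```python
import numpy as np

def spo2_from_ppg(red, ir):
    """Ratio-of-ratios estimate from red and infrared PPG traces (AC/DC per channel)."""
    red, ir = np.asarray(red, float), np.asarray(ir, float)
    r_ratio = ((red.max() - red.min()) / red.mean()) / ((ir.max() - ir.min()) / ir.mean())
    return 110.0 - 25.0 * r_ratio     # generic empirical calibration line (assumed coefficients)

t = np.linspace(0, 5, 500)                                  # 5 s of synthetic PPG at 100 Hz
red = 1.00 + 0.009 * np.sin(2 * np.pi * 1.2 * t)            # red channel: DC plus pulsatile AC
ir = 1.20 + 0.020 * np.sin(2 * np.pi * 1.2 * t)             # infrared channel
print("estimated SpO2 ≈ %.1f %%" % spo2_from_ppg(red, ir))
```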
136 Issues in Organizational Assessment: The Case of Frustration Tolerance Measurement in Mexico
Authors: David Ruiz, Carlos Nava, Roberto Carbajal
Abstract:
The psychological profile has become one of the most important sources of information in individual selection and the hiring process in any organization. Psychological instruments are used to collect data about variables considered critically important for performance at work. However, because of conceptual chaos in organizational psychology, most of the information provided by psychological testing is not directly useful for Mexican human resources professionals to make hiring decisions. The aims of this paper are 1) to underline the lack of conceptual precision in the theoretical foundations of testing in Mexico and 2) to present a reliability and validity analysis of a frustration tolerance instrument created as an alternative to heuristically conducted individual assessment in organizations. First, a description of assessment conditions in Mexico is given. Second, an instrument and a theoretical framework are presented as an alternative to current assessment practices in the country. A total of 65 psychology students of the Iztacala Superior Studies Faculty were assessed. Cronbach's alpha coefficient was calculated and an exploratory factor analysis was carried out to test the scale's unidimensionality. The reliability analysis revealed good internal consistency of the scale (Cronbach's α = 0.825). The factor analysis produced 4 factors for the scale; however, the factor loadings and explained variance support the scale's unidimensionality. It is concluded that the instrument has good psychometric properties that will allow human resources professionals to collect useful data. Different possibilities for conducting psychological assessment are suggested for future development.
Keywords: Psychological assessment, frustration tolerance, human resources, organizational psychology.
135 Measuring the Amount of Eroded Soil and Surface Runoff Water in the Field
Authors: Abdulfatah Faraj Aboufayed
Abstract:
Water erosion is one of the most important soil problems in the Jebel Nefusa area located in northwest Libya; therefore, an erosion station was established at the Faculty of Veterinary and Dry Farming Research Station of the University of Al-Jabal Al-Gharbi in Zentan. The station is 72.6 feet long and 6 feet wide, with a slope of 3%. It was established to measure the amount of soil eroded and the amount of surface runoff water produced by each rainstorm during the 1995/96 and 1996/97 seasons. The monitoring shows that the two seasons differed in the number of rainstorms, which produced differences in the amount of surface runoff water and the amount of soil eroded between the two seasons. Although the slope is low (3%), the soil texture is sandy and the land was ploughed twice during each season, surface runoff and soil erosion still occurred. The average amount of eroded soil was 3792 grams (g) per season and the average amount of surface runoff water was 410 liters (L) per season. The amount of surface runoff water would be much greater from the Jebel Nefusa upland with its steep slopes, and collecting it would save a valuable amount of water that is otherwise lost as runoff, while this area is in desperate need of such water. The regression analysis of variance shows a strong correlation between rainfall depth and the two dependent variables (the amount of surface runoff water and the amount of eroded soil). It also shows a strong correlation between the amount of surface runoff water and the amount of eroded soil.
Keywords: Rain, Surface runoff water, Soil, Water erosion, Soil erosion.
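A small sketch of the regression relationship reported above (rainfall depth against runoff and eroded soil) using scipy.stats.linregress; the per-storm values below are invented for illustration and are not the Jebel Nefusa measurements.

```python
import numpy as np
from scipy.stats import linregress

# Invented per-storm records: rainfall depth (mm), runoff (L), eroded soil (g)
rain = np.array([4.0, 7.5, 10.2, 14.8, 18.3, 22.1])
runoff = np.array([10.0, 32.0, 55.0, 90.0, 120.0, 150.0])
soil = np.array([90.0, 300.0, 520.0, 860.0, 1150.0, 1400.0])

for name, y in [("runoff (L)", runoff), ("eroded soil (g)", soil)]:
    fit = linregress(rain, y)                       # least-squares line and correlation coefficient
    print(f"{name}: slope = {fit.slope:.1f}, r = {fit.rvalue:.3f}")
```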
134 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images, and then tries to localize eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely to be very close to the position whose Gabor jet has the best similarity to a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, which causes a much greater computational burden. In this paper, we propose a multi-scale approach with only a small increase in computational burden: one first localizes eyes based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and then localizes eyes in the original-resolution face image using the eye coordinates found in the coarse-scaled image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
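A condensed sketch of the Gabor-jet machinery the method above relies on: build a small multi-frequency, multi-orientation Gabor filter bank with OpenCV, collect the filter responses at a pixel into a "jet", and compare jets with a normalized similarity. The bank parameters, image sizes and coordinates are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(2.0, 4.0), thetas=4, lambd=8.0):
    """Small Gabor filter bank over a few scales (sigmas) and orientations."""
    kernels = []
    for sigma in sigmas:
        for k in range(thetas):
            theta = k * np.pi / thetas
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0))
    return kernels

def jet(image, x, y, kernels):
    """Gabor jet: vector of filter-response magnitudes at pixel (x, y)."""
    return np.array([abs(cv2.filter2D(image, cv2.CV_32F, k)[y, x]) for k in kernels])

def jet_similarity(j1, j2):
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

img = np.float32(np.random.rand(120, 120))       # stand-in for a face image
coarse = cv2.resize(img, (60, 60))               # multi-scale: search first in the coarse image
bank = gabor_bank()
model_jet = jet(img, 40, 50, bank)               # an Eye Model Bunch entry would store jets like this
print("similarity at a coarse candidate:", jet_similarity(model_jet, jet(coarse, 20, 25, bank)))
```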
133 Indicators as Early Warning Signal Performance to Solve Underlying Safety Problem before They Emerge as Accident Risks
Authors: Benson Chizubem
Abstract:
Because of the severe hazards that substantially impact workers' lives and cause asset losses, the oil and gas industry has set a goal of zero occurrences or accidents in operations. Using leading indicators to measure and assess an organization's safety performance is a proactive approach to safety management. It also provides early warning signals for solving inherent safety issues before they lead to an accident in the study industry. The analysis of the performance of these indicators was based on a questionnaire methodology. A total of 1000 questionnaires were distributed to workers, of which 327 were returned to the research team. The collected data were analysed to evaluate the workers' perceptions of indicator performance. The analysis identified safety training, the safety system, safety supervision, safety rules and procedures, safety auditing, strategies and policies, management commitment, safety meetings and safety behaviour as potential leading indicators that are capable of measuring organizational safety performance and of providing early warning signals of weak safety areas in an operational environment. The findings of this study provide safety researchers and industrial safety practitioners with helpful information on improving the existing safety monitoring process in the oil and gas industry, both locally and globally, as a proactive action.
Keywords: Early warning, safety, accident risks, oil and gas industry.
132 Value Analysis Dashboard in Supply Chain Management: Real Case Study from Iran
Authors: Seyedehfatemeh Golrizgashti, Seyedali Dalil
Abstract:
The goal of this paper is to propose a supply chain value dashboard for home appliance manufacturing firms to create more value for all stakeholders via a balanced scorecard approach. The balanced scorecard is an effective approach that managers have used to evaluate supply chain performance in many fields, but not enough attention has been paid to all supply chain stakeholders, to improving value creation, or to quantitatively defining the correlation between value indicators and performance measures. In this research, the key stakeholders in the home appliance supply chain, the value indicators for creating more value for stakeholders, and the most important metrics for evaluating supply chain value performance based on the balanced scorecard approach were selected via a literature review. The most important indicators were then identified based on experts' judgment, acquired through a survey focused on creating more value for stakeholders. Structural equation modelling was used to disclose the relations between the value indicators and the balanced scorecard metrics. The important result of this research is the identification of an effective value dashboard for creating more value for all stakeholders in the supply chain via the balanced scorecard approach, based on an empirical study covering ten home appliance manufacturing firms in Iran. Home appliance manufacturing firms can increase their stakeholders' satisfaction by using this value dashboard.
Keywords: Supply chain management, balanced scorecard, value, structural modeling, stakeholders.
131 Biosorption of Heavy Metals by Low Cost Adsorbents
Authors: Azam Tabatabaee, Fereshteh Dastgoshadeh, Akram Tabatabaee
Abstract:
This paper describes the use of by-products as adsorbents for removing heavy metals from aqueous effluent solutions. Almond skin, walnut shell, sawdust, rice bran and eggshell were evaluated as metal ion adsorbents in aqueous solutions, and a comparative study was done with commercial adsorbents such as ion exchange resins and activated carbon. Batch experiments were conducted to determine the affinity of all of the biomasses for Cd(II), Cr(III), Ni(II), and Pb(II) metal ions at pH 5. The rate of metal ion removal from the synthetic wastewater by the biomass was evaluated by measuring the final concentration of the synthetic wastewater. At a metal ion concentration of 50 mg/L, eggshell adsorbed high levels (98.6-99.7%) of Pb(II) and Cr(III) and walnut shell adsorbed high levels (35.3-65.4%) of Ni(II) and Cd(II). This study has shown that by-products are excellent adsorbents for the removal of toxic ions from wastewater, with efficiency comparable to commercially available adsorbents but at a reduced cost. Statistical analyses using the independent-samples t-test and one-way ANOVA for comparison of the adsorption of the various elements showed that, for some elements, there is no significant difference between the adsorption percentages of the by-products and the commercial adsorbents.
Keywords: Adsorbents, heavy metals, commercial adsorbents, wastewater, by-products.
130 A Comprehensive Review of Adaptive Building Energy Management Systems Based on Users' Feedback
Authors: P. Nafisi Poor, P. Javid
Abstract:
Over the past few years, the idea of adaptive buildings and, specifically, adaptive building energy management systems (ABEMS) has become popular. Well-performed energy management creates a balance between energy consumption and user comfort; therefore, in new energy management models, efficient energy consumption is not the sole factor, and user comfort is also considered in the calculations. One of the main ways of measuring this factor is by analyzing user feedback on the conditions to understand whether users are satisfied with them or not. This paper provides a comprehensive review of recent approaches to energy management systems based on user feedback and then compares them in terms of efficiency and accuracy, to understand which approaches were more accurate and which resulted in a more efficient way of minimizing energy consumption while maintaining user comfort. It was concluded that the highest accuracy rate among the presented works was 95% in determining satisfaction, and that up to 51.08% energy savings can be achieved without disturbing the user's comfort. Considering the growing interest in designing and developing adaptive buildings, these studies can support diverse inquiries into this subject and can be used as a resource for studies and research on efficient energy consumption while maintaining the comfort of users.
Keywords: Adaptive buildings, energy efficiency, intelligent buildings, user comfortability.
129 Qualitative Data Analysis for Health Care Services
Authors: Taner Ersoz, Filiz Ersoz
Abstract:
This study was designed to enable the application of a multivariate technique to the interpretation of categorical data for measuring satisfaction with health care services in Turkey. The data were collected from a total of 17726 respondents. The establishment of the sample group and the collection of the data were carried out by a joint team from the Ministry of Health and the Turkish Statistical Institute (TurkStat) of Turkey. Multiple correspondence analysis (MCA) was used on the data of the 2882 respondents who answered the questionnaire in full. The multiple correspondence analysis indicated that, in the evaluation of health services, females, public employees, and younger and more highly educated individuals were more concerned and more likely to complain than males, private sector employees, and older and less educated individuals. Overall, 53% of the respondents were pleased with the improvements in health care services in the past three years. This study demonstrates public awareness of health services and health care satisfaction in Turkey. It was found that most of the respondents were pleased with the improvements in health care services over the past three years. Awareness of health service quality increases with education level. Older individuals and males appear to have lower expectations of health services.
Keywords: Multiple correspondence analysis, optimal scaling, multivariate categorical data, health care services, health satisfaction survey, statistical visualizing, Turkey.
128 A CT-based Monte Carlo Dose Calculations for Proton Therapy Using a New Interface Program
Authors: A. Esmaili Torshabi, A. Terakawa, K. Ishii, H. Yamazaki, S. Matsuyama, Y. Kikuchi, M. Nakhostin, H. Sabet, A. Ishizaki, W. Yamashita, T. Togashi, J. Arikawa, H. Akiyama, K. Koyata
Abstract:
The purpose of this study is to introduce a new interface program for calculating dose distributions with the Monte Carlo method in complex heterogeneous systems, such as organs or tissues, in proton therapy. The interface program was developed in MATLAB and includes a friendly graphical user interface with several tools, such as image property adjustment and results display. The quadtree decomposition technique was used as an image segmentation algorithm to create optimal geometries from computed tomography (CT) images for proton beam dose calculations. The result of this technique is a set of non-overlapping squares of different sizes in every image. In this way, the resolution of the image segmentation is high enough in and near heterogeneous areas to preserve the precision of the dose calculations and low enough in homogeneous areas to directly reduce the number of cells. Furthermore, a cell reduction algorithm can be used to combine neighboring cells of the same material. The validation of this method has been done in two ways: first, in comparison with experimental data obtained with an 80 MeV proton beam at the Cyclotron and Radioisotope Center (CYRIC) of Tohoku University, and second, in comparison with data based on the polybinary tissue calibration method, also performed at CYRIC. These results are presented in this paper. The program can read the output file of the Monte Carlo code while the region of interest is selected manually, and it gives a plot of the proton beam dose distribution superimposed onto the CT images.
Keywords: Monte Carlo, CT images, Quadtree decomposition, Interface program, Proton beam
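An illustrative quadtree decomposition in the spirit described above (the actual program is a MATLAB GUI): a square block is split recursively while the intensity range inside it exceeds a homogeneity tolerance, so heterogeneous regions end up with small cells and homogeneous regions with large ones. The synthetic image, tolerance and minimum block size are placeholders, not the study's CT data or settings.

```python
import numpy as np

def quadtree(img, x, y, size, tol=0.1, min_size=2, blocks=None):
    """Split a square block while its intensity range exceeds tol; return (x, y, size) leaves."""
    if blocks is None:
        blocks = []
    block = img[y:y + size, x:x + size]
    if size > min_size and (block.max() - block.min()) > tol:
        half = size // 2
        for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
            quadtree(img, x + dx, y + dy, half, tol, min_size, blocks)
    else:
        blocks.append((x, y, size))
    return blocks

# Synthetic 64x64 "CT slice": homogeneous background with a denser circular insert
yy, xx = np.mgrid[0:64, 0:64]
phantom = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2, 1.0, 0.2)
phantom += 0.01 * np.random.default_rng(0).random((64, 64))

leaves = quadtree(phantom, 0, 0, 64)
print("number of cells:", len(leaves), "| smallest cell:", min(s for _, _, s in leaves))
```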
127 Haemocompatibility of Surface Modified AISI 316L Austenitic Stainless Steel Tested in Artificial Plasma
Authors: W. Walke, J. Przondziono, K. Nowińska
Abstract:
The study evaluates the suitability of the passive layer created on the surface of AISI 316L stainless steel for products intended to be in contact with blood. For that purpose, prior to and after chemical passivation, samples were subjected to a 7-day exposure in artificial plasma at a temperature of T = 37°C. Next, tests of the infiltration of metallic ions from the surface into the solution were performed. The tests were carried out with a JY 2000 spectrometer by Jobin-Yvon, employing inductively coupled plasma atomic emission spectrometry (ICP-AES). In order to characterize the physical and chemical features of the electrochemical processes taking place during the exposure of the samples to artificial plasma, electrochemical impedance spectroscopy tests were proposed. These tests were performed with a measuring unit equipped with a PGSTAT 302N potentiostat and an FRA2 attachment for impedance tests. Measurements were made in an environment simulating human blood at a temperature of T = 37°C. The tests proved that applying the chemical passivation process to AISI 316L stainless steel used for the production of goods intended to be in contact with blood is well grounded and useful for improving the safety of their use.
Keywords: AISI 316L stainless steel, chemical passivation, artificial plasma, ions infiltration, EIS.
126 3D Rendering of American Sign Language Finger-Spelling: A Comparative Study of Two Animation Techniques
Authors: Nicoletta Adamo-Villani
Abstract:
In this paper we report a study aimed at determining the most effective animation technique for representing ASL (American Sign Language) finger-spelling. Specifically, we compare two commonly used 3D computer animation methods (keyframe animation and motion capture) in order to ascertain which technique produces the most 'accurate', 'readable', and 'close to actual signing' (i.e. realistic) rendering of ASL finger-spelling. To accomplish this goal, we developed 20 animated clips of finger-spelled words and designed an experiment consisting of a web survey with rating questions. Seventy-one subjects aged 19-45 participated in the study. The results showed that recognition of the words was correlated with the method used to animate the signs. In particular, the keyframe technique produced the most accurate representation of the signs (i.e., participants were more likely to identify the words correctly in keyframed sequences than in motion-captured ones). Further, the findings showed that the animation method had an effect on the reported scores for readability and closeness to actual signing; the estimated marginal mean readability and closeness were greater for keyframed signs than for motion-captured signs. To our knowledge, this is the first study aimed at measuring and comparing the accuracy, readability and realism of ASL animations produced with different techniques.
Keywords: 3D Animation, American Sign Language, Deaf Education, Motion Capture.