Search results for: non-contact measuring systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4823

2963 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

Authors: Mamta Garg

Abstract:

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression, images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error (RMSE) and the peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both color components are downsampled simultaneously; in this paper, we compare the results when the compression is done by downsampling only a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled in JPEG compression, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled than when chrominance blue is downsampled. In particular, we use the hats.jpg image as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visible difference under either method.
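
As a concrete illustration of the metrics discussed above, the following sketch computes RMSE and PSNR and performs a simple block-average downsampling of a single chroma plane; the random 256x256 Cb plane, the 2x2 block size, and the 8-bit peak of 255 are assumptions for demonstration and do not reproduce the paper's JPEG pipeline.

```python
import numpy as np

def rmse(original, reconstructed):
    """Root mean square error between two images of equal shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    err = rmse(original, reconstructed)
    return float("inf") if err == 0 else 20.0 * np.log10(peak / err)

def downsample_chroma(channel, factor=2):
    """Average non-overlapping factor x factor blocks of one chroma plane,
    then repeat the result back to the original size (a lossy round trip)."""
    h, w = channel.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the factor
    blocks = channel[:h, :w].reshape(h // factor, factor, w // factor, factor)
    small = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cb = rng.integers(0, 256, (256, 256)).astype(np.uint8)   # stand-in for a Cb plane
    cb_rec = downsample_chroma(cb).astype(np.uint8)
    print("RMSE:", rmse(cb, cb_rec), "PSNR (dB):", psnr(cb, cb_rec))
```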

Keywords: JPEG, Discrete Cosine Transform, Quantization, Color Space Conversion, Image Compression, Peak Signal to Noise Ratio & Compression Ratio.

2962 Using Teager Energy Cepstrum and HMM Distances in Automatic Speech Recognition and Analysis of Unvoiced Speech

Authors: Panikos Heracleous

Abstract:

In this study, the use of a silicon NAM (Non-Audible Murmur) microphone in automatic speech recognition is presented. NAM microphones are special acoustic sensors which are attached behind the talker's ear and can capture not only normal (audible) speech but also very quietly uttered speech (non-audible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech conversion, etc.) for sound-impaired people. Using a small amount of training data and adaptation approaches, 93.9% word accuracy was achieved for a 20k-vocabulary Japanese dictation task. Non-audible murmur recognition in noisy environments is also investigated. In this study, further analysis of NAM speech has been made using distance measures between hidden Markov model (HMM) pairs. Using a metric distance, it has been shown that the spectral space of NAM speech is reduced; however, the locations of the different NAM phonemes are similar to those of the phonemes of normal speech, and the NAM sounds are well discriminated. Promising results in using nonlinear features are also introduced, especially under noisy conditions.
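
Since the title refers to the Teager energy cepstrum, a minimal sketch of the discrete Teager energy operator that underlies such nonlinear features is given below; the sampling rate, the synthetic 440 Hz tone, and the frame length are illustrative assumptions, not the authors' feature extraction setup.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    Returns an array two samples shorter than the input."""
    x = np.asarray(x, dtype=np.float64)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

if __name__ == "__main__":
    fs = 16000.0                       # assumed sampling rate
    t = np.arange(0, 0.02, 1.0 / fs)   # one 20 ms frame
    frame = 0.5 * np.sin(2 * np.pi * 440.0 * t)
    psi = teager_energy(frame)
    # For a pure tone A*sin(2*pi*f*t), psi is approximately (A*2*pi*f/fs)^2.
    print("mean Teager energy:", psi.mean())
    print("analytic approximation:", (0.5 * 2 * np.pi * 440.0 / fs) ** 2)
```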

Keywords: Speech recognition, unvoiced speech, nonlinear features, HMM distance measures

2961 Analysis of Foaming Flow Instabilities for Dynamic Liquid Saturation in Trickle Bed Reactor

Authors: Vijay Sodhi, Ajay Bansal

Abstract:

The effects of different parameters on the hydrodynamics of trickle bed reactors are discussed for Newtonian and non-Newtonian foaming systems. The parameters varied are liquid velocity, gas flow velocity, and surface tension. The range of gas velocity is particularly large, thanks to the use of dense gas to simulate very high pressure conditions. This data bank has been used to compare the prediction accuracy of the different trendlines and transition points from the literature. More than 240 experimental points for the trickle flow (GCF) and foaming pulsing flow (PF/FPF) regimes were obtained for the present study. Hydrodynamic characteristics involving dynamic liquid saturation are significantly influenced by gas and liquid flow rates. For the 15 and 30 ppm air-aqueous surfactant solutions, dynamic liquid saturation decreases considerably with higher liquid and gas flow rates in the high-interaction regime. With a decrease in surface tension, i.e., for the 45 and 60 ppm air-aqueous surfactant systems, the effect is more pronounced, with dynamic liquid saturation decreasing very sharply during regime transition at both low liquid and gas flow rates.

Keywords: Trickle Bed Reactor, Dynamic Liquid Saturation, Foaming, Flow Regime Transition

2960 Complex Network Approach to International Trade of Fossil Fuel

Authors: Semanur Soyyiğit Kaya, Ercan Eren

Abstract:

Energy has a prominent role in the development of nations. Countries which have energy resources also have strategic power in the international trade of energy, since energy is essential for all stages of production in the economy. Thus, it is important for countries to analyze the weaknesses and strengths of the system. On the other hand, international trade is one of the fields that are analyzed as a complex network via network analysis. A complex network is one of the tools used to analyze complex systems with heterogeneous agents and interactions between them. A complex network consists of nodes and the interactions between these nodes. In complex systems, the aggregate properties which emerge as a result of these interactions are distinct from the sum of the (more or less) small parts. Thus, standard approaches to international trade are too superficial to analyze these systems. Network analysis provides a new approach to analyze international trade as a network. In this network, countries constitute the nodes and trade relations (export or import) constitute the edges. It becomes possible to analyze the international trade network in terms of high-level indicators which are specific to complex networks, such as connectivity, clustering, assortativity/disassortativity, centrality, etc. In this analysis, the international trade of crude oil and coal, which are types of fossil fuel, has been analyzed from 2005 to 2014 via network analysis. First, it has been analyzed in terms of some topological parameters such as density, transitivity, clustering, etc. Afterwards, the fit to a Pareto distribution has been analyzed via the Kolmogorov-Smirnov test. Finally, the weighted HITS algorithm has been applied to the data as a centrality measure to determine the real prominence of countries in these trade networks. The weighted HITS algorithm is a strong tool for analyzing the network by ranking countries with regard to the prominence of their trade partners. We have calculated both an export centrality and an import centrality by applying the w-HITS algorithm to the data. As a result, the impacts of the trading countries are presented in terms of these high-level indicators.
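
To illustrate the weighted HITS centrality described above, the sketch below runs a plain power iteration on a small weighted trade matrix; the three-country flows, the tolerance, and the iteration limit are assumptions for demonstration only.

```python
import numpy as np

def weighted_hits(W, tol=1e-10, max_iter=1000):
    """Weighted HITS on a trade matrix W, where W[i, j] is the export flow
    from country i to country j. Returns (hub, authority) scores, read here
    as export-side and import-side centralities respectively."""
    n = W.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(max_iter):
        new_auth = W.T @ hub          # strong importers buy from strong exporters
        new_auth /= np.linalg.norm(new_auth)
        new_hub = W @ new_auth        # strong exporters sell to strong importers
        new_hub /= np.linalg.norm(new_hub)
        converged = np.allclose(new_hub, hub, atol=tol) and np.allclose(new_auth, auth, atol=tol)
        hub, auth = new_hub, new_auth
        if converged:
            break
    return hub, auth

if __name__ == "__main__":
    # Hypothetical export flows (rows = exporters, columns = importers).
    W = np.array([[0.0, 5.0, 2.0],
                  [1.0, 0.0, 4.0],
                  [0.5, 3.0, 0.0]])
    hub, auth = weighted_hits(W)
    print("export (hub) centrality:      ", np.round(hub, 3))
    print("import (authority) centrality:", np.round(auth, 3))
```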

Keywords: Complex network approach, fossil fuel, international trade, network theory.

2959 Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks

Authors: Changju Lee, Seungmo Ku, Sondo Kim, Woojin Chang

Abstract:

How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century. However, financial bubbles are hard to observe and vary over time, so more research in this area is needed. In this paper, we used the abnormal difference between the prices of common stocks and their corresponding preferred stocks to explain financial bubbles. First, we proposed the ‘W-index’, which indicates the spread between common stocks and their preferred stocks in the stock market. Second, to show that this ‘W-index’ is valid for measuring financial bubbles, we demonstrated that there is an inverse relationship between the ‘W-index’ and the S&P500 rate of return. Specifically, our hypothesis is that when the ‘W-index’ is comparably higher than in other periods, financial bubbles are building up in the stock market, and vice versa; according to this hypothesis, if investors made long-term investments when the ‘W-index’ was high, they would have a negative rate of return, whereas if they made long-term investments when the ‘W-index’ was low, they would have a positive rate of return. By comparing the correlation and adjusted R-squared values between the W-index and S&P500 returns, the VIX index and S&P500 returns, and the TED index and S&P500 returns, we showed that only the W-index has a significant relationship with the S&P500 rate of return. In addition, we determined how long investors should hold their investment positions with regard to the effect of financial bubbles. Using this W-index, investors can measure financial bubbles in the market and invest with low risk.
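
A minimal sketch of the kind of computation behind the W-index is given below: it averages the relative price gap between paired common and preferred stocks and relates it to forward index returns; the synthetic price series, the gap definition, and the 20-day holding period are illustrative assumptions rather than the authors' exact specification.

```python
import numpy as np

def w_index(common_prices, preferred_prices):
    """Hypothetical spread measure: relative gap between the prices of common
    stocks and their paired preferred stocks, averaged cross-sectionally."""
    gap = (common_prices - preferred_prices) / preferred_prices
    return gap.mean(axis=1)          # one value per time step

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, n_pairs = 500, 20
    pref = 100 + np.cumsum(rng.normal(0, 1, (T, n_pairs)), axis=0)
    comm = pref * (1.2 + 0.1 * rng.normal(0, 1, (T, n_pairs)))   # common trades above preferred
    idx = 1000 + np.cumsum(rng.normal(0, 5, T))                  # stand-in for an equity index

    w = w_index(comm, pref)
    horizon = 20                                                  # assumed holding period (days)
    fwd_ret = idx[horizon:] / idx[:-horizon] - 1.0                # forward index return
    w_now = w[:-horizon]

    corr = np.corrcoef(w_now, fwd_ret)[0, 1]
    slope, intercept = np.polyfit(w_now, fwd_ret, 1)
    print(f"corr(W-index, {horizon}-day forward return) = {corr:.3f}")
    print(f"regression slope = {slope:.4f}, intercept = {intercept:.4f}")
```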

Keywords: Financial bubbles, detection, preferred stocks, pairs trading, future return, forecast.

2958 Genetic Diversity Based Population Study of Freshwater Mud Eel (Monopterus cuchia) in Bangladesh

Authors: M. F. Miah, K. M. A. Zinnah, M. J. Raihan, H. Ali, M. N. Naser

Abstract:

As genetic diversity is most important for the existence, breeding and production of any fish, this study was undertaken to investigate the genetic diversity of the freshwater mud eel, Monopterus cuchia, at the population level, where three ecological populations, namely the flooded area of Sylhet (P1), the open water of Moulvibazar (P2) and the open water of Sunamganj (P3) districts of Bangladesh, were considered. Four arbitrary RAPD primers (OPB-12, C0-4, B-03 and OPB-08) were screened and RAPD banding patterns were analyzed among the populations, considering 15 individuals of each population. In total, 174, 138 and 149 bands were detected in populations P1, P2 and P3 respectively; however, each primer revealed a smaller number of bands in each population. 100% polymorphic loci were recorded in P2 and P3, whereas only one monomorphic locus was observed in P1, which recorded 97.5% polymorphism. Different genetic parameters such as inter-individual pairwise similarity, genetic distance, Nei genetic similarity, linkage distances, cluster analysis and allelic information were considered for measuring genetic diversity. The average inter-individual pairwise similarity was recorded as 2.98, 1.47 and 1.35 in P1, P2 and P3 respectively. In the genetic distance analysis, the highest distance, 1, was recorded in P2 and P3, and the lowest genetic distance, 0.444, was found in P2. The average Nei genetic similarity was observed to be 0.19, 0.16 and 0.13 in P1, P2 and P3, respectively; the average linkage distance was recorded as 24.92, 17.14 and 15.28 in P1, P3 and P2 respectively. Based on linkage distance, genetic clusters were generated for the three populations, where 6 clades and 7 clusters were found in P1, 3 clades and 5 clusters were observed in P2, and 4 clades and 7 clusters were detected in P3. In addition, allelic information was observed, where the frequencies of the p and q alleles were 0.093 and 0.907 in P1, 0.076 and 0.924 in P2, and 0.074 and 0.926 in P3, respectively. The average gene diversity was observed to be highest in P2 (0.132), followed by P3 (0.131) and P1 (0.121).
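
For readers unfamiliar with how RAPD band profiles yield similarity values, the sketch below computes the Nei and Li (Dice) similarity and the corresponding distance from presence/absence data; the tiny example matrix is hypothetical and unrelated to the 15 individuals scored per population in this study.

```python
import numpy as np

def nei_li_similarity(a, b):
    """Nei & Li (Dice) similarity for two 0/1 RAPD band profiles:
    S = 2 * n_ab / (n_a + n_b), where n_ab is the number of shared bands."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    n_ab = np.sum(a & b)
    n_a, n_b = np.sum(a), np.sum(b)
    return 2.0 * n_ab / (n_a + n_b) if (n_a + n_b) else 0.0

if __name__ == "__main__":
    # Hypothetical band presence/absence profiles (rows = individuals, columns = loci).
    bands = np.array([[1, 0, 1, 1, 0, 1],
                      [1, 1, 1, 0, 0, 1],
                      [0, 1, 0, 1, 1, 0]])
    n = bands.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            s = nei_li_similarity(bands[i], bands[j])
            print(f"individuals {i}-{j}: similarity = {s:.3f}, distance = {1.0 - s:.3f}")
```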

Keywords: Genetic diversity, Monopterus cuchia, population, RAPD, Bangladesh.

2957 Network Based Intrusion Detection and Prevention Systems in IP-Level Security Protocols

Authors: R. Kabila

Abstract:

IPsec has now become a standard information security technology throughout the Internet society. It provides a well-defined architecture that takes into account confidentiality, authentication, integrity, secure key exchange, and protection against replay attacks. For connectionless security services on a per-packet basis, the IETF IPsec Working Group has standardized two extension headers (AH and ESP) as well as key exchange and authentication protocols. It is also working on a lightweight key exchange protocol and MIBs for security management. IPsec technology has been implemented on various platforms in IPv4 and IPv6, gradually replacing old application-specific security mechanisms. IPv4 and IPv6 are not directly compatible, so programs and systems designed for one standard cannot communicate with those designed for the other. We propose the design and implementation of a controlled Internet security system, an IPsec-based Internet information security system for IPv4/IPv6 networks, and we also present performance measurement data. With IPv6 features such as improved scalability and routing, security, ease of configuration, and higher performance, the controlled Internet security system provides a consistent security policy and integrated security management for an IPsec-based Internet security system.

Keywords: IDS, IPS, IP-Sec, IPv6, IPv4, VPN.

2956 Fault Detection and Isolation using RBF Networks for Polymer Electrolyte Membrane Fuel Cell

Authors: Mahanijah Md Kamal, Dingli Yu

Abstract:

This paper presents a new method of fault detection and isolation (FDI) for polymer electrolyte membrane (PEM) fuel cell (FC) dynamic systems under an open-loop scheme. The method uses a radial basis function (RBF) neural network to perform fault identification, classification and isolation. The novelty is that an RBF model in independent mode is used to predict the future outputs of the FC stack. One actuator fault, one component fault and three sensor faults, with fault sizes between -7% and +10%, have been introduced to the PEMFC system in real-time operation. To validate the results, a benchmark model developed by Michigan University is used in the simulation to investigate the effect of these five faults. The developed independent RBF model is tested in the MATLAB R2009a/Simulink environment. The simulation results confirm the effectiveness of the proposed method for FDI under open-loop conditions. Using this method, the RBF networks are able to detect and isolate all five faults accurately.
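
A minimal sketch of a Gaussian RBF model used for residual-based fault detection is shown below; the centre selection, Gaussian width, synthetic input-output data, and detection threshold are illustrative assumptions and not the identification procedure applied to the PEMFC stack.

```python
import numpy as np

class RBFModel:
    """Gaussian radial basis function (RBF) network with a least-squares output layer."""

    def __init__(self, centers, width):
        self.centers = np.asarray(centers, dtype=float)   # (n_centers, n_inputs)
        self.width = float(width)                         # common Gaussian width
        self.weights = None

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        self.weights, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.weights

def fault_flag(residual, threshold):
    """Flag a fault when the prediction residual exceeds the threshold."""
    return abs(residual) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.uniform(-1.0, 1.0, (300, 2))          # stand-in for stack operating inputs
    y = X[:, 0] ** 2 + 0.5 * X[:, 1]              # stand-in for a measured stack output
    centers = X[rng.choice(len(X), 40, replace=False)]
    model = RBFModel(centers, width=0.6).fit(X, y)

    x_new = np.array([[0.8, 0.5]])
    y_true = 0.8 ** 2 + 0.5 * 0.5                 # healthy measurement
    y_fault = y_true * 1.10                       # +10% sensor fault, within the range above
    threshold = 0.05                              # assumed detection threshold
    for label, measured in (("healthy", y_true), ("+10% sensor fault", y_fault)):
        residual = measured - model.predict(x_new)[0]
        print(f"{label}: residual = {residual:+.4f}, fault flagged: {fault_flag(residual, threshold)}")
```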

Keywords: Polymer electrolyte membrane fuel cell, radial basis function neural networks, fault detection, fault isolation.

2955 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

The task of object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces a large error, or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows the potential distances between objects to be estimated by measuring received signal levels, and a graph of distances to be constructed in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide the localization routines with valuable additional information to narrow the node location search. However, despite an abundance of well-known algorithms for solving the localization problem and significant research efforts, there are still many issues that are currently addressed only partially. In this paper, we propose a localization approach based on mapping the distance graph onto digital road map data. In fact, the problem is reduced to embedding the distance graph into the graph representing the geolocation data of the area. This makes it possible to localize objects in some cases even if only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines, and geometry functions.
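
As a simplified illustration of the first step described above, the sketch below converts received signal levels into distance estimates with a log-distance path-loss model and stores them as a distance graph; the path-loss exponent, reference power, and pairwise readings are assumptions, and the road-map embedding itself is not reproduced here.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.7):
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def build_distance_graph(rssi_measurements):
    """rssi_measurements maps a node pair (a, b) to a measured RSSI in dBm.
    Returns the distance graph as {(a, b): estimated_distance_in_m}."""
    return {pair: rssi_to_distance(rssi) for pair, rssi in rssi_measurements.items()}

if __name__ == "__main__":
    # Hypothetical pairwise RSSI readings between one anchor and three unknown nodes.
    readings = {
        ("anchor", "n1"): -62.0,
        ("anchor", "n2"): -71.0,
        ("n1", "n2"): -55.0,
        ("n1", "n3"): -67.0,
        ("n2", "n3"): -59.0,
    }
    graph = build_distance_graph(readings)
    for (a, b), d in graph.items():
        print(f"{a} -- {b}: ~{d:.1f} m")
```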

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.

2954 Cost Analysis of Hybrid Wind Energy Generating System Considering CO2 Emissions

Authors: M. A. Badr, M.N. El Kordy, A. N. Mohib, M. M. Ibrahim

Abstract:

The basic objective of this research is to study the effect of a hybrid wind energy system on the cost of generated electricity, considering the cost of reducing CO2 emissions. The system consists of small wind turbine(s), a storage battery bank and a diesel generator (W/D/B). Using an optimization software package, different system configurations are investigated to reach the optimum configuration based on the net present cost (NPC) and cost of energy (COE) as economic optimization criteria. The cost of avoided CO2 is taken into consideration. The system is intended to supply the electrical load of a small community (comprising six families) in a remote Egyptian area. The investigated system is not connected to the electricity grid and may replace an existing conventional diesel-powered electric supply system to reduce fuel consumption and CO2 emissions. The simulation results showed that the W/D energy system is more economical than diesel alone. The estimated COE is 0.308 $/kWh; after deducting the cost of avoided CO2, the COE reaches 0.226 $/kWh, which represents an external benefit of the wind turbine, as there are no pollutant emissions during its operational phase.
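
To make the economic criteria concrete, the sketch below computes a net present cost and a cost of energy for a generic hybrid system from annualized cash flows; the discount rate, lifetime, cost figures, energy served, and CO2 credit are placeholder assumptions, not the values of the Egyptian case study.

```python
def net_present_cost(capital, annual_cost, discount_rate, lifetime_years):
    """NPC = capital cost + sum of discounted annual operating costs."""
    discounted = sum(annual_cost / (1.0 + discount_rate) ** t
                     for t in range(1, lifetime_years + 1))
    return capital + discounted

def cost_of_energy(npc, discount_rate, lifetime_years, annual_energy_kwh):
    """COE = annualized NPC divided by the energy served per year."""
    crf = (discount_rate * (1.0 + discount_rate) ** lifetime_years /
           ((1.0 + discount_rate) ** lifetime_years - 1.0))     # capital recovery factor
    return npc * crf / annual_energy_kwh

if __name__ == "__main__":
    # Placeholder figures for a small wind/diesel/battery system.
    npc = net_present_cost(capital=60000.0, annual_cost=4500.0,
                           discount_rate=0.06, lifetime_years=20)
    coe = cost_of_energy(npc, 0.06, 20, annual_energy_kwh=30000.0)

    co2_credit_per_kwh = 0.02         # assumed value of avoided CO2 emissions, $/kWh
    print(f"NPC = {npc:,.0f} $")
    print(f"COE = {coe:.3f} $/kWh")
    print(f"COE net of CO2 credit = {coe - co2_credit_per_kwh:.3f} $/kWh")
```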

Keywords: Hybrid wind turbine systems, remote areas electrification, simulation of hybrid energy systems, techno-economic study.

2953 A Knowledge-Based E-mail System Using Semantic Categorization and Rating Mechanisms

Authors: Azleena Mohd Kassim, Muhamad Rashidi A. Rahman, Yu-N. Cheah

Abstract:

Knowledge-based e-mail systems focus on incorporating a knowledge management approach in order to enhance traditional e-mail systems. In this paper, we present a knowledge-based e-mail system called KS-Mail, where people not only send and receive e-mail conventionally but are also able to create a sense of knowledge flow. We introduce semantic processing of the e-mail contents by automatically assigning categories and providing links to semantically related e-mails. This is done to enrich the knowledge value of each e-mail as well as to ease the organization of the e-mails and their contents. At the application level, we have also built components like the service manager, evaluation engine and search engine to handle the e-mail processes efficiently by providing the means to share and reuse knowledge. For this purpose, we present the KS-Mail architecture, and elaborate on the details of the e-mail server and the application server. We present the ontology mapping technique used to achieve the categorization of the e-mail content, as well as the protocols that we have developed to handle the transactions in the e-mail system. Finally, we discuss further the implementation of the modules presented in the KS-Mail architecture.

Keywords: E-mail rating, knowledge-based system, ontology mapping, text categorization.

2952 Establishment of Kinetic Zone Diagrams via Simulated Linear Sweep Voltammograms for Soluble-Insoluble Systems

Authors: Imene Atek, Abed M. Affoune, Hubert Girault, Pekka Peljo

Abstract:

Due to the need for a rigorous mathematical model that can help to estimate kinetic properties of soluble-insoluble systems through voltammetric experiments, a Nicholson semi-analytical approach was used in this work for the modeling and prediction of theoretical linear sweep voltammetry responses for reversible, quasi-reversible or irreversible electron transfer reactions. The redox system of interest is a one-step metal electrodeposition process. A rigorous analysis of simulated linear scan voltammetric responses following variation of the dimensionless factors, the rate constant and the charge transfer coefficients over a broad range was carried out and presented in the form of so-called kinetic zone diagrams. These diagrams were divided into three kinetic zones. Interpreting these zones leads to empirical mathematical models which allow the experimenter to determine the kinetics of electrodeposition reactions whatever the degree of reversibility. The validity of the obtained results was tested, and an excellent experiment–theory agreement has been shown.

Keywords: Electrodeposition, kinetics diagrams, modeling, voltammetry.

2951 Bone Mineral Density and Frequency of Low-Trauma Fractures in Ukrainian Women with Metabolic Syndrome

Authors: Vladyslav Povoroznyuk, Larysa Martynyuk, Iryna Syzonenko, Liliya Martynyuk

Abstract:

Osteoporosis is one of the important problems in postmenopausal women due to an increased risk of sudden and unexpected fractures. This study aims to determine the connection between bone mineral density (BMD) and trabecular bone score (TBS) in Ukrainian women suffering from metabolic syndrome. In the study, 566 menopausal women aged 50-79 years were examined and divided into two groups: Group A included 336 women without obesity (BMI ≤ 29.9 kg/m²), and Group B included 230 women with metabolic syndrome (diagnosed according to the IDF criteria, 2005). Dual-energy X-ray absorptiometry was used for measuring lumbar spine (L1-L4), femoral neck, total body and forearm BMD and bone quality indexes (the latter according to the Med-Imaps installation). Data were analyzed using Statistical Package 6.0. A significant increase of lumbar spine (L1-L4), femoral neck, total body and ultradistal radius BMD was found in women with metabolic syndrome compared to those without obesity (p < 0.001), both in the whole sample and in the 50-59, 60-69, and 70-79 year age groups. TBS was significantly higher in non-obese women compared to metabolic syndrome patients aged 50-59 years and in the general sample (p < 0.05). The analysis showed a significant positive correlation between body mass index (BMI) and BMD at all levels. A significant negative correlation between BMI and TBS (L1-L4) was established. Despite the fact that BMD indexes were significantly higher in women with metabolic syndrome, the frequency of vertebral and non-vertebral fractures did not differ significantly between the groups of patients.

Keywords: Bone mineral density, trabecular bone score, metabolic syndrome, fracture.

2950 Molecular Dynamics Study on Mechanical Responses of Circular Graphene Nanoflake under Nanoindentation

Authors: Jeong-Won Kang

Abstract:

Graphene, a single-atom-thick sheet, has been considered the most promising material for making future nanoelectromechanical systems as well as purely electrical switching with graphene transistors. Graphene-based devices have advantages in scaled-up device fabrication due to recent progress in large-area graphene growth and lithographic patterning of graphene nanostructures. Here, we investigated the mechanical responses of a circular graphene nanoflake under nanoindentation using classical molecular dynamics simulations. A correlation between the load and the indentation depth was constructed. The nanoindentation force in this work was applied to the center point of the circular graphene nanoflake, and the resonance frequency could then be tuned by the nanoindentation depth. We found hardening or softening of the graphene nanoflake during its nanoindentation deflections, and such properties were recognized by the shift of the resonance frequency. The calculated mechanical parameters in the force-versus-deflection plot were in good agreement with previous experimental and theoretical works. The proposed scheme can detect pressure via the deflection change and/or the resonance frequency shift, and also has great potential for versatile applications in nanoelectromechanical systems.

Keywords: Graphene, pressure sensor, circular graphene nanoflake, molecular dynamics.

2949 Template Design Packages for Repetitive Construction Projects

Authors: Ali Youniss Aidbaiss, G. Unnikrishnan, Anoob Hakim

Abstract:

Scope changes, scope creep, and cost and time overruns have become common in projects in the oil and gas sector. Even in repetitive projects, failure to implement lessons learnt and correct past mistakes has resulted in various setbacks. This paper describes the concept of reusing successfully implemented design packages as templates for repetitive projects, thereby lowering the incidence of project failures. Units or systems successfully installed in projects can be identified and taken up for preparing template design packages. Standardization of units and systems helps to develop templates from successful designs which can be repeatedly used with confidence. These packages can be used with minimum modifications for developing FEED packages faster, saving cost and other valuable resources. Lessons learnt from completed projects incorporated in the templates avoid repeating past mistakes during detailed design, procurement and execution. With template packages, consistent quality can be maintained for similar projects, avoiding scope creep and scope changes, which will ultimately result in cost and time savings.

Keywords: Engineering work package, repetitive construction, template design package, time saving in projects.

2948 A Design Framework for Event Recommendation in Novice Low-Literacy Communities

Authors: Yimeng Deng, Klarissa T.T. Chang

Abstract:

The proliferation of user-generated content (UGC) results in huge opportunities to explore event patterns. However, existing event recommendation systems primarily focus on advanced information technology users. Little work has been done to address novice and low-literacy users. The next billion users providing and consuming UGC are likely to include communities from developing countries who are ready to use affordable technologies for subsistence goals. Therefore, we propose a design framework for providing event recommendations to address the needs of such users. Grounded in information integration theory (IIT), our framework advocates that effective event recommendation is supported by systems capable of (1) reliable information gathering through structured user input, (2) accurate sense making through spatial-temporal analytics, and (3) intuitive information dissemination through interactive visualization techniques. A mobile pest management application is developed as an instantiation of the design framework. Our preliminary study suggests a set of design principles for novice and low-literacy users.

Keywords: Event recommendation, iconic interface, information integration, spatial-temporal clustering, user-generated content, visualization techniques

2947 Importance of Mobile Technology in Successful Adoption and Sustainability of a Chronic Disease Support System

Authors: Reza Ariaeinejad, Norm Archer

Abstract:

Self-management is becoming a new emphasis for healthcare systems around the world, but there are many problems with the adoption of new health-related intervention systems. The situation is even more complicated for chronically ill patients with disabilities, illiteracy, or impairment in judgment in addition to their conditions, or with multiple co-morbidities. Providing online decision support to manage patient health and to better support chronically ill patients is a new way of dealing with chronic disease management. In this study, the importance of mobile technology is discussed through an m-Health system that supports self-management interventions including the care provider, family and social support, education and training, decision support, recreation, and ongoing patient motivation to promote adherence and sustainability of the intervention. A proposed theoretical model for adoption and sustainability of system use is developed, based on the UTAUT2 and IS Continuance of Use models, both of which have been pre-validated through longitudinal studies. The objective of this paper is to show the importance of using mobile technology in the adoption and sustained use of an m-Health system, which will result in commercially sustainable self-management support for chronically ill patients.

Keywords: M-health, e-health, self-management, disease.

2944 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with actual birth weight. The sample group consisted of 126 infants delivered in Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected by studying historical delivery records and then analyzed using the statistics of frequency, percentage, mean, and standard deviation. Finally, the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3,455 ± 454.55 g, which is 384.09 g higher than the average actual birth weight. When the infants were classified according to birth weight, it was found that for the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups, the actual birth weight was less than the estimated fetal weight, whereas for the high birth weight (>4000 g) group, the actual birth weight was more than the estimated fetal weight. The smallest difference between actual birth weight and estimated fetal weight was found in the high birth weight group (>4000 g), followed by the appropriate birth weight (2500-3999 g) and low birth weight (<2500 g) groups, respectively. The rate of estimated fetal weight within 10% of actual birth weight was 35.7%. When actual birth weights were compared with the estimates, the difference was statistically significant (p < .000). Employing Johnson's method to estimate fetal weight can provide an initial estimate before proceeding to special examinations, which may involve excessively high costs. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
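
For reference, Johnson's formula in its commonly cited form estimates fetal weight in grams as (fundal height in cm - n) x 155, where n depends on the station of the presenting part; the short sketch below implements this form, and the example measurement is hypothetical.

```python
def johnson_fetal_weight(fundal_height_cm, station_above_spines=True):
    """Johnson's formula (commonly cited form):
    estimated fetal weight (g) = (fundal height in cm - n) * 155,
    with n = 12 if the presenting part is at or above the ischial spines, else n = 11."""
    n = 12 if station_above_spines else 11
    return (fundal_height_cm - n) * 155.0

if __name__ == "__main__":
    # Hypothetical measurement: fundal height of 34 cm, vertex above the spines.
    efw = johnson_fetal_weight(34.0, station_above_spines=True)
    print(f"Estimated fetal weight: {efw:.0f} g")   # (34 - 12) * 155 = 3410 g
```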

Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.

2945 Assessment of Drug Delivery Systems from Molecular Dynamic Perspective

Authors: M. Rahimnejad, B. Vahidi, B. Ebrahimi Hoseinzadeh, F. Yazdian, P. Motamed Fath, R. Jamjah

Abstract:

In this study, we developed and simulated nano-drug delivery systems and compared their efficacy to free drug prescription. Computational models can be utilized to accelerate experimental steps and control the high cost of experiments. Molecular dynamics simulation (MDS), in particular NAMD, was utilized to better understand the interaction of an anti-cancer drug with a cell membrane model. Paclitaxel (PTX) and dipalmitoylphosphatidylcholine (DPPC) were selected as the drug molecule and as a natural phospholipid nanocarrier, respectively. This work focused on two important interaction parameters between the molecules: the center of mass (COM) and the van der Waals interaction energy. Furthermore, we compared the simulation results of the interaction of PTX with the cell membrane and the interaction of DPPC, as a nanocarrier loaded with the drug, with the cell membrane. The molecular dynamics analysis showed low interaction energy between the nanocarrier and the cell membrane, as well as a significant decrease in the COM separation between the nanocarrier and the cell membrane during the interaction. Thus, the drug vehicle showed a notably better interaction with the cell membrane compared to the free drug.
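
Because the analysis tracks the center-of-mass separation between the drug or carrier and the membrane, a small sketch of that calculation from atomic coordinates and masses is given below; the random coordinates and uniform masses are placeholders rather than trajectory frames from NAMD.

```python
import numpy as np

def center_of_mass(coords, masses):
    """Mass-weighted average position of a group of atoms.
    coords: (n_atoms, 3) array in nm; masses: (n_atoms,) array in amu."""
    masses = np.asarray(masses, dtype=np.float64)
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

def com_distance(coords_a, masses_a, coords_b, masses_b):
    """Distance between the centers of mass of two atom groups."""
    return np.linalg.norm(center_of_mass(coords_a, masses_a) -
                          center_of_mass(coords_b, masses_b))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    drug_xyz = rng.normal([0.0, 0.0, 3.0], 0.3, (50, 3))       # placeholder drug atoms
    membrane_xyz = rng.normal([0.0, 0.0, 0.0], 0.5, (500, 3))  # placeholder membrane atoms
    d = com_distance(drug_xyz, np.full(50, 12.0), membrane_xyz, np.full(500, 14.0))
    print(f"COM separation: {d:.2f} nm")
```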

Keywords: Anti-cancer drug, center of mass, interaction energy, molecular dynamics simulation, nanocarrier.

2944 Dimensional Modeling of HIV Data Using Open Source

Authors: Charles D. Otine, Samuel B. Kucel, Lena Trojer

Abstract:

The choice of data modeling technique for an information system is determined by the objective of the resultant data model. Dimensional modeling is the preferred modeling technique for data destined for data warehouses and data mining, presenting data models that ease analysis and queries, in contrast with entity relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has subsequently led to the development of dimensional modeling. This has been significantly more developed and reported for commercial database management systems than for open source ones, thereby making it less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open source modeling tools. It aims to take advantage of the fact that the regions most affected by the HIV virus are also heavily resource-constrained (sub-Saharan Africa) while having large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open source dimensional modeling tools. The use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible, even for those in resource-constrained settings that nonetheless have data available.

Keywords: Database, Data Mining, Data Warehouse, Dimensional Modeling, Open Source.

2943 Foot Anthropometry of Primary School Children in the South of Thailand

Authors: S. Rawangwong, J. Chatthong, W. Boonchouytan

Abstract:

The objective of this research was to study the foot anthropometry of children aged 7-12 years in the South of Thailand. Thirty-three dimensions were measured on 305 male and 295 female subjects in three age ranges (7-12 years old). The instrumentation consisted of four types of instruments: an anthropometer, a digital vernier caliper, a digital height gauge and a measuring tape. The mean values and standard deviations of age, height, and weight of the male subjects were 9.52 (±1.70) years, 137.80 (±11.55) cm, and 37.57 (±11.65) kg; for the female subjects they were 9.53 (±1.70) years, 137.88 (±11.55) cm, and 34.90 (±11.57) kg, respectively. The comparison of the 33 measured anthropometric dimensions between male and female subjects showed significant sex differences in size in almost all dimensions (p<0.05). The sizes and proportions of 11-12-year-old male elementary school students in the South of Thailand were also compared with those of Thai boys aged 11-12 years given in the industrial standards of the Ministry of Industry, Phase 4, A.D. 2000-2001 (number nine ratio), and it was concluded that male students in the South of Thailand differ significantly in size from the proportions of those industrial standards (p<0.05). All of the feet studied were classified into four categories according to the ratios of diagonal foot breadth to maximum foot length and heel breadth to foot breadth: short but thick, small but long, small, and large. The numbers of male feet classified in these categories were 86, 64, 40, and 115 persons, or 28.20%, 20.98%, 13.11%, and 37.70%, respectively. For the female feet, the corresponding values were 46, 59, 81, and 109 persons, or 15.59%, 20.00%, 27.46%, and 36.95%, respectively.

Keywords: Ergonomics, foot anthropometry, male and female, primary school children

2942 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, which is defined as the quality of variation in rock properties with location in a reservoir or formation, can help to better model the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each separately, with numbers and figures. As expected, these methods show a great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
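
The sketch below shows how the Lorenz coefficient is typically computed from layer data: layers are sorted by permeability/porosity, cumulative flow capacity is plotted against cumulative storage capacity, and L is twice the area between that curve and the diagonal; the permeability and porosity values are hypothetical.

```python
import numpy as np

def lorenz_coefficient(permeability, porosity, thickness=None):
    """Lorenz coefficient L = 2 * (area between the flow-capacity vs.
    storage-capacity curve and the 45-degree line). L = 0 for a homogeneous
    reservoir and approaches 1 for a highly heterogeneous one."""
    k = np.asarray(permeability, float)
    phi = np.asarray(porosity, float)
    h = np.ones_like(k) if thickness is None else np.asarray(thickness, float)

    order = np.argsort(-(k / phi))                 # sort layers by decreasing k/phi
    flow = np.cumsum(k[order] * h[order])          # cumulative flow capacity (k*h)
    storage = np.cumsum(phi[order] * h[order])     # cumulative storage capacity (phi*h)
    flow = np.insert(flow / flow[-1], 0, 0.0)
    storage = np.insert(storage / storage[-1], 0, 0.0)

    area_under_curve = np.sum((flow[1:] + flow[:-1]) / 2.0 * np.diff(storage))
    return 2.0 * (area_under_curve - 0.5)

if __name__ == "__main__":
    k = np.array([500.0, 120.0, 30.0, 5.0, 1.0])   # hypothetical permeabilities, mD
    phi = np.array([0.22, 0.18, 0.15, 0.12, 0.10]) # hypothetical porosities
    print(f"Lorenz coefficient: {lorenz_coefficient(k, phi):.3f}")
```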

Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).

2941 Determination of the Pullout/Holding Strength at the Taper-Trunnion Junction of Hip Implants

Authors: Obinna K. Ihesiulor, Krishna Shankar, Paul Smith, Alan Fien

Abstract:

Excessive fretting wear at the taper-trunnion junction (trunnionosis) apparently contributes to the high failure rates of hip implants. Implant wear and corrosion lead to the release of metal particulate debris and the subsequent release of metal ions at the taper-trunnion surface. This results in a type of metal poisoning referred to as metallosis. The consequences of metal poisoning include osteolysis (bone loss), osteoarthritis (pain), aseptic loosening of the prosthesis and revision surgery. At follow-up after revision surgery, metal debris particles are commonly found in numerous locations. Background: A stable connection between the femoral ball head (taper) and stem (trunnion) is necessary to prevent relative motions and corrosion at the taper junction. Hence, the importance of component assembly cannot be over-emphasized. Therefore, the aim of this study is to determine the influence of head-stem junction assembly by press fitting, and of the subsequent disengagement/disassembly, on the connection strength between the taper ball head and the stem. Methods: CoCr femoral heads were assembled with high stainless hydrogen steel stems (trunnions) by push-in, i.e., press fitting, and disengaged by a pull-out test. The strength and stability of the connections were evaluated by measuring the head pull-out forces according to the ISO 7206-10 standard. Findings: The head-stem junction strength increases linearly with assembly force.

Keywords: Wear, modular hip prosthesis, taper head-stem, force assembly, force disassembly.

2940 Optimized Energy Scheduling Algorithm for Energy Efficient Wireless Sensor Networks

Authors: S. Arun Rajan, S. Bhavani

Abstract:

Wireless sensor networks consist of tiny, low-cost, intelligent sensors connected by advanced communication systems. WSNs have attracted significant attention because industrial as well as medical applications employ them in target monitoring, environmental observation, obstacle detection, movement control, etc. In these applications, sensor nodes are densely deployed in unattended environments with small non-rechargeable batteries. This constraint requires energy-efficient schemes to prolong the network lifetime. There are redundancies in the data sent over the network. To overcome this, multiple virtual backbone scheduling has been presented. Such network problems are called Maximum Lifetime Backbone Scheduling (MLBS) problems. Though this sleep-wake cycle reduces radio usage, improvements can be made in the way the cluster heads are selected. Cluster head selection with emphasis on the geometrical relations of the network will enhance load sharing among the nodes. The data are also analyzed to reduce redundant transmission, and multi-hop communication will facilitate lighter loads on the network.

Keywords: WSN, wireless sensor networks, MLBS, maximum lifetime backbone scheduling.

2939 Ontology-based Domain Modelling for Consistent Content Change Management

Authors: Muhammad Javed, Yalemisew M. Abgaz, Claus Pahl

Abstract:

Ontology-based modelling of multi-formatted software application content is a challenging area in content management. When the number of software content units is huge and they are in a continuous process of change, content change management is important. The management of content in this context requires targeted access and manipulation methods. We present a novel approach to deal with model-driven content-centric information systems and access to their content. At the core of our approach is an ontology-based semantic annotation technique for diversely formatted content that can improve the accuracy of access and systems evolution. Domain ontologies represent domain-specific concepts and conform to metamodels. Different ontologies - from application domain ontologies to software ontologies - capture and model the different properties and perspectives on a software content unit. Interdependencies between the domain ontologies, the artifacts and the content are captured through a trace model. The annotation traces are formalised, and a graph-based system is selected for the representation of the annotation traces.

Keywords: Consistent Content Management, Impact Categorisation, Trace Model, Ontology Evolution

2938 Distributed 2-Vertex Connectivity Test of Graphs Using Local Knowledge

Authors: Brahim Hamid, Bertrand Le Saec, Mohamed Mosbah

Abstract:

The vertex connectivity of a graph is the smallest number of vertices whose deletion separates the graph or makes it trivial. This work is devoted to the problem of testing the vertex connectivity of graphs in a distributed environment based on a general and constructive approach. The contribution of this paper is threefold. First, using a pre-constructed spanning tree of the considered graph, we present a protocol to test whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message passing model. For a given graph G, where M is the number of its edges, N the number of its nodes and Δ is its degree, our algorithms have the following requirements: the first one uses O(Δ×N²) steps and O(Δ×log Δ) bits per node; the second one uses O(Δ×N²) messages, O(N²) time and O(Δ×log Δ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
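
As a point of comparison for the distributed protocol, the sketch below checks 2-vertex connectivity centrally by searching for articulation points with a depth-first search and low-link values; it uses global knowledge and is therefore only a reference check, not the message-passing algorithm of the paper.

```python
def is_biconnected(adj):
    """Return True if the undirected graph is 2-vertex-connected:
    connected, at least 3 vertices, and no articulation point.
    adj maps each vertex to the set of its neighbours."""
    vertices = list(adj)
    if len(vertices) < 3:
        return False

    disc, low, parent = {}, {}, {}
    timer = [0]

    def dfs(u):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                parent[v] = u
                children += 1
                if not dfs(v):
                    return False
                low[u] = min(low[u], low[v])
                # A non-root u is an articulation point if a child subtree
                # cannot reach above u; the root is one if it has >= 2 children.
                if parent.get(u) is not None and low[v] >= disc[u]:
                    return False
            elif v != parent.get(u):
                low[u] = min(low[u], disc[v])
        if parent.get(u) is None and children >= 2:
            return False
        return True

    root = vertices[0]
    parent[root] = None
    if not dfs(root):
        return False
    return len(disc) == len(vertices)      # also require connectivity

if __name__ == "__main__":
    cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}   # 2-connected
    path = {0: {1}, 1: {0, 2}, 2: {1}}                     # vertex 1 is a cut vertex
    print(is_biconnected(cycle), is_biconnected(path))     # True False
```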

Keywords: Distributed computing, fault-tolerance, graph relabeling systems, local computations, local knowledge, message passing system, networks, vertex connectivity.

2937 Modeling and Parametric Study for CO2/CH4 Separation Using Membrane Processes

Authors: Faizan Ahmad, Lau Kok Keong, Azmi Mohd. Shariff

Abstract:

The upgrading of low-quality crude natural gas (NG) is attracting interest due to the high demand for pipeline-grade gas in recent years. Membrane processes are a commercially proven technology for the removal of impurities such as carbon dioxide from NG. In this work, a cross-flow mathematical model is incorporated into ASPEN HYSYS as a user-defined unit operation in order to design membrane systems for CO2/CH4 separation. The effect of operating conditions (such as feed composition and pressure) and membrane selectivity on the design parameters (methane recovery and total membrane area required for the separation) has been studied for different design configurations. These configurations include single-stage (with and without recycle) and double-stage membrane systems (with and without permeate or retentate recycle). It is shown that methane recovery can be improved by recycling the permeate or retentate stream as well as by using double-stage membrane systems. The ASPEN HYSYS user-defined unit operation proposed in this study has the potential to be applied to complex membrane system design and optimization.

Keywords: CO2/CH4 Separation, Membrane Process, Membrane modeling, Natural Gas Processing

2936 Assessing the Effect of the Position of the Cavities on the Inner Plate of the Steel Shear Wall under Time History Dynamic Analysis

Authors: Masoud Mahdavi, Mojtaba Farzaneh Moghadam

Abstract:

The seismic forces caused by waves created in the depths of the earth during an earthquake hit the structure and cause the building to vibrate. Large seismic forces cause low-strength sections of the structure to suffer extensive surface damage. The use of new steel shear walls in steel structures increases the strength of the building and its main members (columns) due to the reduction and dissipation of seismic forces during earthquakes. In the present study, an attempt was made to evaluate a type of steel shear wall that has regular holes in the inner plate by finite element modeling with Abaqus software. The steel plate shear wall, measuring 6000 × 3000 mm (one floor) with a thickness of 3 mm, was modeled with four different hole positions of the same cross-sectional area. The shear wall was subjected dynamically to a 5-second time history using three accelerograms: El Centro, Imperial Valley and Kobe. The results showed that increasing the distance between the geometric center of the hole and the geometric center of the inner plate in the steel shear wall (increasing the RCS index) caused the total maximum acceleration to be transferred from the perimeter of the hole to the horizontal and vertical beams. The results also show that there is no direct relationship between the RCS index and the total acceleration in the steel shear wall, and that the RCS index is independent of the peak ground acceleration value of the earthquake.

Keywords: Hollow steel plate shear wall, time history analysis, finite element method, Abaqus software.

2935 Optimal Channel Equalization for MIMO Time-Varying Channels

Authors: Ehab F. Badran, Guoxiang Gu

Abstract:

We consider optimal channel equalization for MIMO (multi-input/multi-output) time-varying channels in the sense of MMSE (minimum mean-squared-error), where the observation noise can be non-stationary. We show that all ZF (zero-forcing) receivers can be parameterized in an affine form which eliminates completely the ISI (inter-symbol-interference), and optimal channel equalizers can be designed through minimization of the MSE (mean-squared-error) between the detected signals and the transmitted signals, among all ZF receivers. We demonstrate that the optimal channel equalizer is a modified Kalman filter, and show that under the AWGN (additive white Gaussian noise) assumption, the proposed optimal channel equalizer minimizes the BER (bit error rate) among all possible ZF receivers. Our results are applicable to optimal channel equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers, OFDM (orthogonal frequency division multiplexing), and DS (direct sequence) CDMA (code division multiple access) wireless data communication systems. A design algorithm for optimal channel equalization is developed, and several simulation examples are worked out to illustrate the proposed design algorithm.
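
To make the MMSE criterion concrete, the sketch below builds a linear MMSE equalizer W = (H^H H + sigma^2 I)^{-1} H^H for a static MIMO channel with white Gaussian noise and QPSK symbols; the channel matrix, noise variance, and symbol count are assumptions, and the time-varying Kalman-filter structure of the paper is not reproduced.

```python
import numpy as np

def mmse_equalizer(H, noise_var):
    """Linear MMSE equalizer for y = H x + n with unit-power symbols and
    white noise of variance noise_var: W = (H^H H + noise_var * I)^-1 H^H."""
    n_tx = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx), H.conj().T)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n_rx, n_tx, n_symbols = 4, 2, 10000
    H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
    noise_var = 0.1

    # QPSK symbols with unit average power.
    bits = rng.integers(0, 2, (2, n_tx, n_symbols))
    x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
    n = np.sqrt(noise_var / 2) * (rng.normal(size=(n_rx, n_symbols))
                                  + 1j * rng.normal(size=(n_rx, n_symbols)))
    y = H @ x + n

    W = mmse_equalizer(H, noise_var)
    x_hat = W @ y
    ber = (np.mean(np.sign(x_hat.real) != np.sign(x.real)) +
           np.mean(np.sign(x_hat.imag) != np.sign(x.imag))) / 2.0
    print(f"empirical BER: {ber:.4f}")
```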

Keywords: Channel equalization, Kalman filtering, Time-varying systems.

2934 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology

Authors: Roland S. Nattermann, Reiner Anderl

Abstract:

Existing process models for the development of mechatronic systems provide for largely parallel work in the detailed development phase. This parallel work also takes place largely independently across the various disciplines involved. The approach for a new process model presented here is a further development of existing models for use in the development of adaptronic systems. This approach is based on an intermediate integration and an abstract modeling of the adaptronic system. Based on this system model, a simulation of the global system behavior due to external and internal factors or forces is developed. For the intermediate integration, a special data management system is used. According to the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer, a system model is created which divides the adaptronic system based on its components and the various technical disciplines involved. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties result in a network which is analyzed in the second layer. From this analysis, necessary adjustments to individual components for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor for the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and necessary adjustments to other components are calculated. The other layers of the concept treat the automatic calculation of system reliability, the "normal" PDM functionality and the integration of discipline-specific data into the system model. A prototypical implementation of such a data management system, with the addition of automatic system development, is being realized using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.

Keywords: Adaptronic, data management, LOEWE-Centre AdRIA
