Search results for: software cumulative failure prediction

6501 An Automatic Generating Unified Modelling Language Use Case Diagram and Test Cases Based on Classification Tree Method

Authors: Wassana Naiyapo, Atichat Sangtong

Abstract:

Software development with the Object Oriented methodology involves many stages that consume time and cost. An undetected error in the system analysis stage propagates into the design and implementation stages, and unexpected outputs force a revision of earlier work; each rollback adds expense and delay. A good test process from the early phases therefore helps deliver software that is efficient, reliable, and meets the users' requirements. Unified Modelling Language (UML) is the tool that uses symbols to describe work processes in Object Oriented Analysis (OOA). This paper presents an approach for automatically generating a UML use case diagram and test cases. The use case diagram is generated from the event table, while test cases are generated from use case specifications and Graphic User Interfaces (GUI). Test cases are derived with the Classification Tree Method (CTM), which classifies data into nodes arranged in a hierarchical structure. Moreover, this paper describes a program that generates the use case diagram and test cases. As a result, the approach reduces working time and increases work efficiency.
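
As a rough illustration of the Classification Tree Method named above, the following sketch builds test cases as combinations of leaf classes drawn from a small classification tree. It is a minimal sketch, not the authors' tool, and the "login" classifications and classes are hypothetical.

```python
from itertools import product

# Hypothetical classification tree for a "login" use case: each
# classification (top-level key) is refined into leaf classes.
classification_tree = {
    "username": ["registered", "unregistered", "empty"],
    "password": ["correct", "wrong", "empty"],
    "remember_me": ["checked", "unchecked"],
}

def generate_test_cases(tree):
    """Full combinatorial coverage: one test case per combination of
    leaf classes, choosing one leaf from every classification."""
    names = list(tree)
    for combo in product(*(tree[name] for name in names)):
        yield dict(zip(names, combo))

for i, case in enumerate(generate_test_cases(classification_tree), 1):
    print(f"TC{i:02d}: {case}")
```

Practical CTM tools usually prune this full Cartesian product with coverage rules (for example, pairwise coverage) instead of enumerating every combination.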

Keywords: classification tree method, test case, UML use case diagram, use case specification

Procedia PDF Downloads 150
6500 Domain Driven Design vs Soft Domain Driven Design Frameworks

Authors: Mohammed Salahat, Steve Wade

Abstract:

This paper presents and compares the Systematic Soft Domain Driven Design (SSDDD) framework with the Domain Driven Design (DDD) framework as a soft-systems approach to information systems development. The framework uses SSM as a guiding methodology within which a sequence of UML-based design tasks is embedded, leading to the implementation of a software system using the Naked Objects framework. The framework has been used in action research projects involving the investigation and modelling of business processes with object-oriented domain models and the implementation of software systems based on those domain models. Within the framework, Soft Systems Methodology (SSM) is used to explore the problem situation and to develop the domain model in UML for the given business domain. The framework was proposed and evaluated in our previous work; this paper presents a comparison between SSDDD and DDD to show how SSDDD improves on DDD as an approach to modelling and implementing business domain perspectives for information systems development. The comparison process, the results, and the improvements are presented in the following sections of this paper.

Keywords: domain-driven design, soft domain-driven design, naked objects, soft language

Procedia PDF Downloads 280
6499 Barriers of the Development and Implementation of Health Information Systems in Iran

Authors: Abbas Sheikhtaheri, Nasim Hashemi

Abstract:

Health information systems have great benefits for the clinical and managerial processes of health care organizations. However, identifying and removing the constraints and barriers to implementing and using health information systems before any implementation is essential. Physicians are among the main users of health information systems; therefore, identifying the causes of their resistance and their concerns about the barriers to implementation of these systems is very important. The purpose of this study was thus to determine the barriers to the development and implementation of health information systems from the perspective of Iranian physicians. In this study, conducted in 8 selected hospitals affiliated to Tehran and Iran Universities of Medical Sciences, Tehran, Iran in 2014, physicians (GPs, residents, interns, and specialists) in these hospitals were surveyed. In order to collect data, a researcher-made questionnaire was used (Cronbach's α = 0.95). The instrument included 25 items on organizational (9), personal (4), moral and legal (3), and technical (9) barriers. Participants were asked to answer the questions on a 5-point Likert scale (completely disagree = 1 to completely agree = 5). Using a simple random sampling method, 200 physicians (from 600) were invited to the study, and 163 questionnaires were eventually returned. We used mean scores, t-tests, and ANOVA to analyze the data with SPSS software version 17. 52.1% of respondents were female, and the mean age was 30.18 ± 7.29 years. The work experience of most respondents was between 1 and 5 years (80.4 percent). The most important barriers were organizational ones (3.4 ± 0.89), followed by ethical (3.18 ± 0.98), technical (3.06 ± 0.8), and personal (3.04 ± 1.2) barriers. Lack of easy access to fast Internet (3.67 ± 1.91) and lack of information exchange (3.61 ± 1.2) were the most important technical barriers. Among the organizational barriers, the lack of efficient planning for developing and implementing the systems (3.56 ± 1.32) was the most important one. Lack of awareness and knowledge of health care providers about health information system features (3.33 ± 1.28) and lack of physician participation in the planning phase (3.27 ± 1.2), as well as concerns regarding the security and confidentiality of health information (3.15 ± 1.31), were the most important personal and ethical barriers, respectively. Women (P = 0.02) and those with less experience (P = 0.002) were more concerned about personal barriers, and GPs were more concerned about technical barriers (P = 0.02). According to the study, organizational and ethical barriers were considered the most important ones; however, lack of awareness in the target population is also among the main barriers. Ignoring issues such as personal and ethical barriers, even if the necessary infrastructure and technical requirements are provided, may result in failure. Therefore, along with creating infrastructure and resolving organizational barriers, special attention to the education and awareness of physicians, and providing solutions for ethical concerns, is necessary.

Keywords: barriers, development health information systems, implementation, physicians

Procedia PDF Downloads 332
6498 Studying on Pile Seismic Operation with Numerical Method by Using FLAC 3D Software

Authors: Hossein Motaghedi, Kaveh Arkani, Siavash Salamatpoor

Abstract:

Piles are important for the safe and economical design of tall and heavy structures, so the response of a single pile under dynamic load matters in design. The factors that influence single-pile response are the pile geometry, the soil properties, and the applied loads. In this study, the finite difference numerical method, as implemented in the FLAC 3D software, is used to evaluate single-pile behavior under the peak ground acceleration (PGA) of the El Centro, California earthquake record (1940). The results of these models are compared with the experimental results of other researchers and are found to agree approximately with the experimental data; for example, the maximum moment and the displacement at the top of the pile correspond to the experimental results of previous researchers. Furthermore, this paper evaluates the interaction properties between soil and pile. The results show that increasing the pile diameter decreases the pile top displacement, whereas increasing the pile length increases it. Also, increasing the pile-to-soil stiffness ratio increases the moment produced in the pile body, and taller piles interact more with the soil and have higher inertia. These results can directly assist the optimized design of pile dimensions.
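
A minimal static sketch of the reported diameter trend, not the paper's FLAC 3D model: it uses Hetenyi's beam-on-elastic-foundation solution for a free-head flexible pile under a lateral head load, with an assumed concrete modulus and an assumed subgrade reaction modulus.

```python
import numpy as np

def pile_head_deflection(P, D, E=30e9, k_s=2e7):
    """Head deflection of a long free-head pile under lateral load P [N],
    from Hetenyi's semi-infinite beam-on-elastic-foundation solution:
        y0 = 2 * P * beta / k,  with  beta = (k / (4 E I)) ** 0.25
    E   : pile Young's modulus [Pa] (assumed concrete)
    k_s : subgrade reaction modulus [Pa/m] (assumed); k = k_s * D
    """
    I = np.pi * D**4 / 64.0            # second moment of a circular section
    k = k_s * D                        # soil spring per unit pile length
    beta = (k / (4.0 * E * I)) ** 0.25
    return 2.0 * P * beta / k

for D in (0.4, 0.6, 0.8, 1.0):         # pile diameters [m]
    y0 = pile_head_deflection(P=100e3, D=D)
    print(f"D = {D:.1f} m -> head deflection = {y0 * 1000:.2f} mm")
```

Even this crude static model reproduces the trend that larger-diameter (stiffer) piles deflect less at the head; the dynamic FLAC 3D analysis adds inertia and wave-propagation effects on top.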

Keywords: pile seismic response, interaction between soil and pile, numerical analysis, FLAC 3D

Procedia PDF Downloads 373
6497 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors, and the proportion of electric components is rising fast in the attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables, and we have derived a generalisation of the vertex scattering matrix to include non-uniform networks, i.e., networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law, and we investigate the nearest-neighbour level-spacing distribution of the resonances against the predictions of Random Matrix Theory (RMT); to achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software defined radio; we specialise this analysis to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model with both the TL model and laboratory measurements: the growth of the eigenvalues compares well with Weyl's law, the level-spacing distribution agrees well with the RMT predictions, and the results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
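
A small numpy sketch of the RMT comparison mentioned above, independent of the authors' cable data: it draws a random GOE matrix as a stand-in for a measured resonance spectrum and compares its nearest-neighbour spacing histogram with the Wigner surmise P(s) = (pi*s/2) exp(-pi*s^2/4).

```python
import numpy as np

rng = np.random.default_rng(0)

# Random GOE matrix standing in for measured cable-network resonances.
N = 1000
A = rng.normal(size=(N, N))
H = (A + A.T) / 2.0
levels = np.sort(np.linalg.eigvalsh(H))

# Crude unfolding: keep the bulk of the spectrum, normalise mean spacing to 1.
bulk = levels[N // 4 : 3 * N // 4]
s = np.diff(bulk)
s /= s.mean()

# Compare the empirical spacing histogram with the Wigner surmise (GOE).
hist, edges = np.histogram(s, bins=25, range=(0.0, 3.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
wigner = (np.pi * centers / 2) * np.exp(-np.pi * centers**2 / 4)

for c, h, w in zip(centers, hist, wigner):
    print(f"s = {c:4.2f}   empirical = {h:5.3f}   Wigner = {w:5.3f}")
```

For open systems, as noted above, the comparison would instead use a generalisation of the Wigner distribution; this closed-system sketch only shows the mechanics of the test.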

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 157
6496 Precise Identification of Clustered Regularly Interspaced Short Palindromic Repeats-Induced Mutations via Hidden Markov Model-Based Sequence Alignment

Authors: Jingyuan Hu, Zhandong Liu

Abstract:

CRISPR genome editing technology has transformed molecular biology by accurately targeting and altering an organism's DNA. Despite the state-of-the-art precision of CRISPR genome editing, imprecise mutation outcomes and off-target effects present considerable risk, potentially leading to unintended genetic changes. Targeted deep sequencing, combined with bioinformatics sequence alignment, can detect such unwanted mutations. Nevertheless, the classical method, the Needleman-Wunsch (NW) algorithm, may produce false alignment outcomes, resulting in inaccurate mutation identification. The key to precisely identifying CRISPR-induced mutations lies in determining optimal parameters for the sequence alignment algorithm. Hidden Markov models (HMM) are ideally suited for this task, offering flexibility across CRISPR systems by leveraging forward-backward algorithms for parameter estimation. In this study, we introduce CRISPR-HMM, statistical software to precisely call CRISPR-induced mutations. We demonstrate that the software significantly improves precision in identifying CRISPR-induced mutations compared to NW-based alignment, thereby enhancing the overall understanding of the CRISPR gene-editing process.
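
For contrast with the HMM approach, here is the classical NW global alignment the abstract argues against, as a minimal sketch with arbitrary match/mismatch/gap scores; the toy sequences mimic a reference amplicon and an edited read carrying a 2-bp deletion.

```python
import numpy as np

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Classical global alignment with a linear gap penalty.
    Returns (score, aligned_a, aligned_b)."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,    # (mis)match
                          F[i - 1, j] + gap,      # gap in b
                          F[i, j - 1] + gap)      # gap in a
    out_a, out_b = [], []                         # traceback
    i, j = n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and F[i, j] == F[i - 1, j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i, j] == F[i - 1, j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return F[n, m], "".join(reversed(out_a)), "".join(reversed(out_b))

score, ra, rb = needleman_wunsch("ACGTTGCAGGTTAC", "ACGTTGGGTTAC")
print(score); print(ra); print(rb)
```

Because ties in F admit several optimal tracebacks, the placement of an indel near the cut site can be ambiguous under fixed scores; this is exactly the parameter sensitivity a probabilistic alignment such as CRISPR-HMM is designed to handle.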

Keywords: CRISPR, HMM, sequence alignment, gene editing

Procedia PDF Downloads 34
6495 Mechanical Properties of Enset Fibers Obtained from Different Breeds of Enset Plant

Authors: Diriba T. Balcha, Boris Kulig, Oliver Hensel, Eyassu Woldesenbet

Abstract:

Enset fiber is an agricultural waste product available in surplus amounts in Ethiopia. However, the hypothesized variation in the properties of this fiber, arising from the diversity of its source plant breeds, the fiber position within the plant stem, and the chemical treatment duration, has made its application in the development of composite products problematic. Currently, limited data are available on the functional properties of the fiber. Thus, an effort is made in this study to narrow the knowledge gap by characterizing it. The experimental design was produced with the Design-Expert software, and tensile tests were conducted on Enset fiber from 10 breeds: Dego, Dirbo, Gishera, Itine, Siskela, Neciho, Yesherkinke, Tuzuma, Ankogena, and Kucharkia. The effects of 5% NaOH surface treatment duration and of fiber location along and across the plant pseudostem were also investigated. The test results show that rupture stress variation is not significant among the fibers of the 10 Enset breeds. However, strain variation is significant among them, with the Dego breed showing the highest strain before failure. Surface-treated fibers showed improved rupture strength and elastic modulus per 24 hours of treatment duration, although the results also showed that chemical treatment can deteriorate the load-bearing capacity of the fiber: the raw fiber has a higher load-bearing capacity than the treated fiber. It was further noted that both rupture stress and strain increase from the top to the bottom of the stem, whereas there is no significant variation across the stem; elastic modulus variation both along and across the stem was insignificant. The rupture stress, elastic modulus, and strain of Enset fiber are 360.11 ± 181.86 MPa, 12.80 ± 6.85 GPa and 0.04 ± 0.02 mm/mm, respectively. These results show that Enset fiber is comparable to other natural fibers such as abaca, banana, and sisal and can be used as an alternative natural fiber for composite applications. Besides, the insignificant variation of properties among breeds and across the stem means that all breeds and all leaf sheaths of the Enset plant are usable for fiber extraction. The use of short natural fibers rather than long ones is preferable to reduce the significant variation of properties along the stem (fiber direction). In conclusion, the application of Enset fiber in composite product design and development is mechanically feasible.

Keywords: agricultural waste, chemical treatment, fiber characteristics, natural fiber

Procedia PDF Downloads 217
6494 Information Technology Competences for Professional Accountants in Thai Small to Medium Accounting Practice

Authors: Manirath Wongsim, Chatchawarn Srimontree, Pornpichit Phosri

Abstract:

Today, information technology is profoundly influencing business, and the accepted role of the accountant is evolving with it. Information technology elements have proved crucial in triggering changes in accountants' roles. Thus, this study aims to investigate IT competencies among professional accountants to enhance firm performance. The research was conducted with 47 respondents at five organizations in Thailand using a quantitative approach. The results identify 18 factors of IT competencies for professional accountants in Thai small to medium accounting practices within the organizational context. Specifically, the new factors that emerged from the research findings and the literature as unique to IT competencies for professional accountants include ERP software skills and accounting law and legal skills. The evidence in this study suggests that analytical skills, teamwork skills, and accounting software skills were ranked as much-needed skills for accountants to acquire, while communication skills were ranked as the most required and delegation skills as the least required. The empirical evidence further suggests that organizations should understand how to appropriately develop information technology competencies for knowledge employees in general and professional accountants in particular, and should provide assistance in all processes of decision making.

Keywords: IT competencies, IT competences for professional accountants, IT skills for accounting, IT skills in SMEs

Procedia PDF Downloads 217
6493 Efficacy of In-Situ Surgical vs. Needle Revision on Late Failed Trabeculectomy Blebs

Authors: Xie Xiaobin, Zhang Yan, Shi Yipeng, Sun Wenying, Chen Shuang, Cai Zhipeng, Zhang Hong, Zhang Lixia, Xie Like

Abstract:

Objective: To compare the efficacy of late in-situ surgical revision augmented with continuous infusion versus needle revision for failed trabeculectomy blebs. Methods: From December 2018 to December 2021, a prospective randomized controlled trial was performed on 44 glaucoma patients with medically uncontrolled IOP and blebs that had failed ≥ 6 months earlier, at the Eye Hospital, China Academy of Chinese Medical Sciences. They were randomly divided into two groups: 22 eyes of 22 patients underwent late in-situ surgical revision with continuous anterior chamber infusion in the study group, and 22 eyes of 22 patients were treated with needle revision in the control group. Main outcome measures included preoperative and postoperative intraocular pressure (IOP), the number of anti-glaucoma medications, the operation success rate, and postoperative complications. Results: The postoperative IOP values decreased significantly from baseline in both groups (both P<0.05). IOP was significantly lower in the study group than in the control group at one week, 1 month, and 3 months postoperatively (all P<0.05), and IOP reductions in the study group were substantially more prominent than in the control group at all postoperative time points (all P<0.05). The complete success rate in the study group was significantly higher than in the control group (71.4% vs. 33.3%, P<0.05), while the complete failure rate was significantly lower in the study group (0% vs. 28.5%, P<0.05). According to Cox's proportional hazards regression analysis, high IOP at baseline was independently associated with an increased risk of complete failure (adjusted hazard ratio = 1.141, 95% confidence interval = 1.021-1.276, P<0.05). There was no significant difference in the incidence of postoperative complications between the two groups (P>0.05). Conclusion: Both in-situ surgical revision and needle revision have acceptable success rates and safety for late failed trabeculectomy blebs, while the former is likely to be more efficacious. Needle revision may be insufficient for eyes with a low target IOP.
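
A minimal sketch of the Cox proportional-hazards step reported above, using the lifelines package on a hypothetical data frame; the column names and values are invented, not the trial's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: months to complete failure (or censoring),
# a failure indicator, and baseline IOP in mmHg.
df = pd.DataFrame({
    "months":       [3, 12, 24, 6, 36, 18, 9, 30],
    "failed":       [1,  0,  0, 1,  0,  0, 1,  0],
    "baseline_iop": [34, 22, 20, 38, 19, 24, 40, 21],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="failed")
cph.print_summary()   # hazard ratio per mmHg of baseline IOP
```

A hazard ratio above 1 for baseline_iop, as in the reported 1.141 per mmHg, means each additional mmHg of preoperative IOP raises the instantaneous risk of complete failure.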

Keywords: glaucoma, trabeculectomy blebs, in-situ surgical revision, needle revision

Procedia PDF Downloads 75
6492 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process that occurs in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period is beneficial to the cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits in both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle, using a coupled aerodynamic-heat transfer thermal transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFlow software; this was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries, while the detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, is solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded within the holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital to accurately predicting the heat transfer coefficients for the heat retention modelling. The developed method has demonstrated the software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfying accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies.
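
To give a feel for the buoyancy-driven regime involved, the sketch below estimates the Rayleigh number and a natural-convection heat transfer coefficient for a hot vertical surface in still air with the Churchill-Chu correlation; the geometry and temperatures are assumed for illustration, not taken from the paper's vehicle.

```python
import math

# Assumed conditions: a 0.5 m tall engine-bay surface at 90 C in 20 C air.
L, T_s, T_inf = 0.5, 90.0, 20.0
T_film = (T_s + T_inf) / 2 + 273.15            # film temperature [K]

# Approximate air properties near the film temperature.
g, beta = 9.81, 1.0 / T_film                   # ideal-gas expansion coeff.
nu, alpha, k, Pr = 1.9e-5, 2.7e-5, 0.028, 0.71

Ra = g * beta * (T_s - T_inf) * L**3 / (nu * alpha)

# Churchill-Chu correlation for a vertical plate (valid over all Ra).
Nu = (0.825 + 0.387 * Ra**(1 / 6)
      / (1 + (0.492 / Pr)**(9 / 16))**(8 / 27))**2
h = Nu * k / L

print(f"Ra = {Ra:.2e}, Nu = {Nu:.0f}, h = {h:.1f} W/m^2K")
```

A coefficient of only a few W/m²K explains why the soak cool-down stretches over hours and why radiation and conduction must be solved together with the buoyant convection.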

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 136
6491 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO

Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu

Abstract:

Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. However, the ability to generate statistics based on individual data intercepted from large demographic areas leads to reasoning like that issued by a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle creates a source of data whose analysis can build psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driving behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.

Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO

Procedia PDF Downloads 67
6490 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of topmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus their efforts on minimising them. The performance of these CTQs is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability), which help identify the top failure modes and the units needing more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through:
1. Real-time machine operational parameters available remotely, capturing the signature of a malfunction including the related boundary conditions.
2. A real-time, analytics-based alerting system available remotely.
3. Remote access to trip logs and alarms from the control system to identify the cause of events.
4. Continuous support to field engineers through remote connection with subject matter experts.
5. Live tracking of key CTQs.
6. Benchmarking against the fleet.
7. Breakdown of the cause of failure to component level.
8. Investigation of top contributors, identifying design and operational root causes.
9. Implementation of corrective and preventive actions.
10. Assessment of the effectiveness of implemented solutions using reliability growth models.
11. Development of analytics for predictive maintenance.
With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for the monitored units, with huge cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular one in which 12 LNG and pipeline operators with about 140 gas compression line-ups adopted these techniques and significantly reduced the number of trips and improved MTBT.
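
A small sketch of the two metrics and of the reliability-growth flavour of step 10 in the list above: it computes MTBT from invented trip timestamps and fits a Weibull distribution to the between-trip intervals with SciPy.

```python
import numpy as np
from scipy import stats

# Hypothetical trip times for one unit, in cumulative operating hours.
trip_times = np.array([310.0, 910.0, 1720.0, 2950.0, 4600.0, 6800.0])
intervals = np.diff(trip_times)        # hours between consecutive trips

mtbt = intervals.mean()
print(f"MTBT = {mtbt:.0f} h over {len(intervals)} intervals")

# Starting reliability: successful starts / attempted starts.
starts_ok, starts_tried = 46, 50
print(f"SR = {starts_ok / starts_tried:.1%}")

# Weibull fit to the between-trip intervals (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} h")
# Widening intervals, i.e. MTBT trending up after fixes, is the signature
# a reliability growth model (e.g. Crow-AMSAA) would quantify.
```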

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 60
6489 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method

Authors: Defne Uz

Abstract:

The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments that depend on the operating conditions and maneuvers of the helicopter. In order to ensure that maximum stress levels do not exceed the fatigue limit of the material and to prevent damage, a numerical analysis approach based on the Finite Element Method can be utilized. Therefore, in this paper, fatigue analysis of a horizontal tail model is studied numerically to predict the high-cycle and low-cycle fatigue life associated with the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical element identification in the main load-carrying structural components of the model with rivet holes is performed as a post-process, since critical regions with high stress values are used as input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach; the constant-life straight line provides the limit for combinations of mean and alternating stress. A life calculation based on the S-N (stress versus number of cycles) curve is also applied with fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results determine the adequacy of the model's design for fatigue strength and the number of cycles the model can withstand at the calculated stress. The effect of correctly determining the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
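
A numeric sketch of the Soderberg check described above; the stresses and material limits are illustrative values, not outputs of the finite element model.

```python
def soderberg_safety_factor(sigma_a, sigma_m, S_e, S_y):
    """Soderberg criterion: sigma_a/S_e + sigma_m/S_y = 1/n.
    Returns the fatigue safety factor n (n > 1 means safe for infinite life)."""
    return 1.0 / (sigma_a / S_e + sigma_m / S_y)

# Illustrative values [MPa]: alternating and mean stress at a rivet hole,
# endurance limit and yield strength of the material.
sigma_a, sigma_m = 60.0, 45.0
S_e, S_y = 130.0, 350.0
n = soderberg_safety_factor(sigma_a, sigma_m, S_e, S_y)
print(f"Soderberg safety factor n = {n:.2f}")

# Basquin form of the S-N curve for fully reversed loading (zero mean):
# sigma_a = sigma_f * (2N)**b, solved for the cycles to failure N.
sigma_f, b = 600.0, -0.1               # assumed fatigue strength coefficients
N = 0.5 * (sigma_a / sigma_f) ** (1.0 / b)
print(f"Fully reversed life at {sigma_a:.0f} MPa: N = {N:.2e} cycles")
```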

Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration

Procedia PDF Downloads 133
6488 Mathematical Modeling and Simulation of Convective Heat Transfer System in Adjustable Flat Collector Orientation for Commercial Solar Dryers

Authors: Adeaga Ibiyemi Iyabo, Adeaga Oyetunde Adeoye

Abstract:

Mechanical drying methods have played a major role in the commercialization of agriculture and allied sectors. Overall, drying enhances the storability and preservation of agricultural produce, which in turn promotes its producibility, marketability, salability, and profitability. Recent research has shown that solar drying is easier, more affordable, more controllable and, of course, cleaner than other drying methods. It is therefore worthwhile to persistently appraise solar dryers with a view to improving on their existing advantages. In this paper, mathematical equations were formulated for a solar dryer using the mass conservation law, the material balance law, and the least cost savings method, and computer code was written in Visual Basic.NET. The developed software, which considered Ibadan, a strategic south-western geographical location in Nigeria, was used to investigate the effect of the orientation (tilt) angle of a flat-plate collector on the solar energy trapped, the derived monthly heat load, the available energy supplied by solar, and the fraction supplied by solar energy when 50,000 kg/month of produce was dried over a year. At collector tilt angles of 10°, 13°, 15°, 18°, and 20°, the derived monthly heat load, available solar energy, and solar fraction were 1211224.63 MJ, 102121.34 MJ, 0.111; 3299274.63 MJ, 10121.34 MJ, 0.132; 5999364.706 MJ, 171222.859 MJ, 0.286; 4211224.63 MJ, 132121.34 MJ, 0.121; and 2200224.63 MJ, 112121.34 MJ, 0.104, respectively. These results show that if the optimum collector angle is not reached, the factors needed for efficient, cost-reducing drying will be difficult to attain. The software therefore reveals that operating at an off-optimum collector angle in commercial solar drying is not worthwhile, and demonstrates its value in deciding the optimum collector orientation angle.
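
The Visual Basic.NET tool itself is not reproduced here; as a hedged illustration of the geometry behind the tilt-angle study, the sketch below evaluates Cooper's declination formula and the Liu-Jordan beam-radiation tilt factor for a south-facing collector at Ibadan's latitude. The chosen day, hour angle, and the use of these particular formulas are assumptions, not details from the paper.

```python
import math

def declination(n):
    """Cooper's formula: solar declination [deg] for day-of-year n."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def beam_tilt_factor(lat, tilt, n, hour_angle=0.0):
    """Liu-Jordan geometric ratio R_b = cos(theta) / cos(theta_z) for a
    south-facing collector in the northern hemisphere."""
    phi, beta = math.radians(lat), math.radians(tilt)
    d = math.radians(declination(n))
    w = math.radians(hour_angle)
    cos_theta = (math.sin(d) * math.sin(phi - beta)
                 + math.cos(d) * math.cos(phi - beta) * math.cos(w))
    cos_theta_z = (math.sin(d) * math.sin(phi)
                   + math.cos(d) * math.cos(phi) * math.cos(w))
    return cos_theta / cos_theta_z

LAT_IBADAN = 7.38                      # degrees north
for tilt in (10, 13, 15, 18, 20):      # the tilt angles studied above
    rb = beam_tilt_factor(LAT_IBADAN, tilt, n=15)   # mid-January, solar noon
    print(f"tilt {tilt:2d} deg -> R_b = {rb:.3f}")
```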

Keywords: energy, Ibadan, heat load, Visual Basic.NET

Procedia PDF Downloads 399
6487 Adsorption Mechanism of Heavy Metals and Organic Pesticide on Industrial Construction and Demolition Waste and Its Runoff Behaviors

Authors: Sheng Huang, Xin Zhao, Xiaofeng Gao, Tao Zhou, Shijin Dai, Youcai Zhao

Abstract:

Adsorption of heavy metal pollutants (Zn, Cd, Pb, Cr, Cu) and organic pesticides (phorate, dithiophosphate diethyl, triethyl phosphorothioate), along with their multi-contamination, on the surface of industrial construction and demolition waste (ICDW) was investigated. Brick powder was selected as the appropriate waste, and its maximum equilibrium adsorption amounts for the heavy metals under a single controlled contamination matrix reached 5.41, 0.81, 0.45, 1.13 and 0.97 mg/g, respectively. The effects of pH and of the spiking dose of ICDW were also investigated. The equilibrium adsorption amount of the organic pesticides varied from 0.02 to 0.97 mg/g and was negatively correlated with size distribution and hydrophilicity. The presence of organic pesticide on the surface of the ICDW affected heavy metal adsorption in various ways, mainly through the binding of metal ions and floc formation, along with wrapping by the pesticide pollutants. Adsorption of Zn dropped sharply from 7.1 to 0.15 mg/g between clean ICDW and phorate-contaminated ICDW, while that of Pb, Cr and Cd first increased and then decreased. On the other hand, the runoff of pesticide contaminants was investigated under 25 mm/h simulated rainfall. The results showed that the cumulative runoff amount fitted well with a power-function curve, with r² = 0.95 and 0.91 for 1DAA (1 day between contamination and runoff) and 7DAA, respectively. This study helps evaluate the contamination released from industrial construction and demolition waste into aquatic systems.
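
A sketch of the power-function fit named above, using scipy.optimize.curve_fit on invented cumulative-runoff data; only the model form M(t) = a * t**b comes from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b):
    """Cumulative runoff model M(t) = a * t**b."""
    return a * t**b

# Hypothetical cumulative pesticide runoff [ug] against runoff time [min].
t = np.array([5, 10, 15, 20, 30, 40, 50, 60], dtype=float)
m = np.array([12, 21, 28, 33, 43, 51, 58, 64], dtype=float)

(a, b), _ = curve_fit(power_law, t, m, p0=(1.0, 0.5))
pred = power_law(t, a, b)
ss_res = np.sum((m - pred) ** 2)
ss_tot = np.sum((m - m.mean()) ** 2)
print(f"M(t) = {a:.2f} * t^{b:.2f},  r^2 = {1 - ss_res / ss_tot:.3f}")
```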

Keywords: adsorption mechanism, industrial construction waste, metals, pesticide, runoff

Procedia PDF Downloads 449
6486 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and so avoid the expensive computation. The reciprocals of the predictive densities calculated over the posterior draws for each observation are treated as the raw importance weights, which are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by modified truncated weights in calculating TIS-LOO and by smoothed weights in PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest; however, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modelled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations, and even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in their results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model, and parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in the lppds. This study illustrates the limitations of the information criteria in practical model comparison problems, discusses the relationships among the LOO-CV approximation methods and WAIC together with their limitations, and finally provides useful recommendations that may help in practical model comparisons with these methods.
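
A compact numpy sketch of the quantities discussed: the pointwise lppd, WAIC, and a raw importance-sampling LOO estimate computed from a matrix of pointwise log-likelihoods over posterior draws. The matrix is simulated here; it is not the stutter-model output.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# log_lik[s, i] = log p(y_i | theta_s) for S posterior draws, N observations;
# a simulated stand-in for MCMC output from a fitted model.
S, N = 4000, 50
log_lik = rng.normal(loc=-1.2, scale=0.3, size=(S, N))

# lppd: log pointwise predictive density.
lppd_i = logsumexp(log_lik, axis=0) - np.log(S)
lppd = lppd_i.sum()

# WAIC: penalise lppd with the posterior variance of the log-likelihoods.
p_waic = log_lik.var(axis=0, ddof=1).sum()
waic = -2 * (lppd - p_waic)

# IS-LOO: raw importance weights are the reciprocal predictive densities,
# giving a harmonic-mean estimate of each leave-one-out density.
elpd_loo_i = np.log(S) - logsumexp(-log_lik, axis=0)
elpd_loo = elpd_loo_i.sum()

print(f"lppd = {lppd:.1f}, WAIC = {waic:.1f}, elpd_loo = {elpd_loo:.1f}")
```

TIS-LOO and PSIS-LOO differ from this raw IS-LOO only in how the largest weights 1/p(y_i|theta_s) are truncated or smoothed before the average is taken.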

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 378
6485 Association of Non Synonymous SNP in DC-SIGN Receptor Gene with Tuberculosis (Tb)

Authors: Saima Suleman, Kalsoom Sughra, Naeem Mahmood Ashraf

Abstract:

Tuberculosis, caused by Mycobacterium tuberculosis, is a chronic communicable illness. The disease is highly studied, as it is present in approximately one third of the world's population in either active or latent form. The genetic makeup of a person plays an important part in producing immunity against disease, and one important factor is single nucleotide polymorphism in the relevant genes. In this study, we examined the association between single nucleotide polymorphisms of the CD209 gene (which encodes the DC-SIGN receptor) and tuberculosis patients, carrying out both dry lab (in silico) and wet lab (RFLP) analyses. The GWAS catalogue and the GEO database were searched for previous association data: no association study was found for CD209 nsSNPs, but the role of CD209 in pulmonary tuberculosis has been addressed in the GEO database, and CD209 was therefore selected for this study. Databases such as ENSEMBL and the 1000 Genomes Project were used to retrieve SNP data in VCF format, which was then submitted to different software tools to sort the SNPs into benign and deleterious. Selected SNPs were further annotated by 3-D modeling using the I-TASSER online software. The selected nsSNPs were then checked in the Gujrat and Faisalabad populations through RFLP analysis. In this study population, two nsSNPs were found to be associated with tuberculosis, while one nsSNP was not found to be associated with the disease.
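
A sketch of the basic case-control association test behind such RFLP results, using SciPy's chi-square test of independence on an invented genotype contingency table; the counts are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical genotype counts for one nsSNP (rows: TB patients, controls;
# columns: genotypes AA, Aa, aa called from RFLP banding patterns).
table = np.array([[30, 45, 25],    # TB patients
                  [55, 35, 10]])   # healthy controls

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("genotype distribution differs between cases and controls")
```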

Keywords: association, CD209, DC-SIGN, tuberculosis

Procedia PDF Downloads 298
6484 Formulation Development and Evaluation Chlorpheniramine Maleate Containing Nanoparticles Loaded Thermo Sensitive in situ Gel for Treatment of Allergic Rhinitis

Authors: Vipin Saini, Manish Kumar, Shailendra Bhatt, A. Pandurangan

Abstract:

The aim of the present study was to fabricate a thermosensitive gel containing chlorpheniramine maleate (CPM)-loaded nanoparticles for intranasal administration for the effective treatment of allergic rhinitis. Chitosan-based nanoparticles were prepared by the precipitation method, and the developed NPs were then incorporated into a Poloxamer 407 and Carbopol 934P based mucoadhesive thermo-reversible gel. The developed formulations were evaluated for particle size, PDI, % entrapment efficiency and % cumulative drug permeation. The NP3 formulation was found to be optimal on the basis of minimum particle size (143.9 nm), maximum entrapment efficiency (80.10 ± 0.414%) and highest drug permeation (90.92 ± 0.531%). The optimized formulation NP3 was then formulated into a thermo-reversible in situ gel, which intensifies the contact between the nasal mucosa and the drug and facilitates drug absorption, resulting in increased bioavailability. The G4 formulation was selected as the optimum on the basis of gelation ability and mucoadhesive strength. Histology was carried out to examine any damage caused by the optimized G4 formulation; the results revealed no visual signs of tissue damage, indicating safe nasal delivery of the nanoparticulate in situ gel formulation G4. Thus, intranasal CPM NP-loaded in situ gel was found to be a promising formulation for the treatment of allergic rhinitis.

Keywords: chitosan, nanoparticles, in situ gel, chlorpheniramine maleate, poloxamer 407

Procedia PDF Downloads 165
6483 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer tested), with an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources, and the explainability of classification results. Overall, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is especially important in explaining complex biological mechanisms.
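
A small sketch of the k-mer representation step, not the authors' pipeline: it counts k-mers in a sequence with a sliding window and builds the dense feature vector a classifier could consume.

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k):
    """Sliding-window k-mer counts for one DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def feature_vector(seq, k):
    """Dense vector over all 4**k possible k-mers; tractable for small k,
    while genome-scale pipelines use hashing or sparse matrices instead."""
    counts = kmer_counts(seq, k)
    vocab = ("".join(p) for p in product("ACGT", repeat=k))
    return [counts.get(kmer, 0) for kmer in vocab]

seq = "ATGCGTACGTTAGCATGCGT"          # toy sequence standing in for a genome
print(kmer_counts(seq, 3).most_common(3))
print(len(feature_vector(seq, 3)))    # 64 features for k = 3
```

At the study's best k of 10 the space holds 4^10 = 1,048,576 possible k-mers, which is where the trade-off among accuracy, computing resources, and explainability noted above comes from.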

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 151
6481 Impact of the Non-Energy Sectors Diversification on the Energy Dependency Mitigation: Visualization by the “IntelSymb” Software Application

Authors: Ilaha Rzayeva, Emin Alasgarov, Orkhan Karim-Zada

Abstract:

This study attempts to link management and computer science by developing the software "IntelSymb" as a demo application to show, through data analysis, that diversification of non-energy fields positively influences the mitigation of countries' energy dependency. We analyzed 18 years of economic development (5 sectors) in 13 countries, identifying which patterns mostly prevailed and which may become dominant in the near future. To make the analysis solid and extensible, we suggest as future work developing a gateway or interface connected to all available online databases (WB, UN, OECD, U.S. EIA) for the analysis of countries by field. The sample data consist of energy statistics (TPES and energy import indicators) and non-energy industry statistics (main science and technology indicators, Internet user index, and sales and production indicators) from 13 OECD countries over 18 years (1995-2012). Our results show that the diversification of non-energy industries can help decelerate energy sector dependency (energy consumption and import dependence on crude oil). These results can provide empirical and practical support for diversification policies in the energy and non-energy industries, such as promoting the efficiency and management of Information and Communication Technologies (ICTs), services, and innovative technologies, in other OECD and non-OECD member states with similar energy utilization patterns and policies. Industry, including the ICT sector, generates around 4 percent of total GHG emissions, but the figure is much higher, around 14 percent, if indirect energy use is included; the ICT sector itself (excluding broadcasting) contributes approximately 2 percent of global GHG emissions, at just under 1 gigatonne of carbon dioxide equivalent (GtCO₂eq). This can therefore serve as an example and lesson for countries both dependent and independent on energy, mainly emerging oil-based economies, and can motivate non-energy industry diversification so as to be prepared for an energy crisis and able to withstand any economic crisis.

Keywords: energy policy, energy diversification, “IntelSymb” software, renewable energy

Procedia PDF Downloads 212
6480 Software Tool Design for Heavy Oil Upgrading by Hydrogen Donor Addition in a Hydrodynamic Cavitation Process

Authors: Munoz A. Tatiana, Solano R. Brandon, Montes C. Juan, Cierco G. Javier

Abstract:

Hydrodynamic cavitation is a process that exploits the energy released by fluids in phase changes; from this energy, local temperatures greater than 5000 °C are obtained, at which thermal cracking of the fluid molecules takes place. Applied to heavy oil, the process affects variables such as viscosity, density, and composition, which constitutes an important improvement in the quality of the crude. This motivates the design of software that, by integrating mathematical models of mixing, cavitation, kinetics, and the reactor, models the changes in density, viscosity, and composition of a heavy crude oil as it passes through a hydrodynamic cavitation reactor. To evaluate the viability of this technique in industry, a heavy oil of 18° API gravity was simulated using naphtha as a hydrogen donor at concentrations of 1, 2 and 5% vol; the simulation results showed API gravity increases of 0.77, 1.21 and 1.93°, respectively, and viscosity reductions of 9.9, 12.9 and 15.8%. These results give a favorable outlook for this technological development, an appropriate visualization of the innovative knowledge generated by this technique, and a technical-economic opportunity that benefits the heavy-crude segment of the hydrocarbon sector, which accounts for the largest share of world oil production.
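
A small numeric sketch relating the reported API-gravity gains to specific gravity through the standard definition API = 141.5 / SG - 131.5; the baseline crude and the increments are the abstract's figures.

```python
def api_to_sg(api):
    """Specific gravity at 60 F from API gravity (standard definition)."""
    return 141.5 / (api + 131.5)

base_api = 18.0                        # heavy crude from the study
for naphtha_pct, delta_api in [(1, 0.77), (2, 1.21), (5, 1.93)]:
    upgraded = base_api + delta_api
    print(f"{naphtha_pct}% vol naphtha: {base_api:.1f} -> {upgraded:.2f} API"
          f"  (SG {api_to_sg(base_api):.4f} -> {api_to_sg(upgraded):.4f})")
```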

Keywords: hydrodynamic cavitation, thermal cracking, hydrogen donor, heavy oil upgrading, simulator

Procedia PDF Downloads 136
6479 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is a main factor affecting the economics and efficiency of drilling operations; knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature calculate the formation pressure from different parameters: some use only drilling parameters to estimate pore pressure, while others predict it from log data, and all of them require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then with only one method or at most two. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure with five different AI methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machines (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for an assumed trend, in contrast to other models, which require a normal or abnormal pressure trend. Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction through its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that estimates pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
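
A minimal scikit-learn sketch of an SVM-style regression setup like the one described; the seven inputs mirror the abstract, but the data and the target relation are synthetic, and this is not the authors' tuned model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Synthetic stand-ins for: weight on bit, rotary speed, rate of penetration,
# mud weight, bulk density, porosity, delta sonic time (7 features).
X = rng.normal(size=(300, 7))
w = np.array([0.5, 0.2, -0.4, 1.1, 0.8, -0.6, 0.9])
y = 9.0 + X @ w + 0.1 * rng.normal(size=300)   # pore pressure gradient, ppg

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:240], y[:240])

pred = model.predict(X[240:])
aape = np.mean(np.abs((pred - y[240:]) / y[240:])) * 100
r = np.corrcoef(pred, y[240:])[0, 1]
print(f"correlation = {r:.3f}, AAPE = {aape:.2f}%")
```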

Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machines (SVM), functional networks (FN), radial basis function (RBF)

Procedia PDF Downloads 139
6478 Apollo Clinical Excellence Scorecard (ACE@25): An Initiative to Drive Quality Improvement in Hospitals

Authors: Anupam Sibal

Abstract:

Whatever is measured tends to improve. With a view to objectively measuring and improving clinical quality across the Apollo Group hospitals, the ACE@25 (Apollo Clinical Excellence@25) initiative was launched in January 2009. ACE@25 is a clinically balanced scorecard incorporating 25 clinical quality parameters involving complication rates, mortality rates, one-year survival rates and average length of stay after major procedures such as liver and renal transplant, CABG, TKR, THR, TURP, PTCA, endoscopy, large bowel resection and MRM, covering all major specialties. Also included are hospital-acquired infection rates, pain satisfaction and medication errors. Benchmarks have been chosen from the world's best hospitals. There are weighted scores for outcomes, color-coded green, orange and red, with a maximum cumulative score of 100. Data are reported monthly by 43 group hospitals online on the Lighthouse platform. Action-taken reports for parameters falling in the red are submitted quarterly and reviewed by the board, and an audit team audits the data at all locations every six months. Scores are linked to the appraisal of the medical head, and there is an "ACE@25 Champion" award for the highest scorer. Scores for the different parameters ranged from green to red at the start of the initiative; most hospitals have since shown an improvement over the last four years in the parameters that started in red or orange, and the overall score for the group has risen from 72 in 2010 to 81 in 2015.

Keywords: benchmarks, clinical quality, lighthouse, platform, scores

Procedia PDF Downloads 284
6477 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs

Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar

Abstract:

The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty, and one of their common applications is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process; it finds application whenever one needs to make an estimate, forecast or decision where there is significant uncertainty. First, the project performs Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, a simulation algorithm was developed in MATLAB; the program performs the simulation by prompting the user for the input distributions, the parameters associated with each distribution (i.e., mean, standard deviation, minimum, maximum, most likely, etc.), and the desired probability level at which reserves are to be calculated. The algorithm developed and tested in MATLAB was then implemented in Python, where existing libraries for statistics and graph plotting were imported to generate better output, and with Qt Designer (PyQt) code for a simple graphical user interface was also written. The resulting plots are then validated against the results from the U.S. DOE MonteCarlo software.
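
A compact sketch of the volumetric Monte Carlo workflow the abstract describes. The distribution choices and parameters are invented for a hypothetical prospect; the standard volumetric relation N = 7758 * A * h * phi * (1 - Sw) / Bo (stock-tank barrels, with A in acres and h in feet) and the exceedance convention for P90/P50/P10 are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000                              # Monte Carlo trials

# Illustrative input distributions for one prospect:
area = rng.triangular(400, 650, 900, trials)  # acres (min, mode, max)
h    = rng.triangular(20, 35, 60, trials)     # net pay [ft]
phi  = rng.normal(0.22, 0.03, trials).clip(0.05, 0.35)  # porosity
sw   = rng.normal(0.30, 0.05, trials).clip(0.05, 0.80)  # water saturation
bo   = rng.uniform(1.1, 1.3, trials)          # formation volume factor
rf   = rng.triangular(0.15, 0.25, 0.40, trials)  # recovery factor

ooip = 7758.0 * area * h * phi * (1.0 - sw) / bo   # stock-tank barrels
reserves = ooip * rf

# Exceedance convention: P90 is the value exceeded with 90% probability,
# i.e. the 10th percentile of the simulated distribution.
p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90 = {p90/1e6:.1f} MMbbl, P50 = {p50/1e6:.1f}, P10 = {p10/1e6:.1f}")
```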

Keywords: simulation, probability, confidence interval, sensitivity analysis

Procedia PDF Downloads 367
6476 An Interactive Platform Displaying Mixed Reality Media

Authors: Alfred Chen, Cheng Chieh Hsu, Yu-Pin Ma, Meng-Jie Lin, Fu Pai Chiu, Yi-Yan Sie

Abstract:

This study attempts to construct a human-computer interactive platform system consisting mainly of augmented hardware, a software system, a display table, and mixed media. The system provides human-computer interaction services through an interactive platform for the tourism industry. A well-designed interactive platform, integrating augmented reality and mixed media, has the potential to enhance museum display quality and diversity; moreover, it creates a comprehensive and creative display mode for most museums and historical heritage sites. It is therefore essential for the public to understand what the platform is, how it functions, and most importantly how one builds an interactive augmented platform, so the authors elaborate the construction process of the platform in detail. Three issues are considered: 1) the theory and application of augmented reality, 2) the hardware and software applied, and 3) the mixed media presented. To describe how the platform works, the Courtesy Door of Tainan Confucius Temple has been selected as the case study. As a result, the developed interactive platform is presented by showing the physical object along with virtual mixed media such as text, images, animation, and video. The platform provides diversified and effective information to its users.

Keywords: human-computer interaction, mixed reality, mixed media, tourism

Procedia PDF Downloads 472
6475 Hybrid Wind Solar Gas Reliability Optimization Using Harmony Search under Performance and Budget Constraints

Authors: Meziane Rachid, Boufala Seddik, Hamzi Amar, Amara Mohamed

Abstract:

Today’s energy industry seeks maximum benefit with maximum reliability. To achieve this goal, design engineers depend on reliability optimization techniques. This work uses the harmony search (HS) meta-heuristic optimization method to solve the design optimization problem of wind-solar-gas power systems. We consider the case where redundant electrical components are chosen to achieve a desired level of reliability. The electrical power components of the system are characterized by their cost, capacity and reliability. Reliability is defined here as the ability to satisfy the consumer demand, represented as a piecewise cumulative load curve; this definition of the reliability index is widely used for power systems. The proposed meta-heuristic searches for the optimal design of series-parallel power systems in which multiple choices of wind generators, transformers and lines are allowed from a list of products available on the market. Our approach has the advantage of allowing electrical power components with different parameters to be allocated within electrical power systems. To allow fast reliability estimation, a universal moment generating function (UMGF) method is applied. A computer program has been developed to implement the UMGF method and the HS algorithm. An illustrative example is presented.
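
For readers unfamiliar with the meta-heuristic, the minimal continuous harmony search sketch below (in Python) shows the three moves HS applies at each iteration: memory consideration, pitch adjustment and random selection. The function name, parameter defaults and toy objective are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                       bw=0.05, iters=2000, rng=None):
        """Minimal continuous harmony search (a sketch, not the authors' code)."""
        rng = rng or np.random.default_rng()
        lo, hi = np.array(bounds, dtype=float).T
        dim = len(bounds)
        # Harmony memory: hms random solutions and their objective values
        hm = rng.uniform(lo, hi, (hms, dim))
        fit = np.apply_along_axis(objective, 1, hm)
        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                 # memory consideration
                    new[j] = hm[rng.integers(hms), j]
                    if rng.random() < par:              # pitch adjustment
                        new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                else:                                   # random selection
                    new[j] = rng.uniform(lo[j], hi[j])
            new = np.clip(new, lo, hi)
            f = objective(new)
            worst = np.argmax(fit)
            if f < fit[worst]:                          # replace worst harmony
                hm[worst], fit[worst] = new, f
        best = np.argmin(fit)
        return hm[best], fit[best]

    # Toy usage: minimize a sphere function in 3 dimensions
    x, fx = harmony_search(lambda v: np.sum(v**2), [(-5, 5)] * 3)

In the reliability design problem, the objective would instead score a candidate series-parallel configuration by its cost subject to the UMGF-estimated reliability constraint.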

Keywords: reliability optimization, harmony search algorithm (HSA), universal moment generating function (UMGF)

Procedia PDF Downloads 564
6474 Engineering Properties of Different Lithological Varieties of a Singapore Granite

Authors: Louis Ngai Yuen Wong, Varun Maruvanchery

Abstract:

The Bukit Timah Granite, a major rock formation in Singapore, encompasses rock types such as granite, adamellite, and granodiorite together with various hybrid rocks. The present study focuses on the Central Singapore Granite found in the Mandai area. Even within this small areal extent, lithological variations in composition, texture and grain size have been recognized in this igneous body. Over the years, research on the Bukit Timah Granite has focused on achieving a better understanding of its engineering properties in connection with civil engineering projects. To the best of our knowledge, few studies have systematically investigated the influence of grain size, mineral composition, texture, etc. on the strength of Bukit Timah Granite rocks in a comprehensive manner. In typical local industry practice, the different lithological varieties are not differentiated; all are grouped under Bukit Timah Granite during core logging and the subsequent determination of engineering properties. To address this major gap in local engineering geological practice, a preliminary study was conducted on the variation of uniaxial compressive strength (UCS) across seven distinctly different lithological varieties found in the Bukit Timah Granite. Other physical properties determined from laboratory testing, including Young’s modulus, P-wave velocity and dry density, are also discussed. The study is supplemented by petrographic thin-section examination. In addition, the specimen failure mode is classified and correlated with the lithological varieties by observing the details of crack initiation, propagation and coalescence in specimens under load using a high-speed camera. The outcome of this research, the first of its type in Singapore, will have direct implications for sampling and design practices in civil engineering, and particularly for underground space development in Singapore.

Keywords: Bukit Timah Granite, lithological variety, thin section study, high speed video, failure mode

Procedia PDF Downloads 309
6473 Variability in Contraception Choices and Abortion Rates among Female Garment Factory Workers in Urban and Rural Cambodia

Authors: Olalekan Olaluwoye, Joanne Williams, Elizabeth Hoban

Abstract:

Background: Modern contraceptives are effective in preventing unwanted pregnancies and therefore have the potential to reduce abortion rates. Information is needed on how rates of contraceptive use and abortion vary across Cambodia, and on the relationship between the prevalence of modern contraceptive use and abortion rates. This study compares the use of contraception and abortion among female garment factory workers in rural and urban areas of Cambodia. Method: Cross-sectional surveys were conducted with 1701 women working in eleven garment factories in rural and urban areas of Cambodia. Sexual and reproductive health data were collected using Audio-Assisted Survey Interviews and analysed using STATA 14 software. Findings: Over 70% of respondents were under 30 years of age in both rural and urban settings, and over 50% had only primary education; the study population was therefore largely young women with limited education. A significantly higher proportion of rural women had earned over $200 in the previous month compared with their urban counterparts. The majority of urban women (51.5%) were married, while single women (46.9%) made up the largest group working in the rural factories. A significantly larger proportion of women in rural areas (83.9%) were sexually active compared to urban women (50.9%). More rural women (41.4%) had ever been pregnant compared with the urban population (37.7%). The use of any contraceptive method among sexually active women was significantly higher in rural areas (80.1%) than in urban areas (65.7%; p-value = 0.000). However, among women who used contraception, the prevalence of modern contraceptive use was slightly higher in the urban population (68.8% urban, 63.4% rural; p-value = 0.1). Among women with a history of pregnancy, abortion prevalence was higher among rural women (43.8%) than among their urban counterparts (37.7%). Regression analysis showed that after adjustment for the demographic variables (age, relationship status, income, education), only age and relationship status had a significant influence on the use of modern contraception. Single women who were sexually active, and older women who had potentially completed their families, were more likely to choose modern contraception. Conclusion: Although the overall use of contraception was higher among rural women, the use of modern contraception was higher among urban women. This finding may partly explain the higher abortion rates among rural women, as traditional contraceptive methods have higher failure rates and are more likely to result in an unplanned pregnancy. Despite the regional variation, the high abortion rates across the country suggest a need for improved family planning education among female garment factory workers in Cambodia.
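
A hedged sketch of the kind of adjusted regression reported above: the Python code below fits a logistic model of modern contraceptive use on age and relationship status using synthetic data. The variable names, coding and coefficients are assumptions for illustration; the study's actual analysis was performed in STATA 14.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic covariates and outcome (assumed coding, not the survey data)
    age = rng.integers(18, 45, n)
    single = rng.integers(0, 2, n)               # 1 = single, 0 = married/partnered
    logit_p = -3.0 + 0.08 * age + 0.6 * single   # invented coefficients
    p = 1 / (1 + np.exp(-logit_p))
    use_modern = rng.binomial(1, p)
    df = pd.DataFrame({"use_modern": use_modern, "age": age, "single": single})

    # Logistic regression adjusting for age and relationship status
    model = smf.logit("use_modern ~ age + single", data=df).fit(disp=0)
    print(np.exp(model.params))  # odds ratios for the adjusted model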

Keywords: abortion, Cambodia, contraception, garment factory

Procedia PDF Downloads 135
6472 Analysis of a Lignocellulose Degrading Microbial Consortium to Enhance the Anaerobic Digestion of Rice Straws

Authors: Supanun Kangrang, Kraipat Cheenkachorn, Kittiphong Rattanaporn, Malinee Sriariyanun

Abstract:

Rice straw is a lignocellulosic biomass that can be utilized as a substrate for biogas production. However, owing to its properties and composition, rice straw is difficult to degrade with hydrolytic enzymes. One pretreatment method that modifies these properties of lignocellulosic biomass is the application of lignocellulose-degrading microbial consortia. The aim of this study is to investigate the effect of such microbial consortia on biogas production. To select the most efficient consortium, cellulase enzymes were extracted and their activities were analyzed. The results suggest that the microbial consortium cultured from cattle manure is the best candidate compared with those from decomposed wood and horse manure. The microbial consortium isolated from cattle manure was then mixed with anaerobic sludge and used as the inoculum for biogas production. The optimal conditions for biogas production were investigated using response surface methodology (RSM). The tested parameters were the ratio of isolated microbial consortium to anaerobic sludge (MI:AS), the substrate-to-inoculum ratio (S:I), and temperature. The regression coefficient of the fitted model was R² = 0.7661, which is high enough to support the significance of the model. The highest cumulative biogas yield was 104.6 ml/g rice straw at the optimum MI:AS ratio, S:I ratio, and temperature of 2.5:1, 15:1, and 44°C, respectively.
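
To make the RSM step concrete, the Python sketch below fits a full second-order response surface to a hypothetical three-factor Box-Behnken design and computes an R² statistic of the kind the abstract quotes. The design points and yield values are invented for illustration, not the paper's measurements.

    import numpy as np

    # Hypothetical Box-Behnken design: (MI:AS ratio, S:I ratio, temperature in deg C)
    X = np.array([
        [4.0, 20, 45], [4.0, 10, 45], [1.0, 20, 45], [1.0, 10, 45],
        [4.0, 15, 55], [4.0, 15, 35], [1.0, 15, 55], [1.0, 15, 35],
        [2.5, 20, 55], [2.5, 20, 35], [2.5, 10, 55], [2.5, 10, 35],
        [2.5, 15, 45], [2.5, 15, 45], [2.5, 15, 45],
    ])
    # Invented cumulative biogas yields (ml/g rice straw), NOT measured data
    y = np.array([72, 66, 58, 61, 75, 70, 63, 68, 79, 74, 77, 71, 101, 104, 99])

    def quadratic_design(X):
        """Design matrix for a full second-order (RSM) model:
        intercept, linear, two-factor interaction and squared terms."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        cols += [X[:, i] ** 2 for i in range(k)]
        return np.column_stack(cols)

    D = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)

    # Coefficient of determination for the fitted surface
    y_hat = D @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print("R^2 =", round(r2, 4))

Once fitted, the stationary point of the quadratic surface (or a grid search over it) gives the optimum factor settings analogous to the 2.5:1, 15:1 and 44°C reported above.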

Keywords: lignocellulosic biomass, microbial consortium, cellulase, biogas, response surface methodology (RSM)

Procedia PDF Downloads 383