Search results for: mixed integer linear programming
4902 Ni Mixed Oxides Type-Spinel for Energy: Application in Dry Reforming of Methane for Syngas (H2 and CO) Production
Authors: Bedarnia Ishak
Abstract:
In recent years, the dry reforming of methane has received considerable attention from an environmental viewpoint because it consumes and eliminates two gases (CH4 and CO2) responsible for global warming through the greenhouse effect. Many catalysts containing noble metals (Rh, Ru, Pd, Pt and Ir) or transition metals (Ni, Co and Fe) have been reported to be active in this reaction. Compared to noble metals, Ni-based materials are cheap but very easily deactivated by coking. Structurally well-defined Ni-based mixed oxides such as perovskites and spinels are being studied because they can form solid solutions, allowing the composition, and thus the performance properties, to be varied. In this work, nano-sized nickel ferrite oxides were synthesized using three different methods: co-precipitation (CP), hydrothermal (HT) and sol-gel (SG), and characterized by XRD, Raman, XPS, BET, TPR, SEM-EDX and TEM-EDX. XRD patterns of all synthesized oxides showed the presence of the NiFe2O4 spinel, confirmed by Raman spectroscopy. Hematite was present only in the CP sample. Depending on the synthesis method, the surface area, particle size, surface Ni/Fe atomic ratio (XPS) and behavior upon reduction varied. The materials were tested in methane dry reforming with CO2 at 1 atm and 650-800 °C. The catalytic activity of the spinel samples was not very high (XCH4 = 5-20 mol% and XCO2 = 25-40 mol%) when no pre-reduction step was carried out. A significant contribution of RWGS explained the low values of the H2/CO ratio obtained. The reoxidation step of the catalyst carried out after reaction showed only small amounts of coke deposition. The reducing pretreatment was particularly efficient in the case of SG (XCH4 = 80 mol% and XCO2 = 92 mol%, at 800 °C), with H2/CO > 1. In conclusion, the influence of the preparation method was strong for most samples, and the catalytic behavior could be interpreted by considering that the distribution of cations among tetrahedral (Td) and octahedral (Oh) sites, as in (Ni²⁺₁₋ₓFe³⁺ₓ)Td(Ni²⁺ₓFe³⁺₂₋ₓ)OhO₄²⁻, influenced the reducibility of the materials and thus their catalytic performance.
Keywords: NiFe2O4, dry reforming of methane, spinel oxide, zinc oxide
Procedia PDF Downloads 282
4901 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and Inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as the Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a non-linear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset provided goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. Mathematica code uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
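The Fisher scoring iteration mentioned above can be sketched generically. The snippet below is a minimal illustration in Python rather than the authors' Mathematica code, and it fits a Poisson regression as an assumed stand-in model (the paper's composite-distribution likelihood is not reproduced here); each step solves the Fisher information system for the update.

```python
import numpy as np

def fisher_scoring_poisson(X, y, tol=1e-8, max_iter=50):
    """MLE of Poisson regression coefficients via Fisher scoring (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mu = np.exp(X @ beta)            # fitted mean under the log link
        score = X.T @ (y - mu)           # gradient of the log-likelihood
        info = X.T @ (mu[:, None] * X)   # Fisher information matrix
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(np.exp(X @ np.array([0.5, 1.2])))
print(fisher_scoring_poisson(X, y))  # recovers roughly [0.5, 1.2]
```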
Procedia PDF Downloads 36
4900 Adaptability in Older People: A Mixed Methods Approach
Authors: V. Moser-Siegmeth, M. C. Gambal, M. Jelovcak, B. Prytek, I. Swietalsky, D. Würzl, C. Fida, V. Mühlegger
Abstract:
Adaptability is the capacity to adjust without great difficulty to changing circumstances. Within our project, we aimed to detect whether older people living in a long-term care hospital lose the ability to adapt. Theoretical concepts are contradictory in their statements, and there is also a lack of evidence in the literature on how the adaptability of older people changes over time. The following research questions were generated: Are older residents of a long-term care facility able to adapt to changes in their daily routine? How long does it take for older people to adapt? The study was designed as a convergent parallel mixed methods intervention study, carried out over a four-month period across seven wards of a long-term care hospital. As a planned intervention, a change of meal times was established. The residents were surveyed with qualitative interviews, quantitative questionnaires and diaries before, during and after the intervention. In addition, a survey of the nursing staff was carried out in order to detect changes in the people they care for and how long it took them to adapt. Quantitative data were analysed with SPSS, qualitative data with a summarizing content analysis. The average age of the involved residents was 82 years, and the average length of stay was 45 months. The adaptation to new situations did not cause problems for the older residents. 47% of the residents stated that their everyday life had not changed by changing the meal times, 24% indicated ‘neither nor’, and only 18% responded that their daily life had changed considerably due to the changeover. The diaries of the residents, which were kept over the entire period of investigation, showed no changes with regard to increased or reduced activity. With regard to sleep quality, assessed with the Pittsburgh Sleep Quality Index, the cross-tabulation shows little change in sleep behaviour between the two survey periods (pre-phase to follow-up phase); the subjective sleep quality of the residents was not affected. The nursing staff point out that, with good information in advance, changes are not a problem. The ability to adapt to changes does not deteriorate with age or by moving into a long-term care facility; it only takes a few days to get used to new situations. This can be confirmed by the nursing staff, although there are different determinants, such as health status, that might make an adjustment to new situations more difficult. In connection with the limitations, the small sample size of the quantitative data collection must be emphasized, as must the question of the extent to which the quantitative and qualitative samples represent the total population, since only residents of selected units without cognitive impairments participated, while the majority of the residents have cognitive impairments. It is also important to discuss whether and how well the diary method is suitable for older people to examine their daily structure.
Keywords: adaptability, intervention study, mixed methods, nursing home residents
Procedia PDF Downloads 148
4899 Ni Mixed Oxides Type-Spinel for Energy: Application in Dry Reforming of Methane for Syngas (H2 and CO) Production
Authors: Bouhenni Mohamed Saif El Islam
Abstract:
In recent years, the dry reforming of methane has received considerable attention from an environmental viewpoint because it consumes and eliminates two gases (CH4 and CO2) responsible for global warming through the greenhouse effect. Many catalysts containing noble metals (Rh, Ru, Pd, Pt and Ir) or transition metals (Ni, Co and Fe) have been reported to be active in this reaction. Compared to noble metals, Ni-based materials are cheap but very easily deactivated by coking. Structurally well-defined Ni-based mixed oxides such as perovskites and spinels are being studied because they can form solid solutions, allowing the composition, and thus the performance properties, to be varied. In this work, nano-sized nickel ferrite oxides were synthesized using three different methods: co-precipitation (CP), hydrothermal (HT) and sol-gel (SG), and characterized by XRD, Raman, XPS, BET, TPR, SEM-EDX and TEM-EDX. XRD patterns of all synthesized oxides showed the presence of the NiFe2O4 spinel, confirmed by Raman spectroscopy. Hematite was present only in the CP sample. Depending on the synthesis method, the surface area, particle size, surface Ni/Fe atomic ratio (XPS) and behavior upon reduction varied. The materials were tested in methane dry reforming with CO2 at 1 atm and 650-800 °C. The catalytic activity of the spinel samples was not very high (XCH4 = 5-20 mol% and XCO2 = 25-40 mol%) when no pre-reduction step was carried out. A significant contribution of RWGS explained the low values of the H2/CO ratio obtained. The reoxidation step of the catalyst carried out after reaction showed only small amounts of coke deposition. The reducing pretreatment was particularly efficient in the case of SG (XCH4 = 80 mol% and XCO2 = 92 mol%, at 800 °C), with H2/CO > 1. In conclusion, the influence of the preparation method was strong for most samples, and the catalytic behavior could be interpreted by considering that the distribution of cations among tetrahedral (Td) and octahedral (Oh) sites, as in (Ni²⁺₁₋ₓFe³⁺ₓ)Td(Ni²⁺ₓFe³⁺₂₋ₓ)OhO₄²⁻, influenced the reducibility of the materials and thus their catalytic performance.
Keywords: NiFe2O4, dry reforming of methane, spinel oxide, XCO2
Procedia PDF Downloads 383
4898 Numerical and Experimental Analysis of Stiffened Aluminum Panels under Compression
Authors: Ismail Cengiz, Faruk Elaldi
Abstract:
Within the scope of the study presented in this paper, the load-carrying capacity and buckling behavior of a stiffened aluminum panel, designed by adopting both the current ‘buckle-resistant’ design practice and the ‘post-buckling’ design approach, were investigated experimentally and numerically. The test specimen, stabilized by Z-type stiffeners and manufactured from aluminum 2024-T3 clad material, was tested under compression load. Buckling behavior was observed by means of three-dimensional digital image correlation (DIC) and strain gauge pairs. The experimental study was followed by the development of an efficient and reliable finite element model, whose ability to predict the behavior of the stiffened panel used in the compression test was verified by comparing experimental and numerical results in terms of the load-shortening curve, strain-load curves and buckling mode shapes. While the finite element model was being constructed, non-linear behaviors associated with material and geometry were considered. Finally, the applicability of stiffened aluminum panels in airframe design, as compared with composite structures, was evaluated through the concept of ‘structural efficiency’. This study reveals that a considerable amount of weight saving could be gained if the concept of ‘post-buckling design’ is preferred to the conventionally used ‘buckle-resistant design’ concept in the aircraft industry, without sacrificing structural integrity under the load spectrum.
Keywords: post-buckling, stiffened panel, non-linear finite element method, aluminum, structural efficiency
Procedia PDF Downloads 148
4897 Effect of Retention Time on Kitchen Wastewater Treatment Using Mixed Algal-Bacterial Consortia
Authors: Keerthi Katam, Abhinav B. Tirunaghari, Vinod Vadithya, Toshiyuki Shimizu, Satoshi Soda, Debraj Bhattacharyya
Abstract:
Researchers worldwide are increasingly focusing on the removal of carbon and nutrients from wastewater using algal-bacterial hybrid systems. Algae produce oxygen during photosynthesis, which is taken up by heterotrophic bacteria for mineralizing organic carbon to carbon dioxide. This phenomenon reduces the net mechanical aeration requirement of aerobic biological wastewater treatment processes, and consequently the treatment cost. Microalgae also participate in the treatment process by taking up nutrients (N, P) from wastewater, and the algal biomass, if harvested, can generate value-added by-products. The aim of the present study was to compare the performance of two systems - System A (mixed microalgae and bacteria) and System B (diatoms and bacteria) - in treating kitchen wastewater (KWW). The test reactors were operated at five different solid retention times (SRTs) - 2, 4, 6, 8, and 10 days - in draw-and-fill mode. The KWW was collected daily from the dining hall-kitchen area of the Indian Institute of Technology Hyderabad. The influent and effluent samples were analyzed for total organic carbon (TOC) and total nitrogen (TN) using a TOC-L analyzer. A colorimetric method was used to analyze anionic surfactants. Phosphorus (P) and chlorophyll were measured following standard methods. The TOC, TN, and P of the KWW were in the ranges of 113.5 to 740 mg/L, 2 to 22.8 mg/L, and 1 to 4.5 mg/L, respectively. Both systems gave similar results, with 85% TOC removal and 60% TN removal at 10-d SRT. However, anionic surfactant removal was 99% in System A and 60% in System B. The chlorophyll concentration increased with an increase in SRT in both systems. At 2-d SRT, no chlorophyll was observed in System B, whereas 0.5 mg/L was observed in System A. At 10-d SRT, the chlorophyll concentration in System A was 7.5 mg/L, whereas it was 4.5 mg/L in System B. Although both systems showed similar treatment performance, the increase in chlorophyll concentration suggests that System A demonstrated a better algal-bacterial symbiotic relationship in treating KWW than System B.
Keywords: diatoms, microalgae, retention time, wastewater treatment
Procedia PDF Downloads 129
4896 Subjective Well-being, Beliefs, and Lifestyles of First Year University Students in the UK
Authors: Kaili C. Zhang
Abstract:
Mental well-being is an integral part of university students’ overall well-being and has been a matter of increasing concern in the UK. This study addressed the impact of university experience on students by investigating the changes students experience in their beliefs, lifestyles, and well-being during their first year of study, as well as the factors contributing to such changes. Using a longitudinal two-wave mixed methods design, this project identified important factors that contribute to or inhibit these changes. Implications for universities across the UK are discussed.
Keywords: subjective well-being, beliefs, lifestyles, university students
Procedia PDF Downloads 199
4895 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemistry software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments, and machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach that performs input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction
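The two-stage pipeline described here - random-forest feature selection followed by a median-fused ensemble of two-hidden-layer networks - can be sketched as follows. This is a minimal illustration on synthetic data; the feature dimension, network sizes, and ensemble size are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 40))                                   # stand-in moment features
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)   # target coefficients

# Stage 1: recursive feature elimination driven by random-forest importances
rfe = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
          n_features_to_select=10)
X_sel = rfe.fit_transform(X, y)

# Stage 2: ensemble of two-hidden-layer networks, fused with the median rule
nets = [MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                     random_state=s).fit(X_sel, y) for s in range(5)]
y_pred = np.median([net.predict(X_sel) for net in nets], axis=0)
print("ensemble MAE (in-sample, for brevity):", np.mean(np.abs(y_pred - y)))
```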
Procedia PDF Downloads 115
4894 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail
Authors: Dong Jiang
Abstract:
The turnout is an essential piece of railway equipment and one of the most heavily demanded items of railway infrastructure, not least because of the seriousness of frog rail failures. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails in order to predict their lifetimes; on this basis, suggestions for timely and effective maintenance are made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface, up to the point at which breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside it. According to material mechanics, the strength of the frog rail is determined quantitatively in the form of the stress-cycle (S-N) curve. Under the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, i.e., the point at which the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis; the internal damage of a segment grows slowly in the beginning and disproportionately quickly towards the end, until the emergence of the breakage. From the macroscopic perspective, the internal damage of the running surface always develops essentially linearly over the lifetime. Given this linear growth of the internal damage, the lifetime of the frog rail can be predicted simply from the slope of the linear trend. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives. The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface, accelerating rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail, resulting in untimely breakage at the surface. Thus, as the abrasion rate grows continuously, the relationship between the abrasion rate and the lifetime is subdivided into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime
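The damage-accumulation step described above - Miner's rule applied to an S-N curve, with failure at a damage degree of 1.0 - can be illustrated with a minimal sketch. All material constants and the load spectrum below are hypothetical, not values from the study, and a simple Basquin-type S-N curve is assumed.

```python
import numpy as np

# Basquin-type S-N curve: N(S) = (S_f / S)**(1/b) cycles to failure at stress amplitude S.
# S_f and b are hypothetical material constants, not values from the study.
S_f, b = 1200.0, 0.1      # fatigue strength coefficient (MPa) and exponent

def cycles_to_failure(stress):
    return (S_f / stress) ** (1.0 / b)

# Hypothetical load spectrum: (stress amplitude in MPa, cycles per year) per loading class
spectrum = [(300.0, 5.0e4), (380.0, 1.0e4), (450.0, 2.0e3)]

# Miner's rule: damage D = sum over classes of n_i / N_i; failure when D = 1.0
damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"damage per year: {damage_per_year:.3f}")
print(f"predicted lifetime: {1.0 / damage_per_year:.1f} years (failure at D = 1.0)")
```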
Procedia PDF Downloads 226
4893 Learning Model Applied to Cope with Professional Knowledge Gaps in Final Project of Information System Students
Authors: Ilana Lavy, Rami Rashkovits
Abstract:
In this study, we describe a learning model that Information Systems students applied in order to cope with professional knowledge gaps in the context of their final project. The students needed to implement a software system according to specifications and a design they had made beforehand. They had to select certain technologies and use them; most of them decided to use programming environments that they had learned during their academic studies. The students had to cope with various levels of knowledge gaps. To do so, they used learning strategies, which we organized into a learning model comprising two phases, each suitable for different learning tasks. We analyze the learning model, describing its advantages and shortcomings as perceived by the students, and provide excerpts to support our findings.
Keywords: knowledge gaps, independent learner skills, self-regulated learning, final project
Procedia PDF Downloads 479
4892 Perceived Effects of Work-Family Balance on Employee’s Job Satisfaction among Extension Agents in Southwest Nigeria
Authors: B. G. Abiona, A. A. Onaseso, T. D. Odetayo, J. Yila, O. E. Fapojuwo, K. G. Adeosun
Abstract:
This study determines the perceived effects of work-family balance on employees’ job satisfaction among extension agents in the Agricultural Development Programme (ADP) in southwest Nigeria. A multistage sampling technique was used to select 256 respondents for the study. Data on personal characteristics, work-family balance domains, and job satisfaction were collected. The collected data were analysed using descriptive statistics, Chi-square, Pearson Product Moment Correlation (PPMC), multiple linear regression, and Student's t-test. Results revealed that the mean age of the respondents was 40 years; the majority (59.3%) of the respondents were male, and slightly above half (51.6%) had an MSc as their highest academic qualification. Findings revealed that turnover intention (x̄ = 3.20) and work-role conflict (x̄ = 3.06) were the major perceived work-family balance domains in the studied areas. Further, the results showed that the respondents have a high (79%) level of job satisfaction. Multiple linear regression revealed that job involvement (β = 0.167, p < 0.01) and work-role conflict (β = -0.221, p < 0.05) contributed significantly to employees’ level of job satisfaction. The results of the Student's t-test revealed a significant difference in the perceived work-family balance domain (t = 0.43, p < 0.05) between the two studied areas. The study concluded that work-role conflict among employees causes work-family imbalance and, therefore, negatively affects employees’ job satisfaction. Job designs that create a balance between work and family for the respondents are highly recommended.
Keywords: work-life, conflict, job satisfaction, extension agent
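A multiple linear regression of the kind reported above can be reproduced with standard tools. The sketch below uses synthetic data and hypothetical variable names; only the sample size and coefficient magnitudes are borrowed from the abstract so the output is comparable.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 256  # matches the study's sample size; the data themselves are synthetic
df = pd.DataFrame({
    "job_involvement": rng.normal(3.0, 0.5, n),
    "work_role_conflict": rng.normal(3.1, 0.6, n),
})
df["job_satisfaction"] = (3.5 + 0.167 * df["job_involvement"]
                          - 0.221 * df["work_role_conflict"]
                          + rng.normal(0, 0.4, n))

X = sm.add_constant(df[["job_involvement", "work_role_conflict"]])
model = sm.OLS(df["job_satisfaction"], X).fit()
print(model.summary())  # coefficients, p-values, and R-squared in one table
```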
Procedia PDF Downloads 95
4891 Coordinated Voltage Control in a Radial Distribution System
Authors: Shivarudraswamy, Anubhav Shrivastava, Lakshya Bhat
Abstract:
Distributed generation has become a major area of interest in recent years. Distributed generation can address a large number of loads on a power line and hence offers better efficiency than conventional methods. However, there are certain drawbacks associated with it, a rise in voltage being the major one. This paper addresses voltage control at the buses of an IEEE 30-bus system by regulating reactive power. For the analysis, suitable locations for placing distributed generators (DGs) are identified through load flow analysis, by observing where the voltage profile dips. MATLAB programming is used to regulate the voltage at all buses to within +/-5% of the base value even after the introduction of DGs. Three methods for the regulation of voltage are discussed, and a sensitivity-based analysis is then carried out to determine the priority among them.
Keywords: distributed generators, distributed system, reactive power, voltage control
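The sensitivity-based idea - relating bus voltage changes to reactive power injections through a linearized relation ΔV = S·ΔQ - can be sketched as below. The sensitivity matrix here is hypothetical (in practice it would come from the inverse Jacobian of the load flow solution), and the snippet is in Python rather than the MATLAB used in the study.

```python
import numpy as np

# Hypothetical voltage/reactive-power sensitivity matrix S (p.u. volt per p.u. VAr),
# e.g. extracted from the inverse Jacobian of a converged load flow.
S = np.array([[0.08, 0.03],
              [0.03, 0.10]])
v = np.array([1.07, 1.06])        # bus voltages after DG insertion (p.u.)
v_limit = 1.05                    # upper bound of the +/-5% band

dv = np.minimum(v_limit - v, 0.0)   # required voltage change (negative = reduce)
dq = np.linalg.solve(S, dv)         # reactive power adjustment at each DG bus
print("reactive power adjustments (p.u.):", dq)
print("corrected voltages (p.u.):", v + S @ dq)  # back inside the band
```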
Procedia PDF Downloads 500
4890 A Comparative Study on Behavior Among Different Types of Shear Connectors using Finite Element Analysis
Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman
Abstract:
Composite structures have made significant advances in construction applications during the last few decades. Composite structures are composed of structural steel shapes and reinforced concrete combined with shear connectors, benefiting from each material's unique properties. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, the AISC 360-16 “Specification for Structural Steel Buildings” provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type and L-type shear connectors using finite element analysis, with experimental results from the published literature used to validate the finite element models. A 3-D finite element model (FEM) was built using ABAQUS 2017 to capture non-linear behavior and investigate the ultimate load-carrying potential of the connectors in push-out tests. Changes in connector dimensions were then analyzed using this non-linear model in parametric investigations. The parametric study shows that increasing the length of the shear connector by 10 mm increases its shear strength by 21%, and shear capacity increased by 13% as the height was increased by 10 mm. Raising the thickness of the specimen by 1 mm resulted in a 2% increase in shear capacity; however, the shear capacity of channel connectors was reduced by 21% for an increase in thickness of 2 mm.
Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test
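For reference, the channel-anchor capacity formula referred to above is Qn = 0.3(tf + 0.5tw)·la·√(f'c·Ec) (Eq. I8-1 in AISC 360-16; verify against the current specification before any design use). The sketch below evaluates it; the channel dimensions in the example are approximate values assumed for a C4x5.4 section.

```python
import math

def channel_anchor_capacity(tf, tw, la, fc, wc=145.0):
    """Nominal shear capacity Qn (kips) of a channel anchor, AISC 360-16 Eq. I8-1.

    tf, tw, la in inches; fc (concrete strength) in ksi; wc (concrete unit
    weight) in lb/ft^3. Ec = wc**1.5 * sqrt(fc) in ksi, per AISC/ACI.
    """
    Ec = wc ** 1.5 * math.sqrt(fc)        # modulus of elasticity of concrete, ksi
    return 0.3 * (tf + 0.5 * tw) * la * math.sqrt(fc * Ec)

# Example: assumed C4x5.4 channel (tf ~ 0.296 in, tw ~ 0.184 in), 4 in long, 4 ksi concrete
print(f"Qn = {channel_anchor_capacity(0.296, 0.184, 4.0, 4.0):.1f} kips")  # ~55 kips
```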
Procedia PDF Downloads 125
4889 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
The development of unconventional ships and the implementation of lightweight materials have given a strong impetus to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents modeling and analysis techniques for ship structures using the FE method under complex boundary conditions that are difficult to analyze with existing ship classification society rules. During operation, all ships experience complex loading conditions, which fall into the general categories of thermal loads, linear static loads, dynamic loads and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship. The FE method has produced improved techniques for the calculation of the natural frequencies and different mode shapes of the ship structure, so that resonance can be avoided both globally and locally. Over the past few years there has been considerable progress towards ideal design in the ship industry, with complex engineering problems solved by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis
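The natural-frequency calculation mentioned above reduces to the generalized eigenvalue problem K·φ = ω²·M·φ on the assembled stiffness and mass matrices. The sketch below solves it for a toy 3-DOF spring-mass chain standing in for a real FE model; the stiffness and mass values are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 3-DOF fixed-free spring-mass chain standing in for assembled FE matrices.
k, m = 5.0e6, 2.0e3  # hypothetical stiffness (N/m) and lumped mass (kg)
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]])
M = m * np.eye(3)

# Generalized eigenvalue problem: K phi = omega^2 M phi
eigvals, modes = eigh(K, M)
freqs_hz = np.sqrt(eigvals) / (2 * np.pi)
print("natural frequencies (Hz):", np.round(freqs_hz, 2))
print("first mode shape:", np.round(modes[:, 0], 3))
```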
Procedia PDF Downloads 137
4888 Innovations in the Lithium Value Chain
Authors: Fiúza A., Góis J., Leite M., Braga H., Lima A., Jorge P., Moutela P., Martins L., Futuro A.
Abstract:
Lepidolite is an important lithium mineral that, to the authors’ best knowledge, has not been used to produce lithium hydroxide, which is necessary for the energy conversion to electric vehicles. Alkaline leaching of lithium concentrates allows the establishment of a production flowsheet that avoids most of the environmental drawbacks associated with the use of acid reagents. The tested processes involve a pretreatment by digestion at high temperatures with additives, followed by hot leaching at atmospheric pressure. The solutions obtained must be compatible with solutions from the leaching of spodumene concentrates, allowing the development of a common treatment flowsheet - an important accomplishment for the feasible exploitation of Portuguese resources. Statistical programming and interpretation techniques are used to minimize the laboratory effort required by conventional approaches and also to allow phenomenological comprehension.
Keywords: artificial intelligence, tailings free process, ferroelectric electrolyte battery, life cycle assessment
Procedia PDF Downloads 122
4887 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into a uniform superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge-weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the Grover iteration an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
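The core amplitude-amplification loop - an oracle phase flip followed by inversion about the mean, repeated roughly (π/4)√(N/M) times - can be simulated classically on a state vector. The sketch below marks states with a placeholder cost threshold instead of the paper's Hamiltonian-cycle oracle, and uses Python/NumPy rather than Q#.

```python
import numpy as np

# Toy state-vector simulation of Grover search: amplify the amplitudes of
# "marked" basis states (here: hypothetical tours cheaper than a threshold).
n_qubits = 6
N = 2 ** n_qubits
costs = np.random.default_rng(7).integers(10, 100, size=N)  # placeholder tour costs
threshold = 30
marked = costs < threshold                                   # oracle condition
M = marked.sum()

state = np.full(N, 1 / np.sqrt(N))                      # uniform superposition (Hadamards)
iterations = int(np.floor(np.pi / 4 * np.sqrt(N / M)))  # optimal Grover iteration count
for _ in range(iterations):
    state[marked] *= -1                  # oracle: phase flip on marked states
    state = 2 * state.mean() - state     # diffusion: inversion about the mean
print(f"P(marked) after {iterations} iteration(s):", np.sum(state[marked] ** 2))
```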
Procedia PDF Downloads 190
4886 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC
Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija
Abstract:
The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the various stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A(R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic and thermal stress conditions, and the drug was found susceptible to degradation under all of them. Separation was carried out using a High-Performance Thin-Layer Chromatographic (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis data showed a good linear relationship in the concentration range 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity and robustness, and has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.
Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method
Procedia PDF Downloads 299
4885 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components are introduced as the latest HTML5 standard for writing modular web interfaces, ensuring maintainability through the isolated scope of each web component. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package, which must be imported to integrate the web component within an existing web interface. This is followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, one of the popular solutions for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML), which adds type-checking support to HTML for JSON-based REST services. STML can be used to define the expected data types of responses from JSON-based REST services, which will be used for populating the content within the HTML elements of a web component. Although JSON has five data types, viz. string, number, boolean, object and array, STML is made to support only string, number and boolean, because both object and array are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number and boolean, respectively. These STML annotations are written by the developer of the web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers who use STML-based web components. One of them performs automated type checking during the development phase: it uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The other is a Gulp-based command-line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both of these utilities have been tested in performing type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
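The core check - comparing each field of a JSON response against a declared st-* type - is simple enough to sketch outside HTML. The snippet below reimplements the idea in Python for illustration; the attribute names mirror STML, while the field names and sample response are hypothetical.

```python
import json

# Declared STML-style types for the fields a web component binds to;
# the field names and the sample payload below are hypothetical.
declared = {"name": "st-string", "age": "st-number", "active": "st-boolean"}
validators = {
    "st-string": lambda v: isinstance(v, str),
    "st-number": lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "st-boolean": lambda v: isinstance(v, bool),
}

def type_check(declared, response):
    """Return one error string per field whose JSON type mismatches its declaration."""
    errors = []
    for field, st_type in declared.items():
        if field not in response:
            errors.append(f"{field}: missing from response")
        elif not validators[st_type](response[field]):
            errors.append(f"{field}: expected {st_type}, "
                          f"got {type(response[field]).__name__}")
    return errors

sample = json.loads('{"name": "Ada", "age": "36", "active": true}')
print(type_check(declared, sample))  # age arrived as a string -> one error reported
```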
Procedia PDF Downloads 255
4884 Fixed Point Iteration of a Damped and Unforced Duffing's Equation
Authors: Paschal A. Ochang, Emmanuel C. Oji
Abstract:
The Duffing equation is a second-order system that is very important because such systems are fundamental to the behaviour of higher-order systems and have applications in almost all fields of science and engineering. In the biological area, it is useful for plant stem dependence and natural frequency and in models for Brain Crash Analysis (BCA). In engineering, it is useful in the study of damping in indoor construction and traffic lights, and to the meteorologist it is used in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions, only approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. Therefore, we aim to find out whether it is possible to obtain an analytical fixed point of the non-linear ordinary differential equation using a fixed point analytical method. We started by setting out the scope of the Duffing equation and other related work on it. With a major focus on the fixed point and the fixed point iterative scheme, we tried different iterative schemes on the Duffing equation. We were able to identify that one can only see the fixed points of a damped Duffing equation and not of the undamped Duffing equation, because the cubic nonlinearity term is the determining factor of the Duffing equation. We finally arrived at results in which we identified the stability of an equation that is damped, forced and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first- and second-order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different (damped) versions of the Duffing equation we find fixed points, so the order of computation and the running time of applied software in all fields using the Duffing equation will be reduced.
Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis
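To make the role of the cubic term concrete: for the damped, unforced Duffing equation x'' + δx' + αx + βx³ = 0, the fixed points solve αx + βx³ = 0, giving x = 0 and x = ±√(-α/β) when α and β have opposite signs. The sketch below (with illustrative parameter values, not the paper's) rewrites the equation as a first-order system and integrates it to show a trajectory settling onto a fixed point.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped, unforced Duffing: x'' + delta*x' + alpha*x + beta*x**3 = 0,
# rewritten as a first-order system. Parameter values are illustrative only.
delta, alpha, beta = 0.3, -1.0, 1.0

def duffing(t, y):
    x, v = y
    return [v, -delta * v - alpha * x - beta * x ** 3]

# Equilibria solve alpha*x + beta*x**3 = 0: x = 0 and x = +/- sqrt(-alpha/beta)
equilibria = [0.0, np.sqrt(-alpha / beta), -np.sqrt(-alpha / beta)]
print("fixed points:", equilibria)

sol = solve_ivp(duffing, (0, 100), [2.0, 0.0], rtol=1e-9)
print("state at t=100:", sol.y[:, -1])  # the trajectory settles onto x = 1, v = 0
```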
Procedia PDF Downloads 293
4883 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers
Authors: Oluwatosin M. A. Jesuyon
Abstract:
In order to provide a simpler tool that eliminates the age-long problems associated with the traditional index method for the selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. 100 day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in southwest tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g) and abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded from 20 randomly selected birds within each strain, at a fixed time on the first day of each new week, with a 5-kg capacity Camry scale. These records were analyzed and compared using the completely randomized design (CRD) of the SPSS analytical software, with the means procedure and factor scores (FS) in a stepwise multiple linear regression (MLR) procedure for the initial live weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed the Body-length and Thigh-meatiness factors for AA, and the Breast-size and Height factors for AN. These derived orthogonal factors assisted in deducing and comparing the trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformation traits for the two strains, thus indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. The final Bartlett factor regression equations for the prediction of body weight were highly significant, with P < 0.0001, R² of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak, respectively. These regression equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and to evaluate consumer preference traits in broilers.
Keywords: alternative selection tool, Bartlett factor regression model, consumer preference trait, linear and body measurements, live body weight
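Bartlett factor scores are computed from the factor loadings Λ and uniquenesses Ψ as F = (Λ'Ψ⁻¹Λ)⁻¹Λ'Ψ⁻¹(z - μ), and can then feed a regression as above. The sketch below runs that pipeline on synthetic data standing in for the eight zoometric measurements; all numbers are illustrative, not the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
Z = rng.normal(size=(160, 8))                 # stand-in for 8 zoometric measurements
Z[:, :4] += rng.normal(size=(160, 1))         # induce a common "body size" factor
y = 1500 + 200 * Z[:, 0] + rng.normal(0, 50, 160)  # synthetic live weight (g)

fa = FactorAnalysis(n_components=2).fit(Z)
L, psi = fa.components_.T, fa.noise_variance_  # loadings (8x2), uniquenesses (8,)

# Bartlett factor scores: F = (L' Psi^-1 L)^-1 L' Psi^-1 (z - mean)
Pinv = np.diag(1.0 / psi)
W = np.linalg.solve(L.T @ Pinv @ L, L.T @ Pinv)
F = (Z - fa.mean_) @ W.T

reg = LinearRegression().fit(F, y)            # factor-score regression for body weight
print("R^2 on Bartlett factor scores:", reg.score(F, y))
```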
Procedia PDF Downloads 203
4882 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System
Authors: Hao Wang, Shuguo Pan
Abstract:
The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment is frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust cubature Kalman filter (CKF) algorithm for ambiguity resolution in a single-frequency GNSS/INS tightly coupled system. Firstly, the dynamic model and measurement model of the single-frequency GNSS/INS tightly coupled system were established, and the method for INS-aided GNSS integer ambiguity resolution was studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improves the CKF algorithm and realizes an intelligent selection of robust strategies by judging whether the matrix is ill-conditioned. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of separate, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
Keywords: GNSS/INS integrated navigation, ambiguity resolution, cubature Kalman filter, robust algorithm
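The cubature Kalman filter propagates 2n deterministically chosen sigma points (the spherical-radial cubature rule) through the nonlinear models instead of linearizing them. Below is a minimal, generic predict/update cycle; the process and measurement functions are toy placeholders, not the paper's GNSS/INS models, and the robust ill-conditioning test is omitted.

```python
import numpy as np

def cubature_points(m, P):
    """2n cubature points for mean m and covariance P (spherical-radial rule)."""
    n = m.size
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # n x 2n unit directions
    return m[:, None] + S @ xi

def ckf_step(m, P, f, h, Q, R, z):
    """One predict/update cycle of a cubature Kalman filter."""
    n = m.size
    # Predict: push cubature points through the process model f
    Xp = np.apply_along_axis(f, 0, cubature_points(m, P))
    m_pred = Xp.mean(axis=1)
    P_pred = (Xp - m_pred[:, None]) @ (Xp - m_pred[:, None]).T / (2 * n) + Q
    # Update: push fresh points through the measurement model h
    Xu = cubature_points(m_pred, P_pred)
    Zu = np.apply_along_axis(h, 0, Xu)
    z_pred = Zu.mean(axis=1)
    Pzz = (Zu - z_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n) + R
    Pxz = (Xu - m_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n)
    K = Pxz @ np.linalg.inv(Pzz)
    return m_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Toy 2-state demo with a mildly nonlinear range-like measurement
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
h = lambda x: np.array([np.hypot(x[0], 10.0)])
m, P = np.zeros(2), np.eye(2)
m, P = ckf_step(m, P, f, h, 0.01 * np.eye(2), np.array([[0.1]]), np.array([10.2]))
print(m)
```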
Procedia PDF Downloads 100
4881 ‘Nature Will Slow You Down for a Reason’: Virtual Elder-Led Support Services during COVID-19
Authors: Grandmother Roberta Oshkawbewisens, Elder Isabelle Meawasige, Lynne Groulx, Chloë Hamilton, Lee Allison Clark, Dana Hickey, Wansu Qiu, Jared Leedham, Nishanthini Mahendran, Cameron Maclaine
Abstract:
In March of 2020, the world suddenly shifted with the onset of the COVID-19 pandemic; in-person programs and services were unavailable, and a scramble to shift to virtual service delivery began. The Native Women’s Association of Canada (NWAC) established virtual programming through the Resiliency Lodge model and connected with Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people across Turtle Island and Inuit Nunangat through programs that provide a safe space to slow down and reflect on their lives, environment, and well-being. To continue to grow the virtual Resiliency Lodge model, NWAC needed to develop an understanding of three questions: how COVID-19 affects Elder-led support services, how Elder-led support services have adapted during the pandemic, and what Wise Practices need to be implemented to continue to develop, refine, and evaluate virtual Elder-led support services specifically for Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people. Through funding from the Canadian Institutes of Health Research (CIHR), NWAC gained deeper insight into these questions and developed a series of key findings and recommendations that are outlined throughout this report. The goals of this project are to contribute to a more robust participatory analysis that reflects the complexities of Elder-led virtual cultural responses and the impacts of COVID-19 on Elder-led support services; to develop culturally and contextually meaningful virtual protocols and wise practices for virtual Indigenous-led support; and to develop an evaluation strategy to improve the capacity of the Resiliency Lodge model. Significant findings from the project include that Resiliency Lodge programs, especially crafting and business sessions, have provided participants with a sense of community and contributed to healing and wellness; that Elder-led support services need greater and more stable funding to offer more workshops to more Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people; and that Elder- and Indigenous-led programs play a significant role in healing and in building a sense of purpose and belonging among Indigenous people. Ultimately, the findings and recommendations outlined in this research project help to guide future Elder-led virtual support services and emphasize the critical need to increase access to Elder-led programming for Indigenous women, girls, Two-Spirit, transgender, and gender-diverse people.
Keywords: indigenous women, traditional healing, virtual programs, covid-19
Procedia PDF Downloads 139
4880 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors - depending on the minimum distance of the code - have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out using distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many fields of science and engineering, such as machine learning, image and signal processing, wireless communication, and optimization. Non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
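The linear-precoding idea can be made concrete with a (non-private) polynomial-code sketch: each worker multiplies polynomial evaluations of block partitions of X and Y, and any pq finished workers let the master interpolate the product. This illustrates the recovery-threshold notion only; it is not the paper's PSGPD scheme, and the block counts and evaluation points are arbitrary choices.

```python
import numpy as np

# X is split into p row-blocks, Y into q column-blocks; worker i receives the
# encodings X(a_i) = sum_j X_j a_i**j and Y(a_i) = sum_k Y_k a_i**(p*k).
rng = np.random.default_rng(0)
p = q = 2
X = rng.normal(size=(4, 6)); Y = rng.normal(size=(6, 4))
Xb = np.split(X, p, axis=0); Yb = np.split(Y, q, axis=1)

n_workers = 8
a = np.arange(1, n_workers + 1, dtype=float)        # distinct evaluation points
enc_X = [sum(Xb[j] * ai ** j for j in range(p)) for ai in a]
enc_Y = [sum(Yb[k] * ai ** (p * k) for k in range(q)) for ai in a]
results = [ex @ ey for ex, ey in zip(enc_X, enc_Y)]  # each worker's product

# Any p*q = 4 finished workers suffice (the recovery threshold);
# pretend workers 0, 2, 5, 7 finish first and the rest straggle.
done = [0, 2, 5, 7]
V = np.vander(a[done], p * q, increasing=True)       # interpolation system
coeffs = np.linalg.solve(V, np.stack([results[i].ravel() for i in done]))
blocks = coeffs.reshape(p * q, X.shape[0] // p, Y.shape[1] // q)
W = np.block([[blocks[j + p * k] for k in range(q)] for j in range(p)])
print(np.allclose(W, X @ Y))  # True: the product is recovered exactly
```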
Procedia PDF Downloads 123
4879 Synthesis of Liposomal Vesicles by a Novel Supercritical Fluid Process
Authors: Wen-Chyan Tsai, Syed S. H. Rizvi
Abstract:
Organic solvent residues are always associated with liposomes produced by traditional techniques like the thin-film hydration and reverse-phase evaporation methods, which limits the applications of these vesicles in the pharmaceutical, food and cosmetic industries. Our objective was to develop a novel and benign process of liposomal microencapsulation using supercritical carbon dioxide (SC-CO2) as the sole phospholipid-dissolving medium and a green substitute for organic solvents. This process consists of supercritical fluid extraction followed by rapid expansion via a nozzle and automatic cargo suction. Lecithin and cholesterol mixed in a 10:1 mass ratio were dissolved in SC-CO2 at 20 ± 0.5 MPa and 60 °C. After at least two hours of equilibration, the lecithin/cholesterol-laden SC-CO2 was passed through a 1000-micron nozzle and immediately mixed with the cargo solution to form liposomes. Liposomal microencapsulation was conducted at three pressures (8.27, 12.41, 16.55 MPa), three temperatures (75, 83 and 90 °C) and two flow rates (0.25 ml/sec and 0.5 ml/sec). Liposome size, zeta potential and encapsulation efficiency were characterized as functions of the operating parameters. The average liposome size varied from 400-500 nm to 1000-1200 nm when the pressure was increased from 8.27 to 16.55 MPa. At 12.41 MPa, 90 °C and a 0.2 M glucose cargo loading rate of 0.25 ml per second, the highest encapsulation efficiency of 31.65% was achieved. Under a confocal laser scanning microscope, large unilamellar vesicles and multivesicular vesicles were observed to make up a majority of the liposomal emulsion. This new approach is a rapid and continuous process for the bulk production of liposomes using a green solvent. Based on the results to date, it is feasible to apply this technique to encapsulate hydrophilic compounds inside the aqueous core, as well as lipophilic compounds in the phospholipid bilayers of the liposomes, for controlled release, solubility improvement and targeted therapy of bioactive compounds.
Keywords: liposome, micro encapsulation, supercritical carbon dioxide, non-toxic process
Procedia PDF Downloads 431
4878 Study and Solving High Complex Non-Linear Differential Equations Applied in the Engineering Field by Analytical New Approach AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
In this paper, three complicated nonlinear differential equations (PDEs and ODEs) in the fields of engineering and nonlinear vibration have been analyzed and solved completely by a new method that we have named Akbari-Ganji's Method (AGM). As previously published papers show, investigating this kind of equation is a very hard task, and the obtained solutions are not always accurate and reliable; this issue emerges when comparing the achieved solutions with those of a numerical method. Based on the comparisons made between the solutions gained by AGM and the numerical method (fourth-order Runge-Kutta), it is possible to indicate that AGM can be successfully applied to various differential equations, particularly difficult ones. Furthermore, the excellence of this method in comparison with other approaches can be summarized as follows. The results indicate that this approach is very effective and easy and can therefore be applied to other kinds of nonlinear equations; moreover, the method can be selected for solving differential equations in a wide variety of fields, not only in vibrations but also in areas of science such as fluid mechanics, solid mechanics and chemical engineering, and a solution with high precision will be acquired. With regard to the aforementioned explanations, the process of solving nonlinear equations will be very easy and convenient in comparison with other methods. Another important point explored in this paper is that, for trigonometric and exponential terms in the differential equation, the AGM approach has no need of a Taylor series expansion to enhance the precision of the result.
Keywords: new method (AGM), complex non-linear partial differential equations, damping ratio, energy lost per cycle
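The numerical benchmark named above, the classical fourth-order Runge-Kutta scheme, is easy to sketch; AGM itself is not reproduced here, and the test equation below is an arbitrary nonlinear example, not one of the paper's three.

```python
import numpy as np

def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta integrator (the benchmark the
    abstract compares AGM against)."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Example nonlinear test problem: y' = -y + sin(t) * y**2, y(0) = 0.5
f = lambda t, y: -y + np.sin(t) * y ** 2
print(rk4(f, [0.5], 0.0, 5.0, 1000))
```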
Procedia PDF Downloads 469
4877 Portfolio Selection with Constraints on Trading Frequency
Authors: Min Dai, Hong Liu, Shuaijie Qian
Abstract:
We study the portfolio selection problem of an investor who faces constraints on rebalancing frequency, which is common in pension fund investment. We formulate it as a multiple optimal stopping problem and utilize the dynamic programming principle. By numerically solving the corresponding Hamilton-Jacobi-Bellman (HJB) equation, we find a series of free boundaries characterizing the optimal strategy, and we find that the constraints significantly impact this strategy. Even in the absence of transaction costs, there is a no-trading region, depending on the number of remaining trading chances. We also find that the equivalent wealth loss caused by the constraints is large. In conclusion, our model clarifies the impact of constraints on trading frequency on the optimal strategy.
Keywords: portfolio selection, rebalancing frequency, optimal strategy, free boundary, optimal stopping
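The notion of an equivalent wealth loss from limited rebalancing can be illustrated by simulation: compare the certainty-equivalent terminal wealth of a CRRA investor who rebalances to the Merton weight every period with one who may trade at only k dates. This Monte Carlo sketch uses illustrative parameters and a fixed-weight target policy; it does not solve the paper's HJB free-boundary problem.

```python
import numpy as np

mu, sigma, r, gamma = 0.08, 0.2, 0.02, 3.0    # illustrative market/preference values
w_star = (mu - r) / (gamma * sigma ** 2)      # Merton fraction in the risky asset
T, n, paths = 10.0, 120, 20000
dt = T / n
rng = np.random.default_rng(0)
logret = rng.normal((mu - 0.5 * sigma ** 2) * dt, sigma * np.sqrt(dt), (paths, n))
Rb = np.exp(r * dt)                           # per-step bond gross return

def certainty_equivalent(k):
    """Terminal certainty-equivalent wealth when trading at only k dates."""
    wealth = np.ones(paths)
    frac = np.full(paths, w_star)
    dates = set(np.linspace(0, n, k, endpoint=False, dtype=int))
    for t in range(n):
        if t in dates:
            frac[:] = w_star                  # rebalance back to the target weight
        Rs = np.exp(logret[:, t])
        growth = frac * Rs + (1 - frac) * Rb
        frac = frac * Rs / growth             # weights drift between trades
        wealth *= growth
    eu = np.mean(wealth ** (1 - gamma)) / (1 - gamma)
    return ((1 - gamma) * eu) ** (1 / (1 - gamma))

for k in (120, 12, 2):
    print(f"{k:3d} trading dates: CE wealth = {certainty_equivalent(k):.4f}")
```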
Procedia PDF Downloads 88
4876 Hospital Acquired Bloodstream Infections Among Patients With Hematological and Solid Malignancies: Epidemiology, Causative Pathogens and Mortality
Authors: Marah El-Beeli, Abdullah Balkhair, Zakaryia Al Muharmi, Samir Al Adawi, Mansoor Al-Jabri, Abdullah Al Rawahi, Hazaa Al Yahyae, Eman Al Balushi, Yahya M. Al-Farsi
Abstract:
Health care services and anticancer chemotherapeutics have changed the natural history of cancer into a manageable chronic disease, improved cancer patients' lifestyles, and increased survival time. Despite that, infection remains the major dilemma facing cancer patients, whether because of the clinical presentation of the cancer type and an impaired immune system or as a consequence of anticancer therapy. This study was conducted to 1) track changes in the epidemiology of hospital-acquired bloodstream infections (HA-BSIs) among patients with malignancies over the last five years, 2) explore the causative pathogens, and 3) assess the outcomes of HA-BSIs in patients with different types of malignancies. An ambidirectional study (retrospective and prospective follow-up) was conducted of patients with malignancies admitted to Sultan Qaboos University Hospital (a 570-bed tertiary hospital) during the study period (from January 2015 to December 2019). The cumulative frequency and prevalence rates of HA-BSIs by patients and isolates were calculated, along with the cumulative frequency of participants with single versus mixed infections and the types of causative micro-organisms. A total of 1246 HA-BSI events occurred during the study period. Nearly a third (30.25%) of the HA-BSI events were identified among 288 patients with malignancies, and about 20% of cases were mixed infections (more than one isolate). Staphylococcus spp. were the predominant isolated pathogens (24.7%), followed by Klebsiella spp. (15.8%), Escherichia spp. (13%), and Pseudomonas spp. (9.3%). About half (51%) of the cases died in the same year, and 64% of the deaths occurred within two weeks of the infection. According to the observations, there were no changes in the trends of epidemiology, causative pathogens, morbidity, or mortality rates over the last five years.
Keywords: epidemiology, haematological malignancies, hospital acquired bloodstream infections, solid malignancies
Procedia PDF Downloads 150
4875 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
The distribution of solid particles on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined according to the particle bulk density and velocity, taking all external forces into account. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 times over the serial code running on a single CPU.Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
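The velocity-weighted redistribution idea described above can be sketched in two dimensions. The following toy uses a D2Q9 stand-in for the paper's D3Q27 particle lattice; the rest-weight rule, the body force g, and all parameters are illustrative assumptions rather than the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# D2Q9 lattice directions (2-D stand-in for the paper's D3Q27 particle lattice)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def ca_step(n, ux, uy, g=(0.0, -0.05)):
    """One probabilistic redistribution step: the n[i, j] particles at each node move
    to neighbours with probabilities weighted by local fluid velocity plus a body force."""
    out = np.zeros_like(n)
    nx, ny = n.shape
    for i in range(nx):
        for j in range(ny):
            if n[i, j] == 0:
                continue
            drift = np.array([ux[i, j] + g[0], uy[i, j] + g[1]])
            w = np.maximum(E @ drift, 0.0)        # only downstream directions get weight
            w[0] = 0.2 * w.sum() + 1e-9           # rest weight: some particles stay put
            p = w / w.sum()
            moves = rng.multinomial(n[i, j], p)   # split the node's particles stochastically
            for k, m in enumerate(moves):
                if m:
                    out[(i + E[k, 0]) % nx, (j + E[k, 1]) % ny] += m  # periodic wrap
    return out

# Toy field: uniform rightward flow carrying a blob of particles
n = np.zeros((32, 32), dtype=int); n[16, 16] = 1000
ux = np.full((32, 32), 0.1); uy = np.zeros((32, 32))
for _ in range(10):
    n = ca_step(n, ux, uy)
print("particles conserved:", n.sum() == 1000)
```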
Procedia PDF Downloads 2074874 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator
Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi
Abstract:
Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons are incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to examine the effect of different parameters on the production of neutrons around the treatment room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Further measurements were performed at the door and at the treatment console to address the radiation safety concerns of the therapists, who must walk in and out of the room between treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute. The results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings: the jaws are made of high-atomic-number material and so provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter, by contrast, are strongly influenced by the treatment room geometry, and backscattering from the walls yields greater doses than those measured at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv/h to 13.2 mSv/h at the isocenter, 6.21 mSv/h to 29.2 mSv/h at the primary wall, and 8.73 mSv/h to 37.2 mSv/h at the secondary wall for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons is 5 μSv/h to 2 μSv/h at the door and 2 μSv/h to 0 μSv/h at the treatment console for 10 and 18 MV, respectively, which shows that a 2 m thick, 5 m long concrete maze provides sufficient neutron shielding at the door as well as at the treatment console for 10 and 18 MV photons.Keywords: equivalent doses, neutron contamination, neutron detector, photon energy
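The energy dependence of the photon activation dose rates quoted above can be made explicit with a short sketch that recomputes the 18 MV to 10 MV ratios from the reported values (no new measurements are introduced).

```python
# Photon activation dose rates quoted above (mSv/h) at 10 MV and 18 MV
photon_dose = {"isocenter": (4.22, 13.2),
               "primary wall": (6.21, 29.2),
               "secondary wall": (8.73, 37.2)}
for site, (d10, d18) in photon_dose.items():
    print(f"{site:14s}: {d10:5.2f} -> {d18:5.2f} mSv/h (x{d18 / d10:.1f} at 18 MV)")
```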
Procedia PDF Downloads 4494873 Generalized Linear Modeling of HCV Infection Among Medical Waste Handlers in Sidama Region, Ethiopia
Authors: Birhanu Betela Warssamo
Abstract:
Background: There is limited evidence on the prevalence of and risk factors for hepatitis C virus (HCV) infection among waste handlers in the Sidama region, Ethiopia; such knowledge is necessary for the effective prevention of HCV infection in the region. Methods: A cross-sectional study was conducted among randomly selected waste collectors from October 2021 to 30 July 2022 in different public hospitals in the Sidama region of Ethiopia. Serum samples were collected from participants and screened for anti-HCV using a rapid immunochromatography assay. Socio-demographic and risk factor information on the waste handlers was gathered using pretested, well-structured questionnaires. A generalized linear model (GLM) was fitted using R software, and a P-value < 0.05 was considered statistically significant. Results: Of 282 participating waste handlers, 16 (5.7%; 95% CI, 4.2–8.7) were infected with the hepatitis C virus. Educational status was the significant demographic variable associated with HCV infection (AOR = 0.055; 95% CI = 0.012–0.248; P < 0.001). More of the HCV-positive waste handlers were married (12; 75%) than unmarried (4; 25%), and married waste handlers were 2.051 times more prone to HCV infection than unmarried ones (OR = 2.051; 95% CI = 0.644–6.527; P = 0.295), although this association was not statistically significant. The GLM showed that exposure to blood (OR = 8.26; 95% CI = 1.878–10.925; P = 0.037), multiple sexual partners (AOR = 3.63; 95% CI = 2.751–5.808; P = 0.001), sharp injury (AOR = 2.77; 95% CI = 2.327–3.173; P = 0.036), not using PPE (AOR = 0.77; 95% CI = 0.032–0.937; P = 0.001), contact with a jaundiced patient (AOR = 3.65; 95% CI = 1.093–4.368; P = 0.0048), and unprotected sex (AOR = 11.91; 95% CI = 5.847–16.854; P = 0.001) remained statistically significantly associated with HCV positivity. Conclusions: The study revealed a high prevalence of hepatitis C virus infection among waste handlers in the Sidama region, Ethiopia, demonstrating an urgent need for increased preventative efforts and strategic policy orientation to control the spread of the virus.Keywords: Hepatitis C virus, risk factors, waste handlers, prevalence, Sidama Ethiopia
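The study fitted its GLM in R; below is a minimal Python analogue of a binomial GLM with a logit link, run on synthetic stand-in data. The predictor names and effect sizes are assumptions (the study's individual records are not public); exponentiating the fitted coefficients yields odds ratios of the kind reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic stand-in data: variable names and coefficients are illustrative assumptions
n = 282
df = pd.DataFrame({"blood_exposure": rng.integers(0, 2, n),
                   "sharp_injury":   rng.integers(0, 2, n),
                   "uses_ppe":       rng.integers(0, 2, n)})
lin = -3.2 + 1.5 * df.blood_exposure + 1.0 * df.sharp_injury - 0.8 * df.uses_ppe
df["hcv"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# Binomial GLM with logit link; exponentiated coefficients are odds ratios
fit = smf.glm("hcv ~ blood_exposure + sharp_injury + uses_ppe",
              data=df, family=sm.families.Binomial()).fit()
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals for the odds ratios
```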
Procedia PDF Downloads 16