Search results for: immersive technology (IMT)
581 Ozonation as an Effective Method to Remove Pharmaceuticals from Biologically Treated Wastewater of Different Origin
Authors: Agne Jucyte Cicine, Vytautas Abromaitis, Zita Rasuole Gasiunaite, I. Vybernaite-Lubiene, D. Overlinge, K. Vilke
Abstract:
Pharmaceutical pollution in aquatic environments has become a growing concern. Various active pharmaceutical ingredient (API) residues, including hormones, antibiotics, and psychiatric drugs, have already been discovered in different environmental compartments. Because existing wastewater treatment technologies remove APIs ineffectively, an underestimated amount can enter the ecosystem through discharged treated wastewater. In particular, psychiatric compounds, such as carbamazepine (CBZ) and venlafaxine (VNX), persist in effluent even after treatment. These pharmaceuticals therefore usually exceed safe environmental levels and pose risks to the aquatic environment, particularly to sensitive ecosystems such as the Baltic Sea. CBZ, known for its chemical stability and long biodegradation time, accumulates in the environment, threatening aquatic life and human health through the food chain. As medication use rises, there is an urgent need for advanced wastewater treatment to reduce pharmaceutical contamination and meet future regulatory requirements. In this study, we tested advanced oxidation technology using ozone to remove two commonly used psychiatric drugs (carbamazepine and venlafaxine) from biologically treated wastewater effluent. Additionally, general water quality parameters (suspended matter (SPM), dissolved organic carbon (DOC), chemical oxygen demand (COD)) and bacterial presence were analyzed. Three wastewater treatment plants (WWTPs) representing different anthropogenic pressures were selected: 1) resort, 2) resort and residential, and 3) residential, industrial, and resort. Wastewater samples for the experiment were collected during the summer season after mechanical and biological treatment and ozonated for 5, 10, and 15 minutes. The initial dissolved ozone concentration of 7.3±0.7 mg/L was held constant during all the experiments.
Pharmaceutical levels in this study exceeded the predicted no-effect concentration (PNEC) of 500 and 90 ng L⁻¹ for CBZ and VNX, respectively, in all WWTPs, except CBZ in WWTP 1. Initial CBZ contamination was found to be lower in WWTP 1 (427.4 ng L⁻¹) compared with WWTP 2 (1266.5 ng L⁻¹) and WWTP 3 (119.2 ng L⁻¹). VNX followed a similar trend, with concentrations of 341.2 ng L⁻¹, 361.4 ng L⁻¹, and 390.0 ng L⁻¹ for WWTPs 1, 2, and 3, respectively. CBZ was not detected in the effluent after 5 minutes of ozonation in any of the WWTPs. In contrast, VNX was still detected after 5, 10, and 15 minutes of ozone treatment, although below the limit of quantification (<5 ng L⁻¹). Additionally, general pollution by SPM, DOC, COD, and bacteria was notably reduced after 5 minutes of ozone treatment, and no bacterial growth was observed. Although initial pharmaceutical levels exceeded PNECs, indicating ongoing environmental risks, ozonation demonstrated high efficiency in reducing pharmaceutical and general contamination in wastewater with different pollution matrices.
Keywords: Baltic Sea, ozonation, pharmaceuticals, wastewater treatment plants
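As a rough illustration of the PNEC comparison reported in this abstract, the following sketch flags which compounds exceed their PNEC at each plant, using the initial concentrations quoted above. The function and data layout are hypothetical helpers, not part of the study:

```python
# PNECs quoted in the abstract (ng/L).
PNEC_NG_L = {"CBZ": 500.0, "VNX": 90.0}

# Initial concentrations (ng/L) per WWTP, as reported in the abstract.
initial = {
    "WWTP 1": {"CBZ": 427.4, "VNX": 341.2},
    "WWTP 2": {"CBZ": 1266.5, "VNX": 361.4},
    "WWTP 3": {"CBZ": 119.2, "VNX": 390.0},
}

def exceedances(samples, pnec):
    """Return {plant: [compounds whose concentration exceeds its PNEC]}."""
    return {
        plant: [c for c, value in conc.items() if value > pnec[c]]
        for plant, conc in samples.items()
    }

result = exceedances(initial, PNEC_NG_L)
```

On these numbers, VNX exceeds its PNEC at every plant, while CBZ exceeds it only at WWTP 2.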
Procedia PDF Downloads 195
580 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partitioning of the spatial units thus has a dominant influence on the analyzed results, a problem well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that consider homogeneity in population and household counts. Compared to the outcomes of traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units, respectively, on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment).
The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures, and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid misguided decision making.
Keywords: mortality map, spatial patterns, statistical area, variation
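For readers unfamiliar with the ASDR figures quoted in this abstract, a minimal sketch of direct age standardization follows. The age bands, counts, and standard-population weights below are purely illustrative, not the Taitung data:

```python
def asdr_per_100k(deaths, population, std_weights):
    """Direct age-standardized death rate per 100,000.

    deaths[i], population[i]: observed counts in age band i.
    std_weights[i]: standard-population share of age band i (sums to 1).
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights must sum to 1"
    # Weighted sum of the age-specific death rates.
    rate = sum(w * d / p for d, p, w in zip(deaths, population, std_weights))
    return rate * 100_000

# Illustrative three-band example (not real data).
asdr = asdr_per_100k(
    deaths=[2, 10, 60],
    population=[4000, 5000, 3000],
    std_weights=[0.4, 0.4, 0.2],
)
```

The same computation, applied per spatial unit, is what makes the township-level and TGSC-level rates directly comparable despite different age structures.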
Procedia PDF Downloads 258
579 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India
Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar
Abstract:
The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to know the patient's radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a large number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5000 were head, 5000 were chest, and 5000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively.
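The abstract does not state the units of the quoted divergences; one plausible reading is a percent divergence of the console-displayed value from the phantom-measured one, which can be sketched as follows (the function name and sign convention are assumptions, not defined by the study):

```python
def percent_divergence(displayed_ctdivol, measured_ctdivol):
    """Percent divergence of the console-displayed CTDIvol from the
    phantom-measured CTDIvol (positive means the console over-reads)."""
    return 100.0 * (displayed_ctdivol - measured_ctdivol) / measured_ctdivol

# Illustrative check: a displayed value of 10.52 mGy against a measured
# value of 10.0 mGy gives a +5.2 divergence on this convention.
example = percent_divergence(10.52, 10.0)
```

Averaging this quantity over all examinations of a protocol would yield mean divergences of the kind reported above.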
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Some of the CT scanners also used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest, and abdomen procedures are optimized, then the dose indices would be optimal, lowering CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose
Procedia PDF Downloads 257
578 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser
Abstract:
Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most applications, such as communication systems and laser cooling, require single longitudinal optical mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as such gratings are known to yield narrow spectra as well as high power and efficiency. Given the wavelength range, the period of the first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow line width required and the challenge of high-quality nitride overgrowth on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than that of a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios are designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process step by step to study the growth mode of nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface.
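The claim that the first-order grating period falls under 100 nm follows from the Bragg condition Λ = m·λ/(2·n_eff). A quick sketch, assuming an effective index around 2.45 for a GaN waveguide near 400 nm (an illustrative value, not taken from the study):

```python
def bragg_period_nm(wavelength_nm, n_eff, order=1):
    """Grating period from the Bragg condition: Lambda = m * lambda / (2 * n_eff)."""
    return order * wavelength_nm / (2.0 * n_eff)

# For a 397 nm emission wavelength and an assumed n_eff of 2.45,
# the first-order period is roughly 81 nm, well under 100 nm.
period = bragg_period_nm(397.0, 2.45)
```

The same relation shows why high-order surface gratings are easier to fabricate: at order m = 3 the period triples, at the cost of weaker coupling.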
The AFM images demonstrate that a smooth AlGaN film surface is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with the growth time changing from 200 s to 400 s and 600 s, the growth model is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to the realization of GaN DFB lasers by wafer-scale grating fabrication and overgrowth on nano-grating patterned substrates; moreover, the growth dynamics have been analyzed.
Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride
Procedia PDF Downloads 189
577 Agricultural Education and Research in India: Challenges and Way Forward
Authors: Kiran Kumar Gellaboina, Padmaja Kaja
Abstract:
Agricultural education and research in India need a transformation to serve the needs of the farmers and of the nation. The fact that agriculture and allied activities act as the main source of livelihood for more than 70% of rural India's population reinforces their importance in the administrative and policy arena. As per India's Census 2011, agriculture employs approximately 56.6% of the labour force. India has achieved significant growth in agriculture, milk, fish, oilseeds, and fruits and vegetables owing to the green, white, blue, and yellow revolutions, which have brought prosperity to farmers. Many factors are responsible for these achievements, viz., conducive government policies, the receptivity of farmers, and the establishment of higher agricultural education institutions. The new breed of skilled human resources was instrumental in generating new technologies and in their assessment, refinement, and, finally, dissemination to the farming community through extension methods. In order to sustain, diversify, and realize the potential of the agriculture sectors, it is necessary to develop skilled human resources. Agricultural human resource development is a continuous process undertaken by agricultural universities. The Department of Agricultural Research and Education (DARE) coordinates and promotes agricultural research and education in India. Indian agricultural universities were established on the 'land grant' pattern of the USA, which helped incorporate a number of diverse subjects in the courses as well as provide hands-on practical exposure to students. The State Agricultural Universities (SAUs) were established through legislative acts of the respective states and with major financial support from them, leading to administrative and policy controls. It has been observed that the pace and quality of technology generation and human resource development in many of the SAUs have gone down.
The reasons for this slackening are inadequate state funding, reduced faculty strength, inadequate faculty development programmes, lack of modern infrastructure for education and research, etc. The establishment of new state agricultural universities and new faculties/colleges without providing the necessary financial and faculty support has aggravated the problem. The present work highlights some of the key issues affecting agricultural education and research in India and the impact they would have on farm productivity and sustainability. Secondary data pertaining to budgetary spending on agricultural education and research will be analyzed. This paper will study the trends in public spending on agricultural education and research and the per capita income of farmers in India. It suggests that agricultural education and research have a key role in equipping human resources for enhanced agricultural productivity and sustainable use of natural resources. Further, a total re-orientation of agricultural education, with emphasis on other agriculture-related social sciences, is needed for effective agricultural policy research.
Keywords: agriculture, challenges, education, research
Procedia PDF Downloads 232
576 Solutions to Reduce CO2 Emissions in Autonomous Robotics
Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu
Abstract:
Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does it cause harm through toxic emissions (for instance, CO2 emissions) from the combustion process necessary to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy, such as solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of CO2 emission production (from the use of different energy sources: natural gas, gas oil, fuel, and solar panels) in the charging process of Segway PT batteries is carried out. To carry out the study with solar energy, a photovoltaic panel and a Buck-Boost DC/DC block have been used. Specifically, the STP005S-12/Db solar panel has been used in our experiments. This module is a 5 Wp photovoltaic (PV) module, configured with 36 monocrystalline cells connected in series. With those elements, a battery recharge station is built to recharge the robot batteries. For the energy storage DC/DC block, a series of ultracapacitors has been used.
Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors have used a fractional control method so that the solar panels supply the maximum allowed power, recharging the robots in the shortest time. Greenhouse gas emissions from electricity production vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy
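Percentage comparisons of the kind quoted in this abstract can be reproduced for any set of emission factors. The sketch below uses illustrative IPCC-style default combustion factors in kg CO2 per GJ of fuel energy; the exact values, and therefore the resulting percentages, vary with the factors and system boundaries chosen, so the numbers here are not the study's:

```python
# Illustrative default combustion emission factors (kg CO2 per GJ of fuel
# energy); these are assumptions for the sketch, not the paper's data.
COMBUSTION_KG_CO2_PER_GJ = {
    "coal": 94.6,
    "fuel_oil": 73.3,
    "natural_gas": 56.1,
    "solar_pv_operation": 0.0,  # no combustion emissions during operation
}

def percent_excess(intensity_a, intensity_b):
    """How much more CO2 per GJ source A emits than source B, in percent."""
    return 100.0 * (intensity_a - intensity_b) / intensity_b

coal_vs_oil = percent_excess(
    COMBUSTION_KG_CO2_PER_GJ["coal"], COMBUSTION_KG_CO2_PER_GJ["fuel_oil"]
)
coal_vs_gas = percent_excess(
    COMBUSTION_KG_CO2_PER_GJ["coal"], COMBUSTION_KG_CO2_PER_GJ["natural_gas"]
)
```

With these particular factors, coal emits roughly 29% more than fuel oil and roughly 69% more than natural gas per GJ of fuel energy; lifecycle accounting would shift both figures.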
Procedia PDF Downloads 422
575 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level
Authors: Matthias Stange
Abstract:
Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size across different application regions and project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined, covering the application regions of North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive data analysis of 6 independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on 6 dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
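The grouped descriptive analysis described above can be sketched in a few lines: projects are binned by BIM maturity level and the mean of a dependent variable is compared per group. The records below are synthetic placeholders, not the study's data, and the variable names are assumptions:

```python
from statistics import mean

# Synthetic project records: BIM maturity level and one dependent
# variable (schedule deviation in percent). Illustrative only.
projects = [
    {"bim_maturity": 1, "schedule_deviation_pct": 12.0},
    {"bim_maturity": 1, "schedule_deviation_pct": 9.0},
    {"bim_maturity": 2, "schedule_deviation_pct": 10.0},
    {"bim_maturity": 2, "schedule_deviation_pct": 8.0},
    {"bim_maturity": 3, "schedule_deviation_pct": 7.0},
]

def mean_by_group(records, group_key, value_key):
    """Group records by group_key and return the per-group mean of value_key."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[value_key])
    return {g: mean(vals) for g, vals in groups.items()}

deviation_by_maturity = mean_by_group(
    projects, "bim_maturity", "schedule_deviation_pct"
)
```

In the actual study this comparison would be repeated for each of the six dependent variables and backed by inferential tests rather than group means alone.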
The study revealed that most of the benefits of using BIM perceived through numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests that the BIM planning method in its current application has not (yet) produced a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
Keywords: AEC process, building information modeling, BIM maturity level, project results, productivity of the construction industry
Procedia PDF Downloads 73
574 Information Seeking and Evaluation Tasks to Enhance Multiliteracies in Health Education
Authors: Tuula Nygard
Abstract:
This study contributes to the pedagogical discussion on how to promote adolescents' multiliteracies, with an emphasis on information seeking and evaluation skills in contemporary media environments. The study is conducted in the school environment, applying perspectives from educational sciences and information studies to health communication and teaching. The research focuses on the teacher's role as a trusted person who guides students to choose and use credible information sources. Evaluating the credibility of information can often be challenging. Specifically, children and adolescents may find it difficult to know what to believe and whom to trust, for instance, in health and well-being communication. Thus, advanced multiliteracy skills are needed. In the school environment, trust is based on the teacher's subject content knowledge, but also on the teacher's character and caring. A teacher's benevolence and approachability generate trustworthiness, which lays the foundation for good interaction with students and, further, for the teacher's pedagogical authority. The study explores teachers' perceptions of their pedagogical authority and the role of a trustee. In addition, it examines what kinds of multiliteracy practices teachers utilize in their teaching. The data will be collected by interviewing secondary school health education teachers during Spring 2019. The analysis method is nexus analysis, an ethnographic research orientation. Classroom interaction, as the interviewed teachers see it, is scrutinized through a nexus analysis lens in order to examine social action in which people, places, discourses, and objects are intertwined. The crucial social actions in this study are information seeking and evaluation situations, where the teacher and the students together assess the credibility of information sources.
The study is based on the hypothesis that a trustee's opinions of credible sources and guidance in information seeking and evaluation affect the choices of students, that is, of trustors. In the school context, the teacher's own experiences and perceptions of health-related issues cannot be brushed aside. Furthermore, adolescents are used to utilizing digital technology for day-to-day information seeking, but the chosen information sources are often not of very high quality. In school, teachers are inclined to recommend familiar sources, such as the health education textbook and the web pages of well-known health authorities. Students, in turn, rely on the teacher's guidance toward credible information sources without using their own judgment. In terms of students' multiliteracy competences, information seeking and evaluation tasks in health education are excellent opportunities to practice and enhance these skills. Distinguishing right information from wrong is particularly important in health communication because experts by experience are easy to find and their opinions are convincing. This can be addressed by employing the ideas of multiliteracy in the school subject of health education and in teacher education and training.
Keywords: multiliteracies, nexus analysis, pedagogical authority, trust
Procedia PDF Downloads 107
573 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger
Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez
Abstract:
One of the most common heat exchanger technologies in engineering processes, mainly in the food industry, is the double-pipe heat exchanger (DPHx). To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area, consequently increasing the convective surface area. This contributes to enhanced heat transfer in forced convection by promoting secondary recirculating flows. One of the most widespread tools to analyse heat exchanger efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double pipe heat exchanger with two different inner tubes, a smooth tube and a spirally corrugated tube, has been analysed. Experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 were carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes at turbulent flow. To validate the numerical results, an experimental setup was used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests were carried out for the smooth tube and for the corrugated one. In all the tests, the hot fluid had a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rates were 25 l/min (Test 1) and 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes have an external diameter of 24 mm and a 1 mm stainless steel wall thickness, with a length of 2.8 m.
The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of the corrugation height to the diameter (H/D), the dimensionless helical pitch (P/D), and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and experimental results, with the smallest differences found for the fluid temperatures. In all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental result, with deviations ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation
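The three non-dimensional parameters defined in this abstract can be evaluated directly from the stated geometry (H = 1.1 mm, P = 25 mm, and the inner tube's external diameter D = 24 mm). The helper below is a sketch, not code from the study:

```python
def corrugation_parameters(h_mm, p_mm, d_mm):
    """Return (H/D, P/D, SI), where SI = H^2 / (P * D) is the severity index."""
    return h_mm / d_mm, p_mm / d_mm, h_mm ** 2 / (p_mm * d_mm)

# Tube tested in the study: H = 1.1 mm, P = 25 mm, D = 24 mm.
h_over_d, p_over_d, si = corrugation_parameters(1.1, 25.0, 24.0)
```

For this geometry, H/D ≈ 0.046, P/D ≈ 1.04, and SI ≈ 0.002; being dimensionless, the same values hold whichever consistent length unit is used.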
Procedia PDF Downloads 147
572 An Integrated Approach to Handle Sour Gas Transportation Problems and Pipeline Failures
Authors: Venkata Madhusudana Rao Kapavarapu
Abstract:
The Intermediate Slug Catcher (ISC) facility was built to process nominally 234 MSCFD of export gas from the booster station on a day-to-day basis and to receive liquid slugs up to 1600 m³ (10,000 BBLS) in volume when the incoming 24” gas pipelines are pigged following upsets or the production of non-dew-pointed gas from gathering centers. The maximum slug sizes expected are 812 m³ (5100 BBLS) in winter and 542 m³ (3400 BBLS) in summer after operating for a month or more at 100 MMSCFD of wet gas, comprising 60 MMSCFD of treated gas from the booster station combined with 40 MMSCFD of untreated gas from the gathering center. The water content is approximately 60% but may be higher if the line is not pigged for an extended period, owing to the relative volatility of the condensate compared to water. In addition to its primary function as a slug catcher, the ISC facility will receive pigged liquids from the upstream and downstream segments of the 14” condensate pipeline, returned liquids from the AGRP pigged through the 8” pipeline, and blown-down fluids from the 14” condensate pipeline prior to maintenance. These fluids will be received in the condensate flash vessel or the condensate separator, depending on the specific operation, for the separation of water and condensate and the settlement of solids scraped from the pipelines. Condensate meeting the colour and 200 ppm water specifications will be dispatched to the AGRP through the 14” pipeline, while off-spec material will be returned to BS-171 via the existing 10” condensate pipeline. When not in operation, the existing 24” export gas pipeline and the 10” condensate pipeline will be maintained under export gas pressure, ready for operation. The gas manifold area contains the interconnecting piping and valves needed to align the slug catcher with either of the 24” export gas pipelines from the booster station and to direct the gas to the downstream segment of either of these pipelines.
The manifold enables the slug catcher to be bypassed if it needs maintenance or if through-pigging of the gas pipelines is to be performed. All gas, whether bypassing the slug catcher or returning to the gas pipelines from it, passes through black powder filters to reduce the level of particulates in the stream. These items are connected to the closed drain vessel to drain the collected liquid. Condensate from the booster station is transported to the AGRP through the 14” condensate pipeline. The existing 10” condensate pipeline will be used as a standby and for utility functions, such as returning condensate from the AGRP to the ISC or booster station, or transporting off-spec fluids from the ISC back to the booster station. The manifold contains block valves that allow the two condensate export lines to be segmented at the ISC, thus facilitating independent bi-directional flow in the upstream and downstream segments, which ensures complete pipeline and facility integrity. Pipeline failures will be addressed with the latest technologies, such as remote techno-plug techniques, and repair activities will be carried out as needed. Pipeline integrity will be evaluated with in-line inspection (ILI) pigging to assess pipeline condition.
Keywords: integrity, oil & gas, innovation, new technology
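As a quick sanity check on the paired volume figures quoted in this abstract, barrels can be converted to cubic metres with the standard factor of roughly 0.158987 m³ per oil barrel (the conversion constant is general knowledge, not from the document):

```python
# Standard conversion: one oil barrel is approximately 0.158987 m^3.
BBL_TO_M3 = 0.158987

def bbl_to_m3(bbl):
    """Convert a volume in oil barrels to cubic metres."""
    return bbl * BBL_TO_M3

max_slug_m3 = bbl_to_m3(10_000)    # ~1590 m^3, quoted as 1600 m^3
winter_slug_m3 = bbl_to_m3(5_100)  # ~811 m^3, quoted as 812 m^3
summer_slug_m3 = bbl_to_m3(3_400)  # ~541 m^3, quoted as 542 m^3
```

Each quoted pair agrees with the conversion to within rounding, which suggests the text's m³ and BBLS figures were derived from one another.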
Procedia PDF Downloads 72
571 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation
Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes
Abstract:
Contaminants of emerging concern are substances in wide use, such as pharmaceutical products. These compounds pose a risk to both wildlife and human life, since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can be harmful even at low concentrations (µg/L or ng/L), causing bacterial resistance, endocrine disruption, and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is atenolol (ATL), a β-blocker that is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL, to avoid environmental detriment. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from contaminant to electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions, the peroxydisulfate (S₂O₈²⁻) ion is also generated from the reaction of SO₄•⁻ radical pairs. The radicals, the ion, and the direct contaminant discharge can all break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. Accordingly, the AEO process can be improved in several ways, one of them being the use of a cationic membrane to separate the cathodic (reduction) compartment of the reactor from the anodic (oxidation) compartment. The aim of this study is to investigate the influence of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic compartments of the AEO reactor. The studied reactor was a filter-press cell operated in batch recirculation mode at a flow rate of 60 L/h. 
The anode was Nb/BDD2500 and the cathode stainless steel, both two-dimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with ATL 100 mg/L, using Na₂SO₄ 4 g/L as support electrolyte. In the cathodic compartment, a solution containing Na₂SO₄ 71 g/L was used. The membrane was placed between the two solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis), and mineralization was determined by measuring total organic carbon (TOC) in a TOC-L CPH Shimadzu analyzer. Without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation at the end of the treatment time, respectively. With the membrane, however, degradation at the same iₐₚₚ was 90, 100 and 100%, reaching its maximum in 240, 120 and 40 min, respectively. Mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72% at 240 min, respectively, but with the membrane all tested iₐₚₚ reached 80% mineralization, differing only in the time needed to reach the maximum: 240, 150 and 120 min, respectively. The membrane increased ATL oxidation, probably because it prevents the reduction of oxidant ions (S₂O₈²⁻) at the cathode surface.
Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor
Procedia PDF Downloads 137
570 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. 
This is surprising, given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) cost-based profit margin, (2) increased customer base, (3) platform pricing, and (4) buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker holds to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies were developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronize service operations with organizational economic goals.
Keywords: option pricing theory, real options, service sector, valuation
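The abstract values pricing strategies with a real options binomial lattice. A minimal Cox-Ross-Rubinstein sketch of valuing a single deferrable investment option (all numbers are illustrative assumptions, not the paper's data):

```python
import math

def real_option_value(V0, K, r, sigma, T, steps):
    """Value an American-style option to invest in a project currently worth
    V0, with investment cost K, on a Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor per step
    d = 1 / u                             # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Terminal payoffs: option to invest behaves like a call on project value.
    values = [max(V0 * u**j * d**(steps - j) - K, 0) for j in range(steps + 1)]

    # Roll back through the lattice, allowing early exercise at each node.
    for i in range(steps - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                V0 * u**j * d**(i - j) - K)
            for j in range(i + 1)
        ]
    return values[0]

# Illustrative inputs: project value 100, cost 100, 30% volatility, 2 years.
print(real_option_value(V0=100, K=100, r=0.05, sigma=0.3, T=2, steps=100))
```

Even at-the-money (where a static NPV rule sees zero value), the lattice assigns the flexibility a substantial positive value, which is the core argument the abstract makes for option-based pricing decisions.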
Procedia PDF Downloads 355
569 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery
Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu
Abstract:
Recently, anaerobic membrane bioreactors (AnMBRs), which combine anaerobic biological treatment with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. A conventional AnMBR has several advantages, such as improved effluent quality, compact space usage, lower sludge yield, no aeration requirement, and energy production. However, removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is its biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that uses osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane allows it to reject nitrogen and phosphorus, so their removal can be effectively improved, and an anaerobic bioreactor with an FO membrane (AnOMBR) can retain the concentrated organic matter and nutrients. Phosphorus, moreover, is a non-renewable resource; due to the high rejection property of the FO membrane, a high amount of phosphorus can be recovered from the combination of AnMBR and FO. In this study, the development of a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction for simultaneous phosphorus and clean water recovery from wastewater was evaluated. A laboratory-scale AnOMBR using cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged in a 5.5 L bioreactor at 30-35 °C. The active-layer-facing-feed-stream orientation was used to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS) at a cross-flow velocity of 0.7 cm/s, with magnesium sulphate (MgSO₄) solution as the DS. A microfiltration membrane periodically extracted about 1 L of solution when the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor. 
As the experiment progressed, an average water flux of around 1.6 LMH was achieved. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, whereas ammonia was only partially removed; an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system thus periodically uses MF membrane extraction, with simultaneous pH adjustment, for phosphorus recovery. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery, and hence this new concept could replace conventional AnMBRs in the future.
Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor
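The water flux reported above is in LMH (litres per square metre per hour). A minimal sketch of how such a flux is computed from permeate volume, membrane area (130 cm², as in the study) and time; the permeate volume and duration used below are illustrative assumptions:

```python
# Average permeate water flux in LMH (L·m⁻²·h⁻¹), the unit used in the abstract.
# The membrane area (130 cm²) is from the study; the permeate volume and
# collection time are illustrative assumptions.

def water_flux_lmh(permeate_volume_l, area_cm2, hours):
    """Average water flux = permeate volume / (membrane area × time)."""
    area_m2 = area_cm2 / 10_000.0  # 1 m² = 10,000 cm²
    return permeate_volume_l / (area_m2 * hours)

# e.g. 0.5 L of permeate collected over 24 h through 130 cm²:
print(water_flux_lmh(0.5, 130, 24))  # ≈ 1.6 LMH, matching the reported average
```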
Procedia PDF Downloads 270
568 The Use of Stroke Journey Map in Improving Patients' Perceived Knowledge in Acute Stroke Unit
Authors: C. S. Chen, F. Y. Hui, B. S. Farhana, J. De Leon
Abstract:
Introduction: Stroke can lead to long-term disability, affecting one's quality of life. Providing stroke education to patients and family members is essential to optimize stroke recovery and prevent recurrent stroke. Currently, nurses conduct stroke education by handing out pamphlets and explaining their contents to patients. However, this is not always effective, as nurses have varying levels of knowledge and the depth of content discussed with the patient may not be consistent. With the advancement of information technology, health education is increasingly being disseminated via electronic software, and studies have shown this to benefit patients. Hence, a multi-disciplinary team consisting of doctors, nurses and allied health professionals was formed to create the stroke journey map software to deliver consistent and concise stroke education. Research Objectives: To evaluate the effectiveness of using stroke journey map software in improving patients' perceived knowledge in the acute stroke unit during hospitalization. Methods: Patients admitted to the acute stroke unit were given the stroke journey map software during patient education. The software consists of 31 interactive, brightly coloured slides and 4 videos, based on input provided by the multi-disciplinary team. Participants were assessed with survey questionnaires before and after viewing the software. The questionnaire consists of 10 questions on a 5-point Likert scale, for a total score of 50. The inclusion criteria were patients diagnosed with ischemic stroke who were cognitively alert and oriented. This study was conducted between May 2017 and October 2017; participation was voluntary. Results: A total of 33 participants took part in the study. The results demonstrated that the use of a stroke journey map as a stroke education medium was effective in improving patients' perceived knowledge. 
A comparison of pre- and post-implementation data for the stroke journey map revealed an overall mean increase in patients' perceived knowledge from 24.06 to 40.06. The data were further broken down to evaluate patients' perceived knowledge in 3 domains: (1) understanding of the disease process; (2) management and treatment plans; (3) post-discharge care. The mean score in each domain increased from 10.7 to 16.2, 6.9 to 11.9 and 6.6 to 11.7, respectively. Project Impact: The implementation of the stroke journey map has a positive impact in terms of (1) increasing patients' perceived knowledge, which could contribute to greater empowerment over their health; (2) reducing the need for printed stroke education material, making it environmentally friendly; (3) decreasing the time nurses spend giving education, leaving more time to attend to patients' needs. Conclusion: This study has demonstrated the benefit of using a stroke journey map as a platform for stroke education. Overall, it increased patients' perceived knowledge of their disease process, management and treatment plans, and the discharge process.
Keywords: acute stroke, education, ischemic stroke, knowledge, stroke
Procedia PDF Downloads 161
567 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence
Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello
Abstract:
Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. 
The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The survey, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care
Procedia PDF Downloads 76
566 Secure Optimized Ingress Filtering in Future Internet Communication
Authors: Bander Alzahrani, Mohammed Alreshoodi
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Technology (PURSUIT), has been proposed as a new networking model that aims to replace the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control-plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and light mechanism based on Bloom filter technology to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end-user nodes and used to perform a security test on maliciously injected random packets at the ingress of the path, preventing brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId, which is included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. 
We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
Keywords: forwarding identifier, filling factor, information centric network, topology manager
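In PURSUIT-style forwarding, the FId is a fixed-length Bloom filter formed by OR-ing the link identifiers along the path, and the filling factor ρ is the fraction of set bits. A minimal sketch of forming an FId and applying an ingress ρ ≤ ρm test of the kind described above (the 256-bit length, 5 bits per link ID, and ρm = 0.5 are illustrative assumptions, not the paper's parameters):

```python
import random

FID_BITS = 256   # FId length in bits (illustrative assumption)
RHO_MAX = 0.5    # maximum allowed filling factor ρm (illustrative assumption)

def random_link_id(bits_set=5, seed=None):
    """A link identifier: a sparse Bloom-filter mask with a few bits set."""
    rng = random.Random(seed)
    lid = 0
    for bit in rng.sample(range(FID_BITS), bits_set):
        lid |= 1 << bit
    return lid

def build_fid(link_ids):
    """Form the forwarding identifier by OR-ing the path's link IDs."""
    fid = 0
    for lid in link_ids:
        fid |= lid
    return fid

def filling_factor(fid):
    """ρ = fraction of set bits in the FId."""
    return bin(fid).count("1") / FID_BITS

def ingress_check(fid, rho_max=RHO_MAX):
    """Edge-FW style test: reject packets whose FId is suspiciously dense."""
    return filling_factor(fid) <= rho_max

def link_matches(fid, lid):
    """Standard Bloom-filter forwarding test: forward on links whose ID
    is entirely contained in the FId."""
    return fid & lid == lid

# A legitimate 10-hop path stays sparse and passes the ingress check:
path = [random_link_id(seed=i) for i in range(10)]
legit_fid = build_fid(path)
print(filling_factor(legit_fid), ingress_check(legit_fid))

# An attacker's all-ones FId (crafted to match every link) is rejected:
attack_fid = (1 << FID_BITS) - 1
print(filling_factor(attack_fid), ingress_check(attack_fid))
```

The sketch illustrates the trade-off in the abstract: a legitimate path of moderate length yields ρ well below ρm, while a dense FId, which would match almost every link and enable flooding, necessarily fails the ingress test.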
Procedia PDF Downloads 154
565 Convective Boiling of CO₂/R744 in Macro and Micro-Channels
Authors: Adonis Menezes, J. C. Passos
Abstract:
The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated the development of this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, as the original systems must normally be resized to meet the specific characteristics of R744, which creates the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus capable of storing (m = 10 kg) of saturated CO₂ at (T = -30 °C) in an accumulator tank was used; this fluid was then pumped using a positive-displacement pump with three pistons, and the outlet pressure was controlled and could reach up to (P = 110 bar). This high-pressure saturated fluid passed through a Coriolis-type flow meter, and the mass velocities varied between (G = 20 kg/m².s) and (G = 1000 kg/m².s). After that, the fluid was sent to the first test section, of circular cross-section with diameter (D = 4.57 mm), where the inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct-current source with a maximum heat flux of (q = 100 kW/m²). The second test section used a multi-channel geometry (seven parallel channels), each with a square cross-section of (D = 2 mm); this section also has temperature and pressure control at the inlet and outlet, and heating was likewise provided by a direct-current source, with a maximum heat flux of (q = 20 kW/m²). 
The two-phase fluid was then directed to a parallel-plate heat exchanger to return it to the liquid state, so it could flow back to the accumulator tank, continuing the cycle. The multi-channel test section has a viewing section; a high-speed CMOS camera was used for image acquisition, making it possible to view the flow patterns. The experiments presented in this report were conducted in a rigorous manner, enabling the development of a database on the convective boiling of R744 in macro and micro channels. The analysis prioritized the processes from the onset of convective boiling until wall dryout in a subcritical regime. R744 resurfaces as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and low GWP (Global Warming Potential), among other advantages. The results of the experimental tests were very promising for the use of CO₂ in micro-channels in convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.
Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels
Procedia PDF Downloads 143
564 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources
Authors: Annisa Ulfah Pristya, Andi Setiawan
Abstract:
Electricity is the primary requirement of today's world, including Indonesia, because it is a flexible form of energy to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO₂ in the atmosphere, harming health and contributing to ozone depletion and the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, water, and so forth. However, low efficiency and complicated maintenance mean that most people and industry in Indonesia still use fossil fuels. In this context, the fuel cell was developed. Fuel cells are an electrochemical technology that continuously converts the chemical energy of a fuel and an oxidizer into electrical energy; their efficiency is considerably higher than that of previous sources of electrical energy, at 40-60%. However, fuel cells still have some weaknesses, among them the use of an expensive platinum catalyst, which is limited in supply and not environmentally friendly. An electricity source that is both continuous and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments whose organic content is never exhausted. This accumulating organic matter can serve as an alternative energy source in a continued development of the fuel cell: the Microbial Fuel Cell. A Microbial Fuel Cell (MFC) is a device that uses bacteria to generate electricity from organic and non-organic compounds. An MFC, like a conventional fuel cell, is composed of an anode, a cathode, and an electrolyte. Its main advantages are that the catalyst in a microbial fuel cell is a microorganism and that the working conditions are a neutral solution and low temperatures, making it more environmentally friendly than previous (chemical) fuel cells. However, compared to a chemical fuel cell, an MFC has an efficiency of only 40%. 
Therefore, the authors propose a solution in the form of Nano-MFC (Nano Microbial Fuel Cell): utilization of carbon nanotubes to increase the power efficiency of the microbial fuel cell as an effective, efficient and environmentally friendly alternative energy source. The Nano-MFC has the advantages of being effective, highly efficient, cheap and environmentally friendly. Relevant stakeholders include government ministries, especially the Energy Ministry, research institutes, and industry as production facilitators. The strategic steps to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and cooperation agreements with related parties (MOUs), final research and field application, and then licensing, production of the Nano-MFC on an industrial scale, and publication to the public.
Keywords: CNT, efficiency, electric, microorganisms, sediment
Procedia PDF Downloads 409
563 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview
Authors: Yasmeen Cheema, Parvinder Singh
Abstract:
The evolution of the Islamic State (acronym IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a real concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a 'jihadist', set off strident alarm in law-enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The biggest shock, however, is that the conspiracy was pre-planned and the assailants who carried out the blast were influenced by the ideology perpetrated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a clear indicator of the violent presence of IS in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps and hatred-filled speeches. It is a well-known fact that India is on the radar of IS, as well as on its 'Caliphate Map'. IS uses Twitter, Facebook and other social media platforms constantly. The Islamic State has used enticing videos, graphics, and articles on social media to try to persuade people in India and globally that its jihad is worthy. According to perpetrators of IS arrested in different cases in India, most Indian youths are victims of the daydreams fondly shown by IS: the dreams that the Muslim empire as it was before 1920 can come back with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth are attracted to these euphemistic ideologies. The Islamic State has used social media for disseminating its poisonous ideology, for recruitment, for operational activities and for directing future attacks. 
Through social media, IS inspires its recruits and lone wolves to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested a Vehicle-Borne Improvised Explosive Device (VBIED) as the mode of attack. The Islamic State definitely has the potential to undermine Indian national security and peace if timely steps are not taken. No doubt, IS has used social media as a critical mechanism for the recruitment, planning and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.
Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack
Procedia PDF Downloads 230
562 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health care, environmental monitoring, automotive, avionics, and household appliances. Concurrently, microelectronics and microfabrication technology have provided us with the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration, and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, we fabricated silver interdigital electrodes on polyethylene terephthalate (PET) by mask evaporation deposition. The gaps of the electrodes ranged from 3 to 30 µm. Metal nanoparticles were generated from a magnetron-plasma gas-aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained through real-time monitoring of the conductance. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on the mechanical deformation of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range: a gauge factor of 100 or more was demonstrated, at least one order of magnitude higher than that of conventional metal-foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%. 
These sensors have the potential to be a new generation of strain sensors with performance superior to currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle-array sensing element. Moreover, the mechanical sensor elements can easily be scaled down, which makes them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
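The gauge factor quoted above is defined as GF = (ΔR/R₀)/ε, the fractional resistance change per unit strain. A small sketch (the resistance values are illustrative assumptions) comparing the resistance change implied by GF ≈ 100 against a conventional metal-foil gauge with GF ≈ 2:

```python
def gauge_factor(r0, r_strained, strain):
    """GF = (ΔR / R0) / ε for a resistive strain sensor."""
    return (r_strained - r0) / r0 / strain

def resistance_change(r0, gf, strain):
    """Invert the definition: ΔR predicted for a given gauge factor."""
    return r0 * gf * strain

R0 = 1000.0    # unstrained resistance in ohms (illustrative assumption)
strain = 0.01  # 1% applied strain, well within the stated >3% workable range

# Nanoparticle-array sensor (GF ~ 100, per the abstract) vs metal foil (GF ~ 2):
print(resistance_change(R0, 100, strain))  # ≈ 1000 Ω change: easily measured
print(resistance_change(R0, 2, strain))    # ≈ 20 Ω change

# Round trip: recover GF from the measured strained resistance.
print(gauge_factor(R0, R0 + 1000.0, strain))  # ≈ 100
```

The fifty-fold larger resistance swing at the same strain is what makes the quantum-conductance mechanism so sensitive compared with the geometric effect in metal foils.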
Procedia PDF Downloads 274
561 Analysis of Resistance and Virulence Genes of Gram-Positive Bacteria Detected in Calf Colostrums
Authors: C. Miranda, S. Cunha, R. Soares, M. Maia, G. Igrejas, F. Silva, P. Poeta
Abstract:
The worldwide inappropriate use of antibiotics has increased the emergence of antimicrobial-resistant microorganisms isolated from animals, humans, food, and the environment. To combat this complex and multifaceted problem, it is essential to know the prevalence in livestock animals and the possible routes of transmission among animals and between animals and humans. Enterococci species, in particular E. faecalis and E. faecium, are among the most common nosocomial bacteria, causing infections in animals and humans. Thus, the aim of this study was to characterize resistance and virulence factor genes in two enterococci species isolated from calf colostrums in Portuguese dairy farms. The 55 enterococci isolates (44 E. faecalis and 11 E. faecium) were tested for the presence of resistance genes for the following antibiotics: erythromycin (ermA, ermB, and ermC), tetracycline (tetL, tetM, tetK, and tetO), quinupristin/dalfopristin (vatD and vatE) and vancomycin (vanB). Of these, 25 isolates (15 E. faecalis and 10 E. faecium) have so far been tested for 8 virulence factor genes (esp, ace, gelE, agg, cpd, cylA, cylB, and cylLL). The resistance and virulence genes were detected by PCR, using specific primers and conditions; negative and positive controls were used in all PCR assays. All enterococci isolates showed resistance to erythromycin and tetracycline through the presence of the genes ermB (n=29, 53%), ermC (n=10, 18%), tetL (n=49, 89%), tetM (n=39, 71%) and tetK (n=33, 60%). Only two (4%) E. faecalis isolates showed the presence of the tetO gene. No vancomycin resistance genes were found. The virulence genes detected in both species were cpd (n=17, 68%), agg (n=16, 64%), ace (n=15, 60%), esp (n=13, 52%), gelE (n=13, 52%) and cylLL (n=8, 32%). In general, each isolate showed at least three virulence genes. No virulence genes were found in three E. faecalis isolates, and only E. faecalis isolates carried the virulence genes cylA (n=4, 16%) and cylB (n=6, 24%). 
In conclusion, the colostrum samples consumed by calves demonstrated the presence of antibiotic-resistant enterococci harboring virulence genes. This genotypic characterization is crucial for controlling antibiotic-resistant bacteria through the implementation of strict measures safeguarding public health. Acknowledgements: This work was funded by the R&D Project CAREBIO2 (Comparative assessment of antimicrobial resistance in environmental biofilms through proteomics - towards innovative theragnostic biomarkers), with reference NORTE-01-0145-FEDER-030101 and PTDC/SAU-INF/30101/2017, financed by the European Regional Development Fund (ERDF) through the Northern Regional Operational Program (NORTE 2020) and the Foundation for Science and Technology (FCT). This work was also supported by the Associate Laboratory for Green Chemistry - LAQV, which is financed by national funds from FCT/MCTES (UIDB/50006/2020 and UIDP/50006/2020).
Keywords: antimicrobial resistance, calf, colostrum, enterococci
Procedia PDF Downloads 199
560 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial. For a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, again, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle, and in turn in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water, although at temperatures and pressures relevant for CO₂ condensation it is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is nevertheless a completely new field of research. Therefore, knowledge of several important parameters such as contact angle and drop size distribution must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs and a high-speed camera.
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g. copper, aluminium and stainless steel. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour and for accurate pressure measurements in the vapour. The temperature will be measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature to achieve contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.
Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
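The link between nanoscale roughness and contact angles above 150° can be illustrated with the classical Cassie-Baxter relation (a sketch with assumed values; the 100° flat-surface angle and 10% solid fraction are illustrative, not measurements from this setup):

```python
import math

def cassie_baxter_angle(theta_flat_deg, solid_fraction):
    """Apparent contact angle on a rough, air-trapping surface:
    cos(theta*) = f * (cos(theta_flat) + 1) - 1, with f the wetted solid fraction."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_star))

# A modest flat-surface angle of 100 deg exceeds the superlyophobic limit
# (150 deg) when the nanostructure leaves only 10% solid-liquid contact.
theta_star = cassie_baxter_angle(100.0, 0.10)
```

This is why controlled, periodic nanoscale roughness matters: it fixes the solid fraction that the condensing liquid actually touches.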
Procedia PDF Downloads 278
559 A Descriptive Study on Comparison of Maternal and Perinatal Outcome of Twin Pregnancies Conceived Spontaneously and by Assisted Conception Methods
Authors: Aishvarya Gupta, Keerthana Anand, Sasirekha Rengaraj, Latha Chathurvedula
Abstract:
Introduction: Advances in assisted reproductive technology and an increase in the proportion of infertile couples have both contributed to the steep increase in the incidence of twin pregnancies in past decades. Maternal and perinatal complications are higher in twins than in singleton pregnancies. Studies comparing the maternal and perinatal outcomes of ART twin pregnancies versus spontaneously conceived twin pregnancies report heterogeneous results, making it unclear whether the complications are due to twin gestation per se or to the assisted reproductive techniques. The present study aims to compare maternal and perinatal outcomes in twin pregnancies conceived spontaneously and after assisted conception, so that targeted steps can be undertaken to improve the maternal and perinatal outcome of twins. Objectives: To study perinatal and maternal outcomes in twin pregnancies conceived spontaneously as well as with assisted methods, and to compare the outcomes between the two groups. Setting: Women delivering at JIPMER (a tertiary care institute), Pondicherry. Population: 380 women with twin pregnancies who delivered in JIPMER between June 2015 and March 2017 were included in the study. Methods: The study population was divided into two cohorts: one conceived by spontaneous conception and the other by assisted reproductive methods. Association of various maternal and perinatal outcomes with the method of conception was assessed using the chi-square test or Student's t test as appropriate. Multiple logistic regression analysis was done to assess the independent association of assisted conception with maternal outcomes after adjusting for age, parity and BMI, and with perinatal outcomes after adjusting for age, parity, BMI, chorionicity, gestational age at delivery and presence of hypertension or gestational diabetes in the mother.
A p value of < 0.05 was considered significant. Results: There was an increased proportion of women with GDM (21% vs. 4.29%) and premature rupture of membranes (35% vs. 22.85%) in the assisted conception group, and more anemic women in the spontaneous group (71.27% vs. 55.1%). After adjustment, assisted conception per se increased the incidence of GDM among twin gestations (OR 3.39, 95% CI 1.34 – 8.61) but did not influence any of the other maternal outcomes. Among the perinatal outcomes, assisted conception per se increased the risk of having very preterm (<32 weeks) neonates (OR 3.013, 95% CI 1.432 – 6.337). The mean birth weight did not differ significantly between the two groups (p = 0.429). Though there was a higher proportion of babies admitted to the NICU in the assisted conception group (48.48% vs. 36.43%), assisted conception per se did not increase the risk of admission to the NICU (OR 1.23, 95% CI 0.76 – 1.98). There was no significant difference in perinatal mortality rates between the two groups (p = 0.829). Conclusion: Assisted conception per se increases the risk of developing GDM in women with twin gestation and increases the risk of delivering very preterm babies. Hence, measures should be taken to ensure appropriate screening for GDM and suitable neonatal care in such pregnancies.
Keywords: assisted conception, maternal outcomes, perinatal outcomes, twin gestation
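The odds ratios above come from multiple logistic regression; as a minimal illustration of the crude (unadjusted) form of the measure, an odds ratio with a Wald confidence interval can be computed from a 2x2 table (the counts below are hypothetical, not the study's data):

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = outcome/no-outcome counts in the exposed group,
    c/d = the same in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical counts: 20/80 with GDM in assisted conception, 10/270 in spontaneous.
or_, lo, hi = odds_ratio_wald(20, 80, 10, 270)
```

A CI excluding 1 corresponds to a significant association, which is how intervals such as 1.34 – 8.61 are read in the abstract.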
Procedia PDF Downloads 210
558 Cycle-Oriented Building Components and Constructions Made from Paper Materials
Authors: Rebecca Bach, Evgenia Kanli, Nihat Kiziltoprak, Linda Hildebrand, Ulrich Knaack, Jens Schneider
Abstract:
The building industry has a high demand for resources and, at the same time, is responsible for a significant amount of the waste created worldwide. Today's building components need to contribute to the protection of natural resources without creating waste; the degree to which a product is cycle-oriented is defined in the product development phase. Paper-based materials show an advantage due to their renewable origin and their ability to incorporate different functions. Besides ecological aspects like renewable origin and recyclability, the main advantages of paper materials are their lightweight yet stiff structure, optimized production processes and good insulation values. The main deficits from a building technology perspective are the material's vulnerability to humidity and water as well as its flammability. At the material level, these problems can be solved by coatings or through material modification. At the construction level, intelligent setup and layering of a building component can mitigate or even solve these issues. The target of the present work is to provide an overview of building components and construction typologies made mainly from paper materials. The research is structured in four parts: (1) functions and requirements, (2) preselection of paper-based materials, (3) development of building components and (4) evaluation. As part of the research methodology, the needs of the building sector are first analyzed with the aim of defining the main areas of application and, consequently, the requirements. Various paper materials are tested in order to identify to what extent the requirements are satisfied and to determine potential optimizations or modifications, also in combination with other construction materials. By making use of the material's potentials and solving its deficits at the material and construction levels, building components and construction typologies are developed.
The evaluation and the calculation of the structural mechanics and structural principles show that different construction typologies can be derived. Profiles like paper tubes are best suited for skeleton constructions. Massive structures, on the other hand, can be formed by plate-shaped elements like solid board or honeycomb panels. For insulation purposes, corrugated cardboard or cellulose flakes have the best properties, while layered solid board can be applied to prevent interstitial condensation. By enhancing these properties through material combinations, for instance with mineral coatings, functional constructions made mainly of paper materials were developed. In summary, paper materials offer a huge variety of possible applications in the building sector. Through these studies, a general base of knowledge about how to build with paper was developed, which is to be reinforced by further research.
Keywords: construction typologies, cycle-oriented construction, innovative building material, paper materials, renewable resources
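The claim of good insulation values can be made concrete with a standard series-resistance U-value estimate (a sketch; the layer build-up and the conductivities are assumed for illustration, not taken from the tests described above):

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """U-value (W/m2K) of a layered component: U = 1 / (R_si + sum(d/lambda) + R_se),
    with d the layer thickness (m), lambda its conductivity (W/mK),
    and R_si/R_se the standard internal/external surface resistances."""
    return 1.0 / (r_si + r_se + sum(d / lam for d, lam in layers))

# Assumed build-up: 20 mm solid board skins around a 100 mm corrugated cardboard
# core, with illustrative conductivities of 0.09 and 0.06 W/mK respectively.
layers = [(0.020, 0.09), (0.100, 0.06), (0.020, 0.09)]
u = u_value(layers)
```

With these assumed values the component lands well below 0.5 W/m²K, i.e. in the range of a usable insulating wall build-up.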
Procedia PDF Downloads 279
557 Energy Strategies for Long-Term Development in Kenya
Authors: Joseph Ndegwa
Abstract:
Changes are required if energy systems are to foster long-term growth. The main problems are increasing access to inexpensive, dependable, and sufficient energy supplies while addressing environmental implications at all levels. Policies can help to promote sustainable development by: providing adequate and inexpensive energy sources to underserved regions, such as liquid and gaseous fuels for cooking and electricity for household and commercial use; promoting energy efficiency; increasing the utilization of new renewables; and spreading and implementing additional innovative energy technologies. Markets can achieve many of these goals with the correct policies, pricing, and regulations. However, where markets do not work or fail to preserve key public benefits, tailored government policies, programs, and regulations can achieve policy goals. The main strategies for promoting sustainable energy systems are simple, but they need a broader recognition of the difficulties we confront, as well as a firmer commitment to specific measures. These measures include: making markets operate better by minimizing pricing distortions, boosting competition, and removing obstacles to energy efficiency; complementing the reform of the energy industry with policies that promote sustainable energy; increasing investments in renewable energy; increasing the rate of technical innovation at each level of the energy innovation chain; fostering technical leadership in developing nations by transferring technology and enhancing institutional and human capabilities; and promoting greater international collaboration. Governments, international organizations, multilateral financial institutions, and civil society, including local communities, business and industry, non-governmental organizations (NGOs), and consumers, all have critical enabling roles to play in the challenge of sustainable energy.
Partnerships based on integrated and cooperative approaches, drawing on real-world experience, will be necessary. Setting the required framework conditions and ensuring that public institutions collaborate effectively and efficiently with the rest of society are common themes across all industries and geographical areas in achieving sustainable development. Energy is a powerful tool for sustainable development, but significant policy adjustments within the larger enabling framework will be necessary to refocus its influence toward that aim. If such changes do not take place over the next several decades, and are not started soon enough, many of the options currently accessible will be lost, or the price of their ultimate realization (where viable) will grow significantly. In either case, the capacity of future generations to satisfy their demands would be seriously impaired.
Keywords: sustainable development, reliable, price, policy
Procedia PDF Downloads 65
556 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull
Authors: Elsher Lawson-Boyd
Abstract:
In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs relating to gene expression are being challenged by advances in postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century, in which genes were considered as having almost god-like power to create life, epigenetics insists on the role of environmental 'signals' or 'exposures' in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author's knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both over a human lifetime and across generations. Using qualitative interviews and non-participant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children's Research Institute (MCRI).
Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to engage with and de-black-box ideas emerging from the postgenomic sciences, as these ideas may have significant effects for vulnerable populations, not only in Australia but also in countries of the Global South.
Keywords: genetics, mental illness, neuroepigenetics, trauma
Procedia PDF Downloads 125
555 Competitive Effects of Differential Voting Rights and Promoter Control in Indian Start-Ups
Authors: Prateek Bhattacharya
Abstract:
The definition of 'control' in India is a rapidly evolving concept, owing to the varying rights attached to different securities. Shares with differential voting rights (DVRs) provide the holder with voting rights that differ from those of ordinary equity shareholders of the company. Such DVRs can carry either superior or inferior voting rights, where DVRs with superior voting rights amount to providing the holder with golden shares in the company. While DVRs are not a novel concept in India, having been recognized since 2000, they were placed on the back burner in 2010 when the Securities and Exchange Board of India (SEBI) restricted the issuance of DVRs with superior voting rights. In June 2019, the SEBI rekindled the ebbing fire of DVRs, keeping in mind the fast-paced nature of the global economy, the government's faith that India's 'new age technology companies' (i.e., Start-Ups) will lead the charge in achieving its goal of India becoming a $5 trillion economy by 2024, and the recognition that the promoters of such Start-Ups seek to raise capital without losing control over their companies. DVRs with superior voting rights guarantee promoters up to 74% shareholding in Start-Ups for a period of 5 years, meaning that the holder of such DVRs can exercise sole control and material influence over the company for that period. This manner of control has the potential of causing both pro-competitive and anti-competitive effects in the markets where these companies operate.
On the one hand, DVRs will allow Start-Up promoters/founders to retain control of their companies and protect their business interests from outside elements such as private/public investors. In a scenario where such investors have multiple investments in firms engaged in associated lines of business (whether at a horizontal or vertical level) and seek to influence these firms to enter into potentially anti-competitive arrangements with one another, DVRs will enable the promoters to thwart such attempts. On the other hand, promoters/founders who themselves have multiple investments in Start-Ups in associated lines of business run the risk of influencing these associated Start-Ups to engage in potentially anti-competitive arrangements in the name of profit maximisation. This paper shall be divided into three parts: Part I shall deal with the concept of 'control', as deliberated upon and decided by the SEBI and the Competition Commission of India (CCI) under both company/securities law and competition law; Part II shall review this definition of 'control' through the lens of DVRs; and Part III shall discuss the aforementioned potential pro-competitive and anti-competitive effects caused by DVRs by examining the current Indian Start-Up scenario. The paper shall conclude by providing suggestions for the CCI to incorporate a clearer and more progressive concept of 'control'.
Keywords: competition law, competitive effects, control, differential voting rights, DVRs, investor shareholding, merger control, start-ups
Procedia PDF Downloads 123
554 Modeling Sorption and Permeation in the Separation of Benzene/Cyclohexane Mixtures through Styrene-Butadiene Rubber Crosslinked Membranes
Authors: Hassiba Benguergoura, Kamal Chanane, Sâad Moulay
Abstract:
Pervaporation (PV), a membrane-based separation technology, has gained much attention because of its energy-saving capability and low cost, especially for the separation of azeotropic or close-boiling liquid mixtures. There are two crucial issues for the industrial application of the pervaporation process. The first is developing membrane materials and tailoring membrane structure to obtain high pervaporation performance. The second is modeling pervaporation transport to gain a better understanding of the above-mentioned structure–pervaporation relationship. Many models have been proposed to predict the mass transfer process; among them, the solution-diffusion model is the most widely used in describing pervaporation transport, including the preferential sorption, diffusion and evaporation steps. For modeling pervaporation transport, the permeation flux, which depends on the solubility and diffusivity of the components in the membrane, should be obtained first. Traditionally, the solubility is calculated according to the Flory–Huggins theory. Separation of the benzene (Bz)/cyclohexane (Cx) mixture is industrially significant, and numerous papers have focused on the Bz/Cx system to assess the PV properties of membrane materials. Membranes with both high permeability and high selectivity are desirable for practical application, and several new polymers have been prepared to achieve both. Dense styrene-butadiene rubber (SBR) membranes cross-linked by chloromethylation were used in the separation of benzene/cyclohexane mixtures. The impact of the chloromethylation reaction as a new method of cross-linking SBR on the pervaporation performance has been reported. In contrast to vulcanization with sulfur, the cross-linking takes place on the styrene units of the polymeric chains via a methylene bridge. The partial pervaporative (PV) fluxes of benzene/cyclohexane mixtures in SBR were predicted using Fick's first law.
By integrating Fick's law over the benzene concentration, the predicted partial fluxes and the PV separation factor agreed well with the experimental data. The effects of feed concentration and operating temperature on the permeation flux predicted by the proposed model were investigated. The predicted permeation fluxes are in good agreement with the experimental data at lower benzene concentrations in the feed, but at higher benzene concentrations the model overestimates the permeation flux. Both the predicted and experimental permeation fluxes increase with increasing operating temperature. Solvent sorption levels for benzene/cyclohexane mixtures in an SBR membrane were determined experimentally. The results showed that the sorption levels were strongly affected by the feed composition. The Flory–Huggins equation generates a higher R-squared coefficient for the sorption selectivity.
Keywords: benzene, cyclohexane, pervaporation, permeation, sorption modeling, SBR
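The Flory–Huggins relation behind the sorption modelling can be sketched in its simplified single-solvent form (the interaction parameter chi = 0.4 below is an assumed, illustrative value, not one fitted to the SBR data):

```python
import math

def flory_huggins_activity(phi_solvent, chi):
    """Solvent activity in a polymer per the single-solvent Flory-Huggins model:
    ln a = ln(phi) + (1 - phi) + chi * (1 - phi)**2,
    with phi the solvent volume fraction in the membrane."""
    phi_polymer = 1.0 - phi_solvent
    return math.exp(math.log(phi_solvent) + phi_polymer + chi * phi_polymer ** 2)

# Illustrative: a solvent at 20% volume fraction in the membrane with chi = 0.4.
activity = flory_huggins_activity(0.20, 0.4)
```

Equating this activity with that of the feed liquid for each component is how equilibrium sorption, and hence the sorption selectivity, is obtained in the solution-diffusion framework.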
Procedia PDF Downloads 326
553 Graphics Processing Unit-Based Parallel Processing for Inverse Computation of Full-Field Material Properties Based on Quantitative Laser Ultrasound Visualization
Authors: Sheng-Po Tseng, Che-Hua Yang
Abstract:
Motivation and Objective: Ultrasonic guided waves have become an important tool for the nondestructive evaluation of structures and components. Guided waves are used for the purpose of identifying defects or evaluating material properties in a nondestructive way. When guided waves are applied to evaluate material properties, the properties are not obtained directly; instead, preliminary signals such as time-domain signals or frequency-domain spectra are first acquired. With the measured ultrasound data, inversion calculations can then be employed to obtain the desired mechanical properties. Methods: This research develops a high-speed inversion calculation technique for obtaining full-field mechanical properties from the quantitative laser ultrasound visualization system (QLUVS). The QLUVS employs a mirror-controlled scanning pulsed laser to generate guided acoustic waves traveling in a two-dimensional target. Guided waves are detected with a piezoelectric transducer at a fixed location. With gyro-scanning of the generation source, the QLUVS has the advantage of fast, full-field, and quantitative inspection. Results and Discussions: This research introduces two important tools to improve computation efficiency. Firstly, a graphics processing unit (GPU) with a large number of cores is introduced. Furthermore, combining the CPU and GPU cores, a parallel processing scheme is developed for the inversion of full-field mechanical properties based on the QLUVS data. The newly developed inversion scheme is applied to investigate the computation efficiency for single-layered and double-layered plate-like samples. The computation is shown to be 80 times faster than the unparallelized scheme. Conclusions: This research demonstrates a high-speed inversion technique for the characterization of full-field material properties based on the quantitative laser ultrasound visualization system.
Significant computation efficiency is demonstrated, although the limit has not yet been reached; further improvement can be achieved by refining the parallel computation. Utilizing this full-field mechanical property inspection technology, full-field mechanical properties can be obtained through nondestructive, high-speed and high-precision measurements, with both qualitative and quantitative results. The developed high-speed computation scheme is ready for applications where full-field mechanical properties are needed in a nondestructive and near real-time way.
Keywords: guided waves, material characterization, nondestructive evaluation, parallel processing
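The embarrassingly parallel character of the per-position inversion can be sketched as below (a toy stand-in: a thread pool maps a trivial one-point "inversion" over scan positions, whereas the actual scheme distributes dispersion-curve fitting over GPU cores):

```python
from concurrent.futures import ThreadPoolExecutor

def invert_point(sample):
    """Toy per-position inversion: estimate a modulus-like quantity from a
    measured wave speed, standing in for a dispersion-curve fit at one pixel."""
    speed, density = sample
    return density * speed ** 2  # e.g. E ~ rho * c^2 as a crude bulk estimate

# Each QLUVS scan position is an independent task, so the full-field map
# parallelizes trivially over workers (threads here, GPU cores in practice).
samples = [(3000.0 + 10 * i, 2700.0) for i in range(16)]
with ThreadPoolExecutor(max_workers=4) as pool:
    field = list(pool.map(invert_point, samples))
```

Because no task depends on another, the speed-up scales with the number of workers until memory bandwidth or scheduling overhead dominates, which is why a many-core GPU pays off for full-field maps.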
Procedia PDF Downloads 202
552 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is still a need to develop methods that improve the accuracy and turnaround time of bacteriology laboratories. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with the reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were termed unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising.
Conclusion: A simple strategy was introduced that starts by generating primers; the primers were then used to screen bacterial genomes for matches. Primers that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that matches multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. Moreover, the generated primers can be used to build a bank of easily accessible primers for bacterial identification.
Keywords: bacterial chromosome, bacterial identification, sequence, primer generation
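The primer-generation and scanning steps can be sketched as follows (a minimal illustration: the codon table is fragmentary, the peptide is shortened to 4 residues for brevity rather than the 6 residues/18 bases of the method, and matching is exact-string only):

```python
from itertools import product

# Fragment of the standard codon table (assumption: only residues used below).
CODONS = {"M": ["ATG"], "W": ["TGG"], "F": ["TTT", "TTC"], "K": ["AAA", "AAG"]}

def reverse_translate(peptide):
    """Enumerate every DNA primer encoding the peptide (one codon choice per residue)."""
    return ["".join(combo) for combo in product(*(CODONS[aa] for aa in peptide))]

def hit_count(primer, chromosome):
    """Number of exact (non-overlapping) matches; exactly 1 marks a unique primer."""
    return chromosome.count(primer)

primers = reverse_translate("MWFK")          # 1*1*2*2 = 4 candidate 12-mers
toy_chromosome = "GGATGTGGTTTAAAGG"          # hypothetical chromosome fragment
unique = [p for p in primers if hit_count(p, toy_chromosome) == 1]
```

Starting from 6 random amino acids bounds the search space (at most a few thousand 18-mers per peptide) while the chromosome scan then discards every primer that is absent or hits more than one site.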
Procedia PDF Downloads 193