Search results for: conventional power generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11678

8918 Moisture Impact on the Utilization of Recycled Concrete Fine Aggregate to Produce Mortar

Authors: Rahimullah Habibzai

Abstract:

One way to achieve a sustainable concrete industry, reduce exploitation of natural aggregate resources, and mitigate the environmental burden of waste concrete is to use recycled concrete aggregate. The utilization of low-quality fine aggregate, including recycled concrete sand produced by crushing waste concrete, has recently become a popular and challenging research topic. This study provides a scientific basis for promoting the application of concrete waste as fine aggregate in producing concrete through a comprehensive laboratory program. The mechanical properties of mortar made from recycled concrete fine aggregate (RCFA) produced by pulse-power crushing of concrete waste are satisfactory, so the material is capable of being utilized in the construction industry. Better treatment of RCFA particles to enhance their quality would make it possible to use them in producing structural concrete. Pulse-power discharge technology is proposed in this research to produce RCFA; compared to other recycling methods, it is a more effective and promising technique for generating medium- to high-quality recycled concrete fine aggregate with a reduced amount of powder, a lower environmental burden, and greater space savings.

Keywords: construction and demolition waste, recycled concrete fine aggregate, pulse power discharge

Procedia PDF Downloads 147
8917 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, applying SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically resort back to conventional risk/vulnerability assessment. These challenges include the complex nature of social vulnerability concepts, which influences their applicability; complications in identifying and measuring social vulnerability determinants; the transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that impact the regime of vulnerability (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches to SVA cannot appropriately address these problems; hence, the outcomes could be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand’s Resilience to Nature’s Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development in dealing with changing coastal systems and accounts for the complex dynamics of New Zealand’s coastal systems (socio-economic, environmental, and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable changes, to create mechanisms that enhance social and community resilience, and to fit New Zealand’s multi-layer governance system. This paper aims to analyse the conventional approaches and methodologies in SVA and to offer recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice.
The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods used to achieve the research objectives include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist, and deterministic mindset in SVA processes, with its focus on the impacts of rapid stressors (i.e. tsunamis, floods), shows deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper focuses on the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it is argued that a shift of paradigm from ‘specific resilience’, which focuses on adaptive capacity associated with the notion of ‘bouncing back’, to ‘general resilience’, which accounts for system transformability, regime shift, and ‘bouncing forward’, can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 92
8916 Blood Oxygen Saturation Measurement System Using Broad-Band Light Source with LabVIEW Program

Authors: Myoung Ah Kim, Dong Ho Sin, Chul Gyu Song

Abstract:

Blood oxygen saturation measurement is a well-established, noninvasive photoplethysmographic method to monitor vital signs. Conventional blood oxygen saturation measurement with a two-LED light source suffers from ambiguity in the measurement principle, and the results are strongly influenced by heat and motion artifacts. To solve these problems, a high-accuracy blood oxygen saturation measurement method using a broadband light source, whose algorithm is easy to understand, has been proposed. Measurement based on a broadband light source has the advantages of a simple test setup and easy interpretation. The broadband-light-source-based measurement program proposed in this paper combines LabVIEW and MATLAB. The wavelength range of 450-750 nm is used, and blood oxygen saturation is measured from the light absorption of oxyhemoglobin and deoxyhemoglobin. To prevent hand movement from affecting the measurement, the probe was fixed to a motor stage so that the sample and probe were kept at a constant interval. Experimental results show that the proposed method noticeably increases accuracy and saves time compared with conventional methods.
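The broadband principle described above can be illustrated numerically: the measured absorbance spectrum is unmixed by least squares into oxy- and deoxyhemoglobin contributions, and saturation is the oxy fraction. The sketch below uses invented extinction-coefficient curves and synthetic data (not the paper's calibration), in Python rather than LabVIEW/MATLAB:

```python
import numpy as np

# Hypothetical extinction-coefficient curves over 450-750 nm
# (illustrative shapes only, not real tabulated values).
wl = np.linspace(450, 750, 61)
eps_hbo2 = 1.0 + 0.5 * np.sin(wl / 40.0)   # oxyhemoglobin
eps_hb = 1.2 + 0.5 * np.cos(wl / 55.0)     # deoxyhemoglobin

def spo2_from_spectrum(absorbance, eps_oxy, eps_deoxy):
    """Least-squares unmixing: A(wl) ~ c_oxy*eps_oxy + c_deoxy*eps_deoxy."""
    E = np.column_stack([eps_oxy, eps_deoxy])
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    c_oxy, c_deoxy = c
    return c_oxy / (c_oxy + c_deoxy)

# Synthetic spectrum for a known 97% saturation, with a little noise.
true_s = 0.97
spectrum = true_s * eps_hbo2 + (1 - true_s) * eps_hb
spectrum += 0.001 * np.random.default_rng(0).standard_normal(wl.size)

print(round(spo2_from_spectrum(spectrum, eps_hbo2, eps_hb), 3))  # close to 0.97
```

Because the whole spectrum is fitted at once, the estimate is less sensitive to the wavelength ambiguity of a two-LED ratio.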

Keywords: oxygen saturation, broad-band light source, CCD, light reflectance theory

Procedia PDF Downloads 451
8915 Prevalence and Correlates of Complementary and Alternative Medicine Use among Diabetic Patients in Lebanon: A Cross-Sectional Study

Authors: Farah Naja, Mohamad Alameddine

Abstract:

Background: The difficulty of compliance with therapeutic and lifestyle management of type 2 diabetes mellitus (T2DM) encourages patients to use complementary and alternative medicine (CAM) therapies. Little is known about the prevalence and mode of CAM use among diabetics in the Eastern Mediterranean Region in general and Lebanon in particular. Objective: To assess the prevalence and modes of CAM use among patients with T2DM residing in Beirut, Lebanon. Methods: A cross-sectional survey of T2DM patients was conducted on patients recruited from two major referral centers: a public hospital and a private academic medical center in Beirut. In a face-to-face interview, participants completed a survey questionnaire comprising three sections: socio-demographics, diabetes characteristics, and types and modes of CAM use. Descriptive statistics and univariate and multivariate logistic regression analyses were utilized to assess the prevalence, mode, and correlates of CAM use in the study population. The main outcome in this study (CAM use) was defined as using CAM at least once since diagnosis with T2DM. Results: A total of 333 T2DM patients completed the survey (response rate: 94.6%). The prevalence of CAM use in the study population was 38%, 95% CI (33.1-43.5). After adjustment, CAM use was significantly associated with a “married” status, a longer duration of T2DM, the presence of disease complications, and a positive family history of the disease. Folk foods and herbs were the most commonly used CAM, followed by natural health products. One in five patients used CAM as an alternative to conventional treatment. Only 7% of CAM users disclosed their CAM use to their treating physician. Health care practitioners were the least cited (7%) as influencing the choice of CAM among users. Conclusion: The use of CAM therapies among T2DM patients in Lebanon is prevalent.
Decision makers and care providers must fully understand the potential risks and benefits of CAM therapies to appropriately advise their patients. Attention must be dedicated to educating T2DM patients on the importance of disclosing CAM use to their physicians especially patients with a family history of diabetes, and those using conventional therapy for a long time.

Keywords: nutritional supplements, type 2 diabetes mellitus, complementary and alternative medicine (CAM), conventional therapy

Procedia PDF Downloads 346
8914 An Eigen-Approach for Estimating the Direction-of-Arrival of an Unknown Number of Signals

Authors: Dia I. Abu-Al-Nadi, M. J. Mismar, T. H. Ismail

Abstract:

A technique for estimating the direction-of-arrival (DOA) of unknown number of source signals is presented using the eigen-approach. The eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix yields the minimum output power of the array. Also, the array polynomial with this eigenvector possesses roots on the unit circle. Therefore, the pseudo-spectrum is found by perturbing the phases of the roots one by one and calculating the corresponding array output power. The results indicate that the DOAs and the number of source signals are estimated accurately in the presence of a wide range of input noise levels.
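A simplified numerical sketch of the eigen-approach: instead of the paper's root-phase perturbation, the pseudo-spectrum is evaluated directly from the minimum-eigenvalue eigenvector on an angle grid (a Pisarenko-style shortcut; the array geometry and scenario below are assumptions, not the paper's setup):

```python
import numpy as np

def steering(m, theta_deg, d=0.5):
    """Steering vector of an m-element uniform linear array, spacing d in wavelengths."""
    k = np.arange(m)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(1)
m, n_snap = 8, 2000
angles_true = [-20.0, 30.0]

# Two uncorrelated sources plus white noise.
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
A = np.column_stack([steering(m, a) for a in angles_true])
N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
X = A @ S + N

R = X @ X.conj().T / n_snap          # sample autocorrelation matrix
_, V = np.linalg.eigh(R)             # eigenvalues in ascending order
v_min = V[:, 0]                      # eigenvector of the minimum eigenvalue

# Pseudo-spectrum: large where the scan steering vector is orthogonal to v_min,
# i.e. near the source directions (-20 and +30 degrees here).
grid = np.arange(-90.0, 90.0, 0.1)
p = np.array([1.0 / abs(steering(m, g).conj() @ v_min) ** 2 for g in grid])
print(grid[np.argmax(p)])
```

The array polynomial of `v_min` has roots on the unit circle at the source directions, which is why the pseudo-spectrum peaks there.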

Keywords: array signal processing, direction-of-arrival, antenna arrays, eigenvalues, eigenvectors, Lagrange multiplier

Procedia PDF Downloads 330
8913 A Large Ion Collider Experiment (ALICE) Diffractive Detector Control System for RUN-II at the Large Hadron Collider

Authors: J. C. Cabanillas-Noris, M. I. Martínez-Hernández, I. León-Monzón

Abstract:

The selection of diffractive events in the ALICE experiment during the first data-taking period (RUN-I) of the Large Hadron Collider (LHC) was limited by the range over which rapidity gaps occur. Better measurements would be possible by expanding the range in which the production of particles can be detected. For this purpose, the ALICE Diffractive (AD0) detector has been installed and commissioned for the second phase (RUN-II). Any new detector should be able to take data synchronously with all other detectors and be operated through the ALICE central systems. One of the key elements that must be developed for the AD0 detector is the Detector Control System (DCS). The DCS must be designed to operate this detector safely and correctly. Furthermore, the DCS must provide optimum operating conditions for the acquisition and storage of physics data and ensure these are of the highest quality. The operation of AD0 implies the configuration of about 200 parameters, from electronics settings and power supply levels to the archiving of operating-conditions data and the generation of safety alerts. It also includes the automation of procedures to get the AD0 detector ready to take data under the appropriate conditions for the different run types in ALICE. The performance of the AD0 detector depends on a number of parameters, such as the nominal voltages for each photomultiplier tube (PMT), the threshold levels to accept or reject incoming pulses, the definition of triggers, etc. All these parameters define the efficiency of AD0 and have to be monitored and controlled through the AD0 DCS. Finally, the AD0 DCS provides the operator with multiple interfaces to execute these tasks, realized as operating panels and scripts running in the background. These features are implemented on a SCADA software platform as a distributed control system integrated into the global control system of the ALICE experiment.

Keywords: AD0, ALICE, DCS, LHC

Procedia PDF Downloads 301
8912 Principal Component Analysis Applied to Electric Power Systems: A Practical Guide for Algorithms

Authors: John Morales, Eduardo Orduña

Abstract:

Currently, Principal Component Analysis (PCA) theory is being used to develop algorithms for Electric Power Systems (EPS). In this context, this paper presents a practical tutorial on the technique, detailing its concept and the on-line and off-line mathematical foundations that are necessary and desirable in EPS algorithms. Features of the eigenvectors that are very useful for real-time processing are explained, showing how these parameters can be selected through direct optimization. In addition, to show the application of PCA to off-line and on-line signals, a step-by-step example using Matlab commands is presented. Finally, a list of different approaches using PCA is given, along with some works that could be analyzed using this tutorial.
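A minimal sketch of the PCA core such a tutorial builds on, via eigendecomposition of the covariance matrix (Python here in place of the paper's Matlab commands; the toy data is invented):

```python
import numpy as np

def pca(X, k):
    """PCA via eigendecomposition of the covariance matrix.
    X: (n_samples, n_features). Returns (scores, components, eigenvalues)."""
    Xc = X - X.mean(axis=0)                  # center the data
    C = np.cov(Xc, rowvar=False)             # feature covariance matrix
    w, V = np.linalg.eigh(C)                 # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]          # keep the k largest
    comps = V[:, order]
    return Xc @ comps, comps, w[order]

# Toy "measurement" matrix: three correlated channels, effectively rank one plus noise,
# standing in for redundant power-system signals.
rng = np.random.default_rng(0)
t = rng.standard_normal(500)
X = np.column_stack([t, 2 * t, -t]) + 0.01 * rng.standard_normal((500, 3))
scores, comps, ev = pca(X, 2)
print(ev[0] / ev.sum())  # the first component captures nearly all the variance
```

The dominant eigenvectors give the compact basis that makes real-time processing of correlated signals practical.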

Keywords: practical guide, on-line, off-line, algorithms, faults

Procedia PDF Downloads 559
8911 Synthesis of Pyrimidine-Based Polymers Consist of 2-{4-[4,6-Bis-(4-Hexyl-Thiophen-2-yl)-Pyrimidin-2-yl]-Phenyl}-Thiazolo[5,4-B]Pyridine with Deep HOMO Level for Photovoltaics

Authors: Hyehyeon Lee, Jiwon Yu, Juwon Kim, Raquel Kristina Leoni Tumiar, Taewon Kim, Juae Kim, Hongsuk Suh

Abstract:

Photovoltaics, which have many advantages in cost, easy processing, and light weight, have attracted attention. We synthesized pyrimidine-based conjugated polymers containing 2-{4-[4,6-bis-(4-hexyl-thiophen-2-yl)-pyrimidin-2-yl]-phenyl}-thiazolo[5,4-b]pyridine (pPTP), which has strong electron-withdrawing ability, and introduced them into polymer solar cells (PSCs). By Stille polymerization, we prepared the conjugated polymers pPTPBDT-12, pPTPBDT-EH, pPTPBDTT-EH, and pPTPTTI. The HOMO energy levels of the four polymers (pPTPBDT-12, pPTPBDT-EH, pPTPBDTT-EH, and pPTPTTI) were at -5.61 to -5.89 eV, and their LUMO (lowest unoccupied molecular orbital) energy levels were at -3.95 to -4.09 eV. The device including pPTPBDT-12 and PC71BM (1:2) showed a V_oc of 0.67 V, a J_sc of 1.33 mA/cm², and a fill factor (FF) of 0.25, giving a power conversion efficiency (PCE) of 0.23%. The device including pPTPBDT-EH and PC71BM (1:2) showed a V_oc of 0.72 V, a J_sc of 2.56 mA/cm², and an FF of 0.30, giving a PCE of 0.56%. The device including pPTPBDTT-EH and PC71BM (1:2) showed a V_oc of 0.72 V, a J_sc of 3.61 mA/cm², and an FF of 0.29, giving a PCE of 0.74%. The device including pPTPTTI and PC71BM (1:2) showed a V_oc of 0.83 V, a J_sc of 4.41 mA/cm², and an FF of 0.31, giving a PCE of 1.13%. Among these polymers, pPTPTTI showed the best efficiency. Their optical properties were measured, and the results show that pyrimidine-based polymers, especially pPTPTTI, hold great promise as the donor of the active layer.
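The reported efficiencies are consistent with the standard relation PCE = V_oc x J_sc x FF / P_in, assuming AM1.5G illumination with P_in = 100 mW/cm² (an assumption, as the abstract does not state the light intensity):

```python
def pce_percent(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
    """PCE (%) = Voc * Jsc * FF / Pin, with Jsc in mA/cm^2 and Pin in mW/cm^2."""
    return 100.0 * voc_v * jsc_ma_cm2 * ff / pin_mw_cm2

# (Voc [V], Jsc [mA/cm^2], FF) as reported in the abstract.
devices = {
    "pPTPBDT-12":  (0.67, 1.33, 0.25),
    "pPTPBDT-EH":  (0.72, 2.56, 0.30),
    "pPTPBDTT-EH": (0.72, 3.61, 0.29),
    "pPTPTTI":     (0.83, 4.41, 0.31),
}
for name, (voc, jsc, ff) in devices.items():
    print(f"{name}: {pce_percent(voc, jsc, ff):.2f} %")
# -> 0.22, 0.55, 0.75, 1.13 %, matching the reported 0.23/0.56/0.74/1.13 to within rounding
```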

Keywords: polymer solar cells, pyrimidine-based polymers, photovoltaics, conjugated polymer

Procedia PDF Downloads 196
8909 Is There Anything Useful in That? High Value Product Extraction from Artemisia annua L. in the Spent Leaf and Waste Streams

Authors: Anike Akinrinlade

Abstract:

The world population is estimated to grow from 7.1 billion to 9.22 billion by 2075, an increase of roughly 30% over the current global population. Much of the demographic change up to 2075 will take place in the less developed regions. There are currently 54 countries defined as having ‘low-middle income’ economies, and they need new ways to generate valuable products from the resources currently available. Artemisia annua L. is widely used for the extraction of the phytochemical artemisinin, which accounts for around 0.01 to 1.4% of the dry weight of the plant. Artemisinin is used in the treatment of malaria, a disease rampant in sub-Saharan Africa and in many other countries. Once artemisinin has been extracted, the spent leaf and waste streams are disposed of as waste. A feasibility study was carried out on increasing the biomass value of A. annua by designing a biorefinery in which spent leaf and waste streams are utilized for high-value product generation. Quercetin, ferulic acid, dihydroartemisinic acid, artemisinic acid, and artemisinin were screened for in the waste stream samples and the spent leaf. The analytical results showed that artemisinin, artemisinic acid, and dihydroartemisinic acid were present in the waste extracts, as well as camphor and arteannuin B. Ongoing efforts are looking at using more industrially relevant solvents to extract the phytochemicals from the waste fractions and investigating how microwave pyrolysis of spent leaf can be utilized to generate bio-products.

Keywords: high value product generation, bioinformatics, biomedicine, waste streams, spent leaf

Procedia PDF Downloads 342
8909 Decommissioning of Nuclear Power Plants: The Current Position and Requirements

Authors: A. Stifi, S. Gentes

Abstract:

Undoubtedly, from a construction perspective, the use of explosives can remove a large facility such as a 40-storey building, which took almost 3 to 4 years to construct, in a few minutes. Usually, deconstruction or decommissioning, the last phase of the life cycle of any facility, is considered to be the shortest. However, this proves to be wrong in the case of nuclear power plants. Statistics say that in the last 30 years, the construction of a nuclear power plant took an average of 6 years, whereas it is estimated that decommissioning such plants may take a decade or more. This paper is about the decommissioning phase of a nuclear power plant, which needs more attention and encouragement from research institutes as well as the nuclear industry. Currently, there are 437 nuclear power reactors in operation and 70 reactors under construction. Around 139 nuclear facilities have already been shut down and are in different decommissioning stages, and approximately 347 nuclear reactors will enter the decommissioning phase in the next 20 years (assuming an operation time of 40 years per reactor). This fact raises the following two questions: (1) How far are the nuclear and construction industries ready to face the challenges of decommissioning projects? (2) What is required for safe and reliable decommissioning project delivery? Decommissioning of nuclear facilities across the globe suffers severe time and budget overruns. The decommissioning processes are largely executed by manual labour, under regulations that are subject to change. In terms of research and development, some research projects and activities are being carried out in this area, but much more seems to be required.
The near future of decommissioning can be improved through a sustainable development strategy in which all stakeholders agree to implement innovative technologies, especially for dismantling and decontamination processes, and to deliver reliable and safe decommissioning. The scope for technology transfer from other industries should be explored; for example, remotely operated robotic technologies used in the automobile and production industries to reduce time and improve efficiency and safety could be tried here. However, while innovative technologies are highly requested, they alone are not enough; the implementation of creative and innovative management methodologies should also be investigated and applied. Lean Management, with its main concept of 'elimination of waste within a process', is a suitable example. Thus, cooperation between international organisations and related industries, together with knowledge-sharing, may serve as a key factor for successful decommissioning projects.

Keywords: decommissioning of nuclear facilities, innovative technology, innovative management, sustainable development

Procedia PDF Downloads 465
8908 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing

Authors: Ahmed E. Hodaib, Muhammed A. Hashem

Abstract:

High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate friction with the ground and minimize the aerodynamic drag acting on the vehicle. Because of the complexity and high power requirements of electromagnetic levitation, we instead make use of the air in front of the capsule, which produces the majority of the drag: it is compressed in two phases, and a proportion of it is injected through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is sucked into the compressor, where it is diffused to decrease the Mach number (to around 0.8) to be suitable for the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed from the two compressors is used to maintain a constant air pressure in an air tank, which supplies air for levitation. Dividing the total mass flow rate increases the achievable speed (Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are used to find the optimum values of all the design variables for given design inputs.
Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are considered. The variations of these variables are studied as the capsule velocity and air pressure change.
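For the diffusion step down to the Mach 0.8 compressor-inlet condition, the required area change can be sized with the isentropic area-Mach relation. The sketch below assumes isentropic flow of air (gamma = 1.4) and a hypothetical upstream Mach number of 0.95, which the abstract does not give:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def area_ratio(mach):
    """Isentropic A/A* as a function of Mach number."""
    g = GAMMA
    term = (2.0 / (g + 1.0)) * (1.0 + (g - 1.0) / 2.0 * mach ** 2)
    return (1.0 / mach) * term ** ((g + 1.0) / (2.0 * (g - 1.0)))

# Area increase needed to diffuse from a hypothetical M = 0.95 down to the
# compressor-inlet target of M = 0.8 quoted in the abstract.
growth = area_ratio(0.8) / area_ratio(0.95)
print(round(growth, 3))  # about 1.036, i.e. roughly a 3.6% area increase
```

The same relation, applied to the bypass fraction, is what sets the Kantrowitz-limited speed for a given capsule blockage.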

Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing

Procedia PDF Downloads 325
8907 Oscillating Water Column Wave Energy Converter with Deep Water Reactance

Authors: William C. Alexander

Abstract:

The oscillating water column (OWC) wave energy converter (WEC) with deep water reactance (DWR) consists of a large hollow sphere filled with seawater at the base, referred to as the ‘stabilizer’; a hollow cylinder at the top of the device, open to the sea at the bottom and sealed at the top save for an orifice which leads to an air turbine; and a long, narrow rod connecting the stabilizer with the cylinder. A small amount of ballast at the bottom of the stabilizer and a small amount of flotation in the cylinder keep the device upright in the sea. The flotation is set such that the mean water level is nominally halfway up the cylinder. The entire device is loosely moored to the seabed to keep it from drifting away. In the presence of ocean waves, seawater moves up and down within the cylinder, producing the ‘oscillating water column’. This causes the air pressure within the cylinder to alternate between positive and negative gauge pressure, which in turn causes air to alternately leave and enter the cylinder through the orifice in the top cover. An air turbine situated within or immediately adjacent to the orifice converts the oscillating airflow into electric power for transport to shore or elsewhere by electric power cable. The oscillating air pressure produces large up and down forces on the cylinder. These large forces are opposed, through the rod, by the large mass of water retained within the stabilizer, which is located deep enough to be mostly free of any wave influence and which provides the deep water reactance. The cylinder and stabilizer form a spring-mass system with a vertical (heave) resonant frequency. The diameter of the cylinder largely determines the power rating of the device, while the size (and water mass) of the stabilizer determines the resonant frequency.
Said frequency is chosen to be on the lower end of the wave frequency spectrum to maximize the average power output of the device over a large span of time (such as a year). The upper portion of the device (the cylinder) moves laterally (surge) with the waves. This motion is accommodated with minimal loading on the said rod by having the stabilizer shaped like a sphere, allowing the entire device to rotate about the center of the stabilizer without rotating the seawater within the stabilizer. A full-scale device of this type may have the following dimensions. The cylinder may be 16 meters in diameter and 30 meters high, the stabilizer 25 meters in diameter, and the rod 55 meters long. Simulations predict that this will produce 1,400 kW in waves of 3.5-meter height and 12 second period, with a relatively flat power curve between 5 and 16 second wave periods, as will be suitable for an open-ocean location. This is nominally 10 times higher power than similar-sized WEC spar buoys as reported in the literature, and the device is projected to have only 5% of the mass per unit power of other OWC converters.
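The quoted dimensions are consistent with a simple spring-mass estimate of the heave resonance: hydrostatic stiffness of the cylinder waterplane acting against the water mass retained in the stabilizer. A rough sketch (structural mass, added mass, and damping are neglected, which are simplifying assumptions):

```python
import math

RHO, G = 1025.0, 9.81  # seawater density (kg/m^3) and gravity (m/s^2)

def heave_period(cyl_diam_m, stab_diam_m):
    """Natural heave period of the cylinder-stabilizer spring-mass system.
    Stiffness: hydrostatic restoring of the cylinder waterplane, k = rho*g*Aw.
    Mass: seawater retained in the spherical stabilizer (structure mass ignored)."""
    a_w = math.pi * (cyl_diam_m / 2.0) ** 2                 # waterplane area
    k = RHO * G * a_w                                       # N/m
    m = RHO * (4.0 / 3.0) * math.pi * (stab_diam_m / 2.0) ** 3  # kg of entrained water
    return 2.0 * math.pi * math.sqrt(m / k)

# Full-scale dimensions from the abstract: 16 m cylinder, 25 m stabilizer.
print(round(heave_period(16.0, 25.0), 1))  # ~12.8 s, near the 12 s design wave period
```

This shows how the stabilizer size, not the cylinder, sets the resonant period at the low end of the wave spectrum.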

Keywords: oscillating water column, wave energy converter, spar buoy, stabilizer

Procedia PDF Downloads 102
8906 Numerical Resolving of Net Faradaic Current in Fast-Scan Cyclic Voltammetry Considering Induced Charging Currents

Authors: Gabriel Wosiak, Dyovani Coelho, Evaldo B. Carneiro-Neto, Ernesto C. Pereira, Mauro C. Lopes

Abstract:

In this work, the theoretical and experimental effects of induced charging currents on fast-scan cyclic voltammetry (FSCV) are investigated. Induced charging currents arise from the effect of ohmic drop in electrochemical systems, which depends on the presence of an uncompensated resistance. They cause the capacitive contribution to the total current to differ from the capacitive current measured in the absence of electroactive species. The paper shows that the induced charging current is relevant when the capacitive current magnitude is close to the total current, even for systems with a low time constant. In these situations, the conventional background-subtraction method may be inaccurate. A method is developed that separates the faradaic and capacitive currents by combining voltammetric experimental data with finite-element simulation to obtain a potential-dependent capacitance. The method was tested in a standard electrochemical cell with platinum ultramicroelectrodes under different experimental conditions, as well as on previously reported data from the literature. The proposed method allows the real capacitive current to be separated even in situations where the conventional background-subtraction method is clearly inappropriate.
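The separation idea can be illustrated with a toy model: once a potential-dependent capacitance C(E) is in hand, the charging current i_c = C(E) * scan rate is subtracted from the total. The linear C(E) and Gaussian faradaic wave below are invented for illustration, not the paper's finite-element result:

```python
import numpy as np

scan_rate = 100.0                  # V/s, fast-scan regime
E = np.linspace(-0.4, 0.6, 500)    # potential sweep (one anodic scan)

def cap_current(E, scan_rate):
    """Charging current i_c = C(E) * dE/dt with a hypothetical linear C(E)."""
    C = 2e-6 * (1.0 + 0.3 * E)     # potential-dependent capacitance (F), assumed form
    return C * scan_rate

def faradaic_peak(E, e0=0.2, i_peak=3e-4, width=0.05):
    """Toy Gaussian faradaic wave centered at a formal potential e0 (V)."""
    return i_peak * np.exp(-((E - e0) / width) ** 2)

# Synthetic total current: faradaic wave riding on a comparable charging current.
i_total = faradaic_peak(E) + cap_current(E, scan_rate)
i_far = i_total - cap_current(E, scan_rate)   # subtract the modeled charging current
print(round(E[i_far.argmax()], 2))            # recovered peak potential, ~0.20 V
```

In the real method, C(E) comes from the combined experiment/simulation fit rather than a blank scan, which is the point of the paper.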

Keywords: capacitive current, fast-scan cyclic voltammetry, finite-element method, electroanalysis

Procedia PDF Downloads 68
8905 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity (F-V) profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions of the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of an instrumented novel leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg.
Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring F and V-related variables across a range of loads, including velocities closer to V₀ when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for SFV and F₀ respectively, with reliability for V₀ being good using a linear model but poor using a 2nd order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities due to only a 5% difference between F₀ and obtained IsoMax values with a linear model being best suited to predict V₀.
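The linear profiling model discussed above can be sketched as a regression: fit F = F₀ − a·v, then V₀ = F₀/a and P_max = F₀·V₀/4. The data below are synthetic illustrative values, not the study's measurements:

```python
import numpy as np

def fv_profile(force, velocity):
    """Fit the linear F-V relationship F = F0 - a*v; derive V0 and Pmax."""
    a, f0 = np.polyfit(velocity, force, 1)  # slope (negative) and intercept
    slope = -a
    v0 = f0 / slope                          # velocity-axis intercept
    p_max = f0 * v0 / 4.0                    # apex of the parabolic P-V curve
    return f0, v0, slope, p_max

# Synthetic mean velocities across loads spanning roughly 10-90% 1RM.
v = np.array([2.0, 1.6, 1.2, 0.8, 0.4])     # m/s
f = 2000.0 - 900.0 * v                       # N, an exactly linear profile
f0, v0, s, pmax = fv_profile(f, v)
print(round(float(f0)), round(float(v0), 2), round(float(pmax)))  # 2000 2.22 1111
```

Extrapolation error in F₀ and V₀ shrinks as the measured loads approach the two intercepts, which is the study's rationale for testing down to ≤ 10% 1RM.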

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 54
8904 Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: Study of Regeneration Heat Duty

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

High-pressure carbon dioxide (CO₂) absorption from a specific off-gas in a conventional column was evaluated, in view of environmental concerns, with the Aspen HYSYS simulator, using a wide range of single absorbents and piperazine (PZ) blended solutions to estimate the outlet CO₂ concentration, CO₂ loading, reboiler power supply, and regeneration heat duty, in order to choose the most efficient solution in terms of CO₂ removal and required heat duty. The property package, which is compatible with all solutions applied in this simulation, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for vapor-phase and liquid-hydrocarbon-phase properties. The simulation results indicate that piperazine among the single absorbents, and the piperazine-monoethanolamine (MEA) mixture among the blends, demand the highest regeneration heat duty. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were considered and compared to reduce the cost of the process, among which the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate in terms of CO₂ content in the outlet gas, rich-CO₂ loading, and regeneration heat duty.

Keywords: absorption, amine solutions, aspen HYSYS, CO₂ loading, piperazine, regeneration heat duty

Procedia PDF Downloads 181
8903 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of so many sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm which combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that jointly minimizes the number of sensors and the error in conductor temperature, determining the optimal sensor placement simultaneously. Data on 345 kV transmission lines from the Taiwan Power Company and hourly weather data from the Central Weather Bureau (CWB) are used by the proposed method. The simulated results indicate that the number of sensors can be reduced using the optimal placement method proposed by the study while achieving an acceptable error in conductor temperature. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
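The PSO component of such a hybrid can be sketched generically. The code below runs a standard PSO on a stand-in quadratic objective (a surrogate for the conductor-temperature reconstruction error of a candidate placement); it is not the paper's POD-coupled multi-objective formulation, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def temperature_error(x):
    # Stand-in objective: a quadratic surrogate for the temperature
    # reconstruction error of candidate sensor coordinates x.
    return np.sum((x - 3.0) ** 2, axis=-1)

def pso(objective, dim=4, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 10.0)):
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso(temperature_error)
```

In the paper's setting the position vector would encode candidate sensor locations along the line, and the objective would weigh sensor count against the POD-reconstructed temperature error.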

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 426
8902 Evaluation of Microbiological Quality and Safety of Two Types of Salads Prepared at Libyan Airline Catering Center in Tripoli

Authors: Elham A. Kwildi, Yahia S. Abugnah, Nuri S. Madi

Abstract:

This study was designed to evaluate the microbiological quality and safety of two types of salads prepared at a catering center affiliated with Libyan Airlines in Tripoli, Libya. Two hundred and twenty-one (221) samples (132 economy-class and 89 first-class) were used in this project, which lasted for ten months. Biweekly microbiological tests were performed, which included total plate count (TPC) and total coliforms (TCF), in addition to enumeration and/or detection of some pathogenic bacteria, mainly Escherichia coli, Staphylococcus aureus, Bacillus cereus, Salmonella sp., Listeria sp. and Vibrio parahaemolyticus, using conventional as well as compact dry methods. Results indicated that the TPC of type 1 salad ranged between (<10 to 62 × 10³ cfu/g) and (<10 to 36 × 10³ cfu/g), while the TCF were (<10 to 41 × 10³ cfu/g) and (<10 to 66 × 10² cfu/g), using the two methods of detection respectively. On the other hand, the TPC of type 2 salad were (1 × 10 to 52 × 10³) and (<10 to 55 × 10³ cfu/g), and in the range of (1 × 10 to 45 × 10³ cfu/g), while the TCF counts were between (<10 to 55 × 10³ cfu/g) and (<10 to 34 × 10³ cfu/g), using the first and the second methods of detection respectively. Also, the pathogens mentioned above were detected in both types of salads, but their levels varied according to the type of salad and the method of detection. The level of Staphylococcus aureus, for instance, was 17.4% using the conventional method versus 14.4% using the compact dry method. Similarly, E. coli was 7.6% and 9.8%, while Salmonella sp. recorded the lowest percentage, i.e. 3% and 3.8%, with the two mentioned methods respectively. First-class salads were also found to contain the same pathogens, but the level of E. coli was relatively higher in this case (14.6% and 16.9%) using the conventional and compact dry methods respectively. Staphylococcus aureus ranked second (13.5% and 11.2%), followed by Salmonella sp. (6.74% and 6.70%). The lowest percentage was for Vibrio parahaemolyticus (4.9%), which was detected in the first-class salads only. The other two pathogens, Bacillus cereus and Listeria sp., were not detected in either of the salads. Finally, it is worth mentioning that there was a significant decline in TPC and TCF counts, in addition to the disappearance of pathogenic bacteria, after the sixth to seventh month of the study, which coincided with the first trial of the HACCP system at the center. The fluctuations in the counts during the early stages of the study reveal a need for corrective measures, including more emphasis on training personnel to apply the HACCP system effectively.

Keywords: air travel, vegetable salads, foodborne outbreaks, Libya

Procedia PDF Downloads 317
8901 Informal Carers in Telemonitoring of Users with Pacemakers: Characteristics, Time of Services Provided and Costs

Authors: Antonio Lopez-Villegas, Rafael Bautista-Mesa, Emilio Robles-Musso, Daniel Catalan-Matamoros, Cesar Leal-Costa

Abstract:

Objectives: The purpose of this trial was to evaluate the burden borne by, and the costs to, informal caregivers of users with telemonitoring of pacemakers. Methods: This is a controlled, non-randomised clinical trial, with data collected from informal caregivers five years after implantation of pacemakers. The Spanish version of the Survey on Disabilities, Personal Autonomy, and Dependency Situations was used to collect information on clinical and social characteristics, levels of professionalism, duration and types of care, difficulties in providing care, health status, economic and job aspects, and the impact on family or leisure of informal caregiving for patients with pacemakers. Results: After five years of follow-up, 55 users with pacemakers finished the study. Of these, 50 were helped by a caregiver: 18 were included in the telemonitoring group (TM) and 32 in the conventional follow-up group (HM). Overall, females represented 96.0% of the informal caregivers (88.89% in the TM and 100.0% in the HM group). The mean ages were 63.17 ± 15.92 and 63.13 ± 14.56 years, respectively (p = 0.83). The majority (88.0%) of the caregivers declared that they had to provide their services between 6 and 7 days per week (83.33% in the TM group versus 90.63% in the HM group), without significant differences between the groups. The costs related to care provided by the informal caregivers were 47.04% higher in the conventional follow-up group than in the TM group. Conclusions: The results of this trial confirm that there were no significant differences between the informal caregivers in the two follow-up groups with regard to baseline characteristics, workload, and time worked. The costs incurred by informal caregivers providing care for users with pacemakers in the telemonitoring group are significantly lower than those in the conventional follow-up group. Trial registration: ClinicalTrials.gov NCT02234245.
Funding: The PONIENTE study has been funded by the General Secretariat for Research, Development and Innovation, Regional Government of Andalusia (Spain), project reference number PI/0256/2017, under the research call 'Development and Innovation Projects in the Field of Biomedicine and Health Sciences', 2017.

Keywords: costs, disease burden, informal caregiving, pacemaker follow-up, remote monitoring, telemedicine

Procedia PDF Downloads 136
8900 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on, and driven by, the prediction of wind speed, which by its nature is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark, representing the wind speed across the study area between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of incorporating the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processed signals in wind speed forecasting models.
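The GPR step can be illustrated with a deliberately simplified, single-fidelity sketch using scikit-learn rather than the deep multi-fidelity model of the paper. The wind-speed series below is synthetic (a daily cycle plus noise standing in for a denoised signal), since the RUNE data are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic hourly wind-speed series (m/s): a 24-hour cycle plus noise.
t = np.arange(480.0)
speed = 8.0 + 2.0 * np.sin(2 * np.pi * t / 24.0) + rng.normal(0.0, 0.3, t.size)

# Train on the first 400 hours, extrapolate the following 80.
X_train, y_train = t[:400, None], speed[:400]
X_test, y_test = t[400:, None], speed[400:]

# Periodic kernel with the daily period held fixed, plus a noise term.
kernel = 1.0 * ExpSineSquared(length_scale=1.0, periodicity=24.0,
                              periodicity_bounds="fixed") \
         + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

y_pred, y_std = gpr.predict(X_test, return_std=True)
rmse = float(np.sqrt(np.mean((y_pred - y_test) ** 2)))
```

In the multi-fidelity setting of the paper, additional input layers (e.g. the u and v wind components) would be fused as extra predictors or fidelity levels rather than the single time axis used here.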

Keywords: data fusion, Gaussian process regression, signal denoise, temporal extrapolation

Procedia PDF Downloads 132
8899 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has already established A. aegypti as the primary vector, but A. albopictus is now exacerbating the situation through gradual adaptation to human surroundings. Global destabilization and a gradual climatic shift, with rising temperatures, have significantly expanded the geographic range of these species. These versatile vectors also host the Chikungunya, Zika, and yellow fever viruses. The biggest challenge now faced by endemic countries is the upsurge in reported co-infections with multiple serotypes and virus co-circulation. To foster vector control interventions and mitigate disease burden, there is a pressing need for knowledge on vector susceptibility and viral tolerance in response to multiple infections. To improve our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. Artificial feeding revealed a significant difference in feeding rate between the two species: A. albopictus was a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Robust sequential screening of viral antigen in mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR. To observe viral dissemination in different mosquito tissues, an indirect immunofluorescence assay was performed. Results showed that both vectors were initially infected with all dengue (1-4) serotypes and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). In the case of DENV-2, there was a significant difference in the peak titre observed at the 16th day post infection. However, when exposed to dual infections, A. aegypti supported all combinations of the virus, whereas A. albopictus sustained only single infections in successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to the control (P (ANOVA) < 0.001). In the case of dengue-2-infected mosquitoes, fecundity in the parent generation was significantly higher (P (Bonferroni) < 0.001) for A. albopictus compared to A. aegypti, but there was a complete loss of fecundity from the second to the third generation for A. albopictus. It was observed that A. aegypti frequently became infected with multiple serotypes even at low viral titres compared to A. albopictus. Possible reasons for this could be the presence of Wolbachia infection in A. albopictus, the mosquito innate immune response, small RNA interference, etc. Based on these observations, it could be anticipated that transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 199
8898 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback

Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland

Abstract:

In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector that covers the surgeon's hand/wrist degrees of freedom (DOF) and translates the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector, a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure comprises three lightweight links/joints, pitch, yaw, and roll, each consisting of low-friction supports and a 2-channel accurate optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the tool's left prong is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a Hall-effect sensor and a magnet bar installed face-to-face on the inner sides of the two prongs to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid the high forces that are known to cause bleeding/injury.

Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation

Procedia PDF Downloads 113
8897 Energy Management System and Interactive Functions of Smart Plug for Smart Home

Authors: Win Thandar Soe, Innocent Mpawenimana, Mathieu Di Fazio, Cécile Belleudy, Aung Ze Ya

Abstract:

Intelligent electronic equipment and automation networks form the brain of high-tech energy management systems and play a critical role in the spread of smart homes. A smart home integrates technology for greater comfort, autonomy, reduced cost, and energy saving. These services can be provided to home owners for managing their home appliances locally or remotely, and consequently allow them to automate their consumption intelligently and responsibly through individual or collective control systems. In this study, three smart plugs are described, and one of them is tested on typical household appliances. This article proposes to collect the data over wireless technology and to extract smart data for an energy management system. This smart data quantifies three kinds of load: intermittent load, phantom load, and continuous load. Phantom load is wasted power, drawn unnoticed by each appliance whether it is in use or not while connected to the mains. Intermittent load and continuous load take into consideration the power and usage time of home appliances. By analysing this classification of loads, the smart data can be used to reduce the communication load of the wireless sensor network for the energy management system.
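The three-way load classification described above can be sketched as a simple heuristic over a sampled power trace. The thresholds and example traces below are illustrative assumptions, not values from the study:

```python
import numpy as np

def classify_load(power_w, on_threshold=5.0):
    """Classify a power trace (W, fixed sampling interval) into the three
    load types described above. A heuristic sketch: the 5 W 'on' threshold
    and 90% duty-cycle cut-off are assumed values."""
    power_w = np.asarray(power_w, dtype=float)
    on = power_w > on_threshold
    duty_cycle = on.mean()
    if duty_cycle == 0.0 and power_w.mean() > 0.0:
        return "phantom"      # small standby draw, never truly 'on'
    if duty_cycle > 0.9:
        return "continuous"   # e.g. refrigerator, router
    return "intermittent"     # e.g. kettle, washing machine

# Illustrative traces, one sample per minute over an hour:
tv_standby = np.full(60, 2.5)                       # 2.5 W standby draw
fridge = np.full(60, 120.0)                         # always on
kettle = np.r_[np.zeros(55), np.full(5, 2000.0)]    # 5-minute boil
```

A real smart-plug pipeline would classify from longer traces and could then transmit only the class label and summary statistics, which is the communication saving the abstract alludes to.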

Keywords: energy management, load profile, smart plug, wireless sensor network

Procedia PDF Downloads 267
8896 Language and Power Relations in Selected Political Crisis Speeches in Nigeria: A Critical Discourse Analysis

Authors: Isaiah Ifeanyichukwu Agbo

Abstract:

Human speech is capable of serving many purposes. Power and control are not always exercised overtly by linguistic acts, but may be enacted and exercised in the myriad taken-for-granted actions of everyday life. Domination, power control, discrimination and mind control exist in human speech and may lead to asymmetrical power relations. In discourse, there are persuasive and manipulative linguistic acts that serve to establish solidarity and identification with the 'we group' and polarize against the 'they group'. Political discourse is crafted to defend and promote problematic narratives of outright controversial events in a nation's history, thereby sustaining domination, marginalization, manipulation, inequalities and injustices, often without the dominated and marginalized groups being aware of them. Such discourses are designed and positioned to serve the political and social needs of their producers. Political crisis speeches in Nigeria, just as in other countries, concentrate on positive self-image, de-legitimization of political opponents, reframing accusations to one's advantage, redefining problematic terms and adopting reversal strategies. In most cases, the people are ignorant of the hidden ideological positions encoded in the text. Little research has been conducted adopting the frameworks of critical discourse analysis and systemic functional linguistics to investigate this situation in political crisis speeches in Nigeria. In this paper, we focus attention on the analysis of the linguistic, semantic, and ideological elements in selected political crisis speeches in Nigeria to investigate whether they create and sustain unequal power relations and manipulative tendencies, from the perspectives of Critical Discourse Analysis (CDA) and Systemic Functional Linguistics (SFL). Critical Discourse Analysis unpacks both opaque and transparent structural relationships of power dominance, power relations and control as manifested in language.
Critical discourse analysis emerged from a critical theory of language study which sees the use of language as a form of social practice in which social relations are reproduced or contested and different interests are served. Systemic functional linguistics relates the structure of texts to their function. Fairclough's model of CDA and Halliday's systemic functional approach to language study are adopted in this paper. This paper probes into language use that perpetuates inequalities. This study demystifies the hidden implicature of the selected political crisis speeches and reveals the existence of information that is not made explicit in what the political actors actually say. The analysis further reveals the ideological configurations present in the texts. These ideological standpoints are the basis for naturalizing implicit ideologies and hegemonic influence in the texts. The analyses of the texts further uncovered the linguistic and discursive strategies deployed by text producers to manipulate unsuspecting members of the public, both mentally and conceptually, in order to enact, sustain and maintain unhealthy power relations at times of crisis in Nigerian political history.

Keywords: critical discourse analysis, language, political crisis, power relations, systemic functional linguistics

Procedia PDF Downloads 337
8895 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it would significantly increase the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the estimates of the treatment effects, as the IPD tends to overestimate the treatment effects, while the AD has the tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
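The two efficiency measures used above have standard definitions that can be sketched directly; the simulated estimates below are hypothetical stand-ins for the IPD-only, AD-only and mixed-data strategies, chosen only to reproduce the over/underestimation pattern the abstract reports:

```python
import numpy as np

def percentage_relative_bias(estimates, true_value):
    # PRB = 100 * (mean(estimate) - true) / true
    return 100.0 * (np.mean(estimates) - true_value) / true_value

def rmse(estimates, true_value):
    # Root mean square error of the estimates around the true value.
    return float(np.sqrt(np.mean((np.asarray(estimates) - true_value) ** 2)))

rng = np.random.default_rng(2)
true_effect = 0.5

# Simulated overall estimates from three hypothetical pooling strategies:
ipd_only = true_effect + rng.normal(0.02, 0.05, 1000)   # slight overestimate
ad_only = true_effect + rng.normal(-0.03, 0.08, 1000)   # underestimate, noisier
mixed = true_effect + rng.normal(0.0, 0.04, 1000)       # combined IPD + AD
```

With these assumed error distributions, the mixed-data strategy shows both a PRB closer to zero and a smaller RMSE, mirroring the moderating effect described in the abstract.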

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 370
8894 Effect of Heavy Metals on the Life History Trait of Heterocephalobellus sp. and Cephalobus sp. (Nematode: Cephalobidae) Collected from a Small-Scale Mining Site, Davao de Oro, Philippines

Authors: Alissa Jane S. Mondejar, Florifern C. Paglinawan, Nanette Hope N. Sumaya, Joey Genevieve T. Martinez, Mylah Villacorte-Tabelin

Abstract:

Mining is associated with increased heavy metals in the environment, and heavy metal contamination disrupts the activities of soil fauna, such as nematodes, causing changes in the function of the soil ecosystem. Previous studies found that nematode community composition and diversity indices were strongly affected by heavy metals (e.g., Pb, Cu, and Zn). In this study, the influence of heavy metals on nematode survivability and reproduction was investigated. Life history analysis of the free-living nematodes Heterocephalobellus sp. and Cephalobus sp. (Rhabditida: Cephalobidae) was carried out using the hanging drop technique, a technique often used in life history trait experiments. The nematodes were exposed to different temperatures, i.e., 20°C, 25°C, and 30°C, in different groups (control and heavy-metal exposed) and fed with the same bacterial density of 1 × 10⁹ Escherichia coli cells ml⁻¹ for 30 days. Results showed that increasing temperature and exposure to heavy metals had a significant influence on the survivability and egg production of both species. Heterocephalobellus sp. and Cephalobus sp., when exposed to 20°C, survived longer and produced few eggs, without subsequent hatching. The life history parameters of Heterocephalobellus sp. showed higher values in the control group for the net reproduction rate (R₀), fecundity (mₓ), which also equals the total fertility rate (TFR), the generation times (G₀, G₁, and Gₕ) and the population doubling time (PDT). However, a lower intrinsic rate of natural increase (rₘ) was observed, since the generation times were longer. Meanwhile, the life history parameters of Cephalobus sp. showed that the net reproduction rate (R₀) was higher in the exposed group, while fecundity (mₓ), which also equals the TFR, and G₀, G₁, Gₕ, and PDT were higher in the control group. Again, a lower rate of natural increase (rₘ) was observed, since the generation times were longer. In conclusion, temperature and exposure to heavy metals had a negative influence on the life history of the nematodes; however, further experiments should be considered.
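The life-table parameters named above follow standard demographic formulas, which can be sketched as follows; the daily survivorship and fecundity schedule is hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical daily life table for a nematode cohort:
# l_x = proportion surviving to day x, m_x = eggs per female on day x.
lx = np.array([1.00, 0.95, 0.90, 0.80, 0.60, 0.30, 0.05])
mx = np.array([0.0,  0.0,  5.0,  8.0,  6.0,  2.0,  0.0])
x = np.arange(lx.size, dtype=float)

# Net reproduction rate: R0 = sum(l_x * m_x)
R0 = float(np.sum(lx * mx))

# Cohort generation time: Tc = sum(x * l_x * m_x) / R0
Tc = float(np.sum(x * lx * mx) / R0)

# Intrinsic rate of natural increase, approximated as rm = ln(R0) / Tc.
# A longer generation time lowers rm for the same R0, which is the
# pattern the abstract reports.
rm = float(np.log(R0) / Tc)

# Population doubling time: PDT = ln(2) / rm
PDT = float(np.log(2) / rm)
```

The exact rₘ would come from solving the Euler-Lotka equation, Σ e^(−rₘ·x) lₓ mₓ = 1; the ln(R₀)/T꜀ approximation used here is the common shortcut.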

Keywords: artisanal and small-scale gold mining (ASGM), hanging drop method, heavy metals, life history trait

Procedia PDF Downloads 87
8893 Application of Imperialist Competitive Algorithm for Optimal Location and Sizing of Static Compensator Considering Voltage Profile

Authors: Vahid Rashtchi, Ashkan Pirooz

Abstract:

This paper applies the Imperialist Competitive Algorithm (ICA) to find the optimal location and size of a Static Compensator (STATCOM) in power systems. The output of the algorithm is a two-dimensional array which indicates the best bus number and the STATCOM's optimal size that minimize all bus voltage deviations from their nominal value. Simulations are performed on the IEEE 5-, 14-, and 30-bus test systems. Comparisons have also been made between ICA and the well-known Particle Swarm Optimization (PSO) algorithm. Results show how this method can be considered one of the most precise evolutionary methods for optimal compensator placement in electrical grids.
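The fitness function the algorithm minimizes, the sum of bus voltage deviations from nominal, can be sketched directly; the per-unit voltage profiles below are hypothetical and would in practice come from a load-flow solution for each candidate (bus, size) pair:

```python
import numpy as np

def voltage_deviation(bus_voltages_pu, nominal_pu=1.0):
    """Sum of absolute per-unit voltage deviations across all buses,
    the quantity the placement algorithm is described as minimizing."""
    v = np.asarray(bus_voltages_pu, dtype=float)
    return float(np.sum(np.abs(v - nominal_pu)))

# Hypothetical post-compensation voltage profiles of a 5-bus system
# for two candidate STATCOM placements:
profile_a = [1.00, 0.98, 0.97, 1.01, 0.99]  # STATCOM at bus 3
profile_b = [1.00, 0.95, 0.92, 1.03, 0.97]  # STATCOM at bus 5

better = "A" if voltage_deviation(profile_a) < voltage_deviation(profile_b) else "B"
```

ICA (or PSO) would search the discrete bus index and continuous compensator size jointly, re-evaluating this deviation objective at each candidate solution.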

Keywords: evolutionary computation, imperialist competitive algorithm, power systems compensation, static compensators, voltage profile

Procedia PDF Downloads 597
8892 Effect of Corrosion on the Shear Buckling Strength

Authors: Myoung-Jin Lee, Sung-Jin Lee, Young-Kon Park, Jin-Wook Kim, Bo-Kyoung Kim, Song-Hun Chong, Sun-Ii Kim

Abstract:

The ability to resist shear arises mainly from the web panel of steel girders, and as such, the shear buckling strength of these girders has been extensively investigated. For example, Blaser reported that when buckling occurs, the tension field takes effect after the buckling strength of the steel is reached. The findings of these studies have been applied by AASHTO and AISC, and in the European code, which provide guidelines for designs aimed at preventing shear buckling. Steel girders are susceptible to corrosion resulting from exposure to natural elements such as rainfall, humidity, and temperature. This corrosion leads to a reduction in the section of the web panel, thereby resulting in a decrease in shear strength. The decrease in the panel section has a significant effect on the maintenance of the bridge. However, in most conventional designs, the influence of corrosion is overlooked during the calculation of the shear buckling strength, and hence over-design is common. Therefore, in this study, a steel girder with an A/D of 1:1 and a 6-mm-thick web panel, 16-mm-thick flange, and 12-mm-thick intermediate stiffeners was used. The total length was set to 3200 mm, that of the default model. The effect of corrosion on shear buckling was investigated by determining the volume of corrosion loss, the shape of the corrosion patterns, and the angular change in the tension field on the shear buckling strength. This study provides basic data that will enable designs incorporating values closer to the actual shear buckling strength than those used in most conventional designs.
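The sensitivity of shear buckling to web thickness loss can be illustrated with the classical elastic plate-buckling formula; this is a textbook baseline for a simply supported panel, not the study's corroded-panel model, and the panel dimensions are assumed values:

```python
import math

def elastic_shear_buckling_stress(t_mm, b_mm, a_mm, E=210e3, nu=0.3):
    """Classical elastic shear buckling stress (MPa) of a simply supported
    rectangular plate: tau_cr = ks * pi^2 * E / (12 (1 - nu^2)) * (t/b)^2,
    with a = panel length, b = panel depth, t = web thickness, E in MPa."""
    ratio = a_mm / b_mm
    ks = 5.34 + 4.0 / ratio**2 if ratio >= 1.0 else 4.0 + 5.34 / ratio**2
    return ks * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t_mm / b_mm)**2

# Uncorroded 6 mm web vs. a web uniformly thinned by corrosion to 5 mm,
# for a square panel (a/b = 1, matching the study's A/D of 1:1):
tau_6 = elastic_shear_buckling_stress(6.0, 1000.0, 1000.0)
tau_5 = elastic_shear_buckling_stress(5.0, 1000.0, 1000.0)
```

Because the critical stress scales with (t/b)², even uniform thinning from 6 mm to 5 mm cuts the elastic buckling stress by about 30%, which is why ignoring corrosion in the calculation leads to the over-design the abstract describes. Localized corrosion patterns, the study's actual subject, require numerical analysis rather than this closed form.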

Keywords: corrosion, shear buckling strength, steel girder, shear strength

Procedia PDF Downloads 370
8891 Genome-Wide Identification of Genes Resistance to Nitric Oxide in Vibrio parahaemolyticus

Authors: Yantao Li, Jun Zheng

Abstract:

Food poisoning caused by consumption of contaminated food, especially seafood, is one of the most serious public health threats worldwide. Vibrio parahaemolyticus is an emerging bacterial pathogen and the leading cause of human gastroenteritis associated with food poisoning, especially in the southern coastal region of China. To successfully cause disease in a host, bacterial pathogens need to overcome the host-derived stresses encountered during infection. One of the toxic chemical species elaborated by the host is nitric oxide (NO). NO is generated by acidified nitrite in the stomach and by the inducible NO synthase (iNOS) in host cells, and is toxic to bacteria. Bacterial pathogens have evolved mechanisms to counter this toxic stress. Such mechanisms include genes that sense NO produced by the immune system and activate others to detoxify it, and genes that repair the damage caused by the toxic reactive nitrogen species (RNS) generated during NO stress. However, little is known about NO resistance in V. parahaemolyticus. In this study, transposon mutagenesis coupled with next-generation sequencing (Tn-seq) will be utilized to identify genes for NO resistance in V. parahaemolyticus. Our strategy includes construction of a saturating transposon insertion library, challenging the library with NO, next-generation sequencing (NGS), bioinformatics analysis, and verification of the identified genes in vitro and in vivo.
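The core of the Tn-seq comparison step is a per-gene read-count ratio between the input library and the NO-challenged output library. The sketch below shows only that comparison; gene names and counts are hypothetical, and a real pipeline would also normalize for library size and insertion density:

```python
import math

def log2_fold_change(reads_input, reads_output, pseudocount=1.0):
    """Per-gene log2 ratio of insertion reads after vs. before NO challenge.
    Strongly negative values flag candidate NO-resistance genes, whose
    transposon mutants are depleted under NO stress."""
    return math.log2((reads_output + pseudocount) /
                     (reads_input + pseudocount))

# Hypothetical read counts: (input library, after NO exposure).
counts = {
    "hmpA_like": (850, 12),   # strongly depleted: putative NO detoxification
    "repair_X": (400, 60),    # moderately depleted: putative RNS damage repair
    "neutral_Y": (500, 480),  # unchanged: dispensable under NO stress
}

# Candidate NO-resistance genes: depleted more than 4-fold squared (log2 < -2).
hits = {g for g, (i, o) in counts.items() if log2_fold_change(i, o) < -2.0}
```

Hits from such a screen would then be the genes carried forward to the in vitro and in vivo verification the abstract describes.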

Keywords: vibrio parahaemolyticus, nitric oxide, tn-seq, virulence

Procedia PDF Downloads 263
8890 Behavior of Droplets in Microfluidic System with T-Junction

Authors: A. Guellati, F-M Lounis, N. Guemras, K. Daoud

Abstract:

Micro droplet formation is considered a growing area of research due to its wide-ranging applications in chemistry as well as biology. The mechanism of micro droplet formation using two immiscible liquids running through a T-junction has been widely studied. We believe that the flow of these two immiscible phases can be an important factor with an impact on the out-flow hydrodynamic behavior, the droplets generated, and the size of the droplets. In this study, the type of capillary tubing used also represents another important factor that can affect the generation of micro droplets. Tygon capillary tubing with a hydrophilic inner surface does not allow regular out-flows, because the continuous phase does not adhere to the wall of the capillary inner surface. Teflon capillary tubing presents better wettability than Tygon tubing and allows steady, regular out-flow regimes to be obtained, with micro droplets of homogeneous size. The size of the droplets depends directly on the flow rates of the continuous and dispersed phases. Thus, increasing the flow rate of the continuous phase while holding the dispersed phase constant decreases the droplet size. Conversely, increasing the flow rate of the dispersed phase while holding the continuous phase constant increases the droplet size.
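The flow-rate trends described above are consistent with the widely used squeezing-regime scaling for T-junctions (Garstecki et al.), which can be sketched as follows; the abstract does not state that this model was fitted, so it is offered only as an illustration, with an assumed O(1) geometry constant:

```python
def droplet_length_ratio(q_dispersed, q_continuous, alpha=1.0):
    """Squeezing-regime scaling for plug length at a T-junction:
    L/w = 1 + alpha * Qd/Qc, where w is the channel width and alpha
    is an order-one, geometry-dependent constant (assumed 1.0 here).
    Raising Qc shrinks the droplet; raising Qd enlarges it."""
    return 1.0 + alpha * q_dispersed / q_continuous

# Dispersed-phase flow held at 1 uL/min while the continuous flow varies:
small = droplet_length_ratio(1.0, 2.0)   # faster continuous phase
large = droplet_length_ratio(1.0, 0.5)   # slower continuous phase
```

The monotonic dependence on the flow-rate ratio Qd/Qc is exactly the pair of trends reported in the abstract, which is why droplet size in such devices is usually tuned through that ratio rather than either flow alone.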

Keywords: microfluidic system, micro droplets generation, t-junction, fluids engineering

Procedia PDF Downloads 338
8889 Multi-Dimensional Energy Resource Evaluation in Climate Change beyond the 21st Century

Authors: Hameed Alshammari

Abstract:

The decarbonisation of the energy sector beyond the 21st century is akin to establishing morally responsible mechanisms that can propagate sustainable livelihoods (Denina et al., 2021). It implies that Kuwait should undertake a re-evaluation of energy generation gaps so as to tap the potential to reduce overreliance on fossil fuels (Si et al., 2020) and align with global views on sustainable energy generation and consumption (Herrero, Pineda, Villar, & Zambrano, 2020). Without the economic pressure to switch to alternative sources of energy, Kuwait requires a multi-dimensional analysis of its energy policies and of sources of energy other than fossil fuels (Alsaad, 2021). Currently, Kuwait has an energy system that is highly skewed towards fossil fuels (Alsaad, 2021); hence, the reliance on burning fossil fuels forms part of the core elements of the generally inefficient energy systems that have negative consequences for global environmental and economic systems (Kang et al., 2020). This paper undertakes a detailed literature review of the factors needed for the development of a framework for multi-dimensional energy resource analysis in Kuwait. The framework aims to align current energy policies in Kuwait with the global decarbonisation drive and to promote sustainable energy strategies.

Keywords: decarbonisation, energy, fossil fuels, multi-dimensional analysis, sustainability

Procedia PDF Downloads 81