Search results for: cumulative absolute velocity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2519

419 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge they have learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participation in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pretest will be administered. The same test will be given as a posttest after the last topic is covered. To establish equality of the groups' initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in either of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test.
The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the pretest-scores ANOVA is not significant. If the score-differences ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: a significant correlation of IV to DV, a significant correlation of IV to MV, a significant relationship of MV to DV when both IV and MV are predictors in a regression model, and an absolute value of the IV coefficient that is larger when IV is the sole predictor than when both IV and MV are predictors. Given a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning, which should yield the best learning results.
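The four mediation requirements described above can be checked with ordinary least squares; the snippet below is a minimal sketch on synthetic data with made-up effect sizes, not the study's actual analysis, using |t| > 1.96 as a stand-in for significance at the 5% level.

```python
import numpy as np

def ols(y, xs):
    """OLS fit with intercept; returns coefficients and t-statistics."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

rng = np.random.default_rng(0)
n = 200
iv = rng.integers(0, 2, n).astype(float)        # blended learning (0 = control, 1 = treated)
mv = 2.0 * iv + rng.normal(0, 1, n)             # mediator (hypothetical effect size)
dv = 1.5 * mv + 0.5 * iv + rng.normal(0, 1, n)  # posttest-pretest score difference

b_total, t_total = ols(dv, [iv])      # requirement 1: IV related to DV
b_a, t_a = ols(mv, [iv])              # requirement 2: IV related to MV
b_both, t_both = ols(dv, [iv, mv])    # requirements 3-4: DV ~ IV + MV

mediated = (abs(t_total[1]) > 1.96 and abs(t_a[1]) > 1.96
            and abs(t_both[2]) > 1.96
            and abs(b_total[1]) > abs(b_both[1]))
print("mediating effect detected:", mediated)
```

With these synthetic coefficients the IV effect shrinks from roughly 3.5 (sole predictor) to roughly 0.5 once the mediator enters the model, so all four requirements hold.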

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 291
418 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption geared towards energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the area of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed on low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
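As a concrete illustration of the DTW matching used in the identification step, the classical dynamic-programming recurrence fits in a few lines; the appliance templates and the event window below are invented numbers, not LPG or REDD data.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D power traces,
    via the classical O(n*m) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])        # local mismatch in watts
            D[i, j] = cost + min(D[i - 1, j],      # stretch a
                                 D[i, j - 1],      # stretch b
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Match an observed event window against per-appliance templates:
# the template with the smallest warped distance wins.
templates = {"kettle": [0, 2000, 2000, 0], "fridge": [0, 120, 120, 120, 0]}
event = [0, 1950, 2010, 1980, 0]
best = min(templates, key=lambda k: dtw_distance(event, templates[k]))
print(best)  # kettle
```

Because the alignment may stretch either trace, signatures of different lengths can be compared directly, which is the property that makes DTW convenient for event windows of varying duration.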

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 82
417 Analyzing the Crisis of Liberal Democracy by Investigating Connections Between Deliberative Democratic Theory, Criticism of Neoliberalism and Contemporary Marxist Political Economy

Authors: Inka Maria Vilhelmiina Hiltunen

Abstract:

The crisis of liberal democracy has been recognized from many sites of political literature; scholars of Marxist critical political economy and deliberative democracy, as well as critics of neoliberalism, have become concerned about how the rise of populism and authoritarianism, institutional decline, or an overarching economic rationality erode democratic political citizenship in favor of economic technocracy or conservative protectionism. However, even if these bodies of literature recognize the generalized crisis that haunts Western democracies, dialogue between them has been very limited. Drawing from contemporary Marxist perspectives, this article therefore aims at bridging the gap between the criticism of neoliberalism and theories of deliberative democracy. The first section starts by outlining what is meant by neoliberalism, liberal democracy, and the crisis of liberal democracy. The next section explores how contemporary capitalism acts upon society and transforms it. It introduces Jürgen Habermas' thesis of the 'colonization of the lifeworld', Wendy Brown's analysis of neoliberal rationality, and Étienne Balibar's concepts of 'absolute capitalism' and 'total subsumption', which the essay aims at connecting in the last section. The third section is concerned with deliberative democratic theory and practice. The section highlights the qualitative socio-political impacts of deliberation, as predicted by theorists and shown by empirical studies. The last section draws from contemporary Marxist perspectives to examine the question of whether deliberative democratic theories and practices can resolve the crisis of liberal democracy in the current financially driven era of neoliberal capitalism. By asking this question, the essay aims to consider what is required to reverse the current global trend of rising inequality.
If liberal democracy has declined towards commodified and reactionary forms of politics, and if 'market rationality' has shaped social agency to the extent that politicians and the public struggle to imagine 'any alternatives', the most urgent political task is to bring to life a new political imagination based on the democratic ideals of equality, inclusivity, reciprocity, and solidarity, and thereby to enable the revision of the transnational institutional design. This part focuses on the hegemonic role of finance and money. The essay concludes by stating that the implementation of substantive global democracy must start from the dissolution of the hegemony of finance, centered on the U.S., and from the remaking of the conditions of socioeconomic reproduction worldwide. However, given the still-present overarching neoliberal status quo, the essay is skeptical of the ideological feasibility of this remaking.

Keywords: deliberative democracy, criticism of neoliberalism, marxist political economy, crisis of liberal democracy

Procedia PDF Downloads 113
416 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor depicts the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e. whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the subjects (3-5 year old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects is found to be correlated with the relaxation and attention/cognitive load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., the separation of variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD, and we compared our scale with conventional clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is supposed to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
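The absolute band powers referred to above (delta for relaxation, beta for attention/cognitive load) can be estimated from a simple periodogram; the sketch below is generic, and the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the device's actual configuration.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Absolute power in the band [f_lo, f_hi] Hz from a periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum() * (freqs[1] - freqs[0])  # integrate over the band

fs = 256                            # assumed sampling rate in Hz
t = np.arange(fs * 4) / fs          # 4 s of signal
eeg = np.sin(2 * np.pi * 20 * t)    # synthetic 20 Hz tone (beta range)
beta = band_power(eeg, fs, 13.0, 30.0)   # beta band, ~13-30 Hz
delta = band_power(eeg, fs, 0.5, 4.0)    # delta band, ~0.5-4 Hz
print(beta > delta)
```

In practice a windowed estimate (e.g. Welch averaging) would be preferred over the raw periodogram for noisy EEG, but the band-integration step is the same.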

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 99
415 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software

Authors: Kamila Widziewicz-Rzonca

Abstract:

The presence of PM-bound water as an integral chemical component of suspended aerosol particles (PM) has become one of the hottest issues in recent years. The UN climate summit on climate change (COP24) indicates that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle-water interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation, and precipitation intensity. Since water-soluble particles such as nitrates, sulfates, or sea salt easily become cloud condensation nuclei, they affect the climate, for example, by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and a medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work does not aim to review the existing literature on the topic; rather, it is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the topic of PM-bound water. To assess the major scientific areas related to PM-bound water and clearly define which among them are the most active, we checked the Web of Science databases from 1996 to 2018. We answer the question of which authors, countries, institutions, and aerosol journals influenced PM-bound water research to the greatest degree. The obtained results indicate that the paper with the greatest citation burst was Tang I. N. and Munkelwitz H. R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this specific field was published in Atmospheric Chemistry and Physics.
The absolute leader in the number of publications among all research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. Very few studies on PM-bound water quantitatively measure its presence in ambient particles or its origin; most articles rather point to PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the current and most recent literature regarding particle-bound water.

Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain

Procedia PDF Downloads 124
414 Dynamic Analysis of Functionally Graded Nano Composite Pipe with PZT Layers Subjected to Moving Load

Authors: Morteza Raminnia

Abstract:

In this study, the dynamic analysis of a functionally graded nano-composite pipe reinforced by single-walled carbon nanotubes (SWCNTs), with simply supported boundary conditions and subjected to moving mechanical loads, is investigated. The material properties of functionally graded carbon nanotube-reinforced composites (FG-CNTRCs) are assumed to be graded in the thickness direction and are estimated through a micro-mechanical model. In this paper, the polymeric matrix is considered an isotropic material, and for the CNTRC, a uniform distribution (UD) and three types of FG distribution patterns of the SWCNT reinforcement are considered. The system equation of motion is derived using Hamilton's principle under the assumptions of first-order shear deformation theory (FSDT). Thin piezoelectric layers embedded on the inner and outer surfaces of the FG-CNTRC layer act as a distributed sensor and actuator to control the dynamic characteristics of the FG-CNTRC laminated pipe. The modal analysis technique and Newmark's integration method are used to calculate the displacement and dynamic stress of the pipe subjected to moving loads. The effects of the various material distributions and of the velocity of the moving loads on the dynamic behavior of the pipe are presented. The present approach is validated by comparing the numerical results with published numerical results in the literature. The results show that the above-mentioned effects play a very important role in the dynamic behavior of the pipe, and the present work yields meaningful results of interest to the scientific and engineering community in the field of FGM nano-structures.
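Newmark's integration method mentioned above can be sketched for a single degree of freedom; the following is the standard average-acceleration variant (beta = 1/4, gamma = 1/2) applied to a toy oscillator, not the authors' full FG-CNTRC formulation, and all parameter values are illustrative.

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t), one DOF."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for i in range(n - 1):
        # Effective load from the previous state
        rhs = (f[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (1 / (2 * beta) - 1) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1) * v[i]
                      + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = rhs / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# Sanity check: undamped free vibration with natural period T = 1 s
# should return to u = 1 after one full period.
dt, steps = 0.001, 1001
u, _, _ = newmark_sdof(m=1.0, c=0.0, k=(2 * np.pi)**2,
                       f=np.zeros(steps), dt=dt, u0=1.0)
print(round(u[-1], 3))  # ~1.0
```

For a moving-load problem, f would be the modal force history produced by the load crossing the pipe, solved mode by mode after the modal decomposition.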

Keywords: nano-composite, functionally graded material, moving load, active control, PZT layers

Procedia PDF Downloads 420
413 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle

Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha

Abstract:

An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded hull boat is deployed, which is rugged, tough, quick to deploy and fast-moving. It is suitable for environmental monitoring and is designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of up to 7 km/h. The motor is integrated with an open source Cortex-M4F based controller for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One channel of a 2.4 GHz 8-channel radio link transmitter is used for toggling between the different modes of the UMSV. An on-board GPS system has been fitted to the electric outboard marine motor to determine range and GPS position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open source processor, facilitating real-time capture of thermal images; the results are stored on a micro SD card, the data storage device of the system. The thermal camera is interfaced to the processor through the SPI protocol. The thermal images are used for finding oil spills and for locating people who are drowning in low visibility during the night. A real-time clock (RTC) module is attached to the battery to provide the date and time of the thermal images captured. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, achieving a range of up to 40 miles at a higher power output.
A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH, temperature, water level and absolute pressure. It can withstand a maximum pressure of 160 psi, corresponding to depths of up to 100 m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle.

Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe

Procedia PDF Downloads 242
412 Influence of Environment-Friendly Organic Wastes on the Properties of Sandy Soil under Growing Zea mays L. in Arid Regions

Authors: Mohamed Rashad, Mohamed Hafez, Mohamed Emran, Emad Aboukila, Ibrahim Nassar

Abstract:

Environment-friendly organic wastes such as brewers' spent grain, a byproduct of the brewing process, have recently been used as soil amendments to improve soil fertility and plant production. In this work, treatments of 1% (T1) and 2% (T2) spent grain, 1% (C1) and 2% (C2) compost, and a mix of both sources (C1T1) were used and compared to a control for growing Zea mays L. on sandy soil under an arid Mediterranean climate. Soils were previously incubated at 65% saturation capacity for a month. The most relevant soil physical and chemical parameters were analysed. Water holding capacity and soil organic matter (OM) increased significantly along the treatments, with the highest values in T2. Soil pH decreased along the treatments, and the lowest pH was in C1T1. Bicarbonate decreased by 69% in C1T1 compared to the control. Total nitrogen (TN) and available P varied significantly among all treatments; the T2, C1T1 and C2 treatments increased TN 25-, 17- and 11-fold and P 1.2-, 0.6- and 0.3-fold, respectively, relative to the control. Available K showed the highest values in C1T1. Soil micronutrients increased significantly along all treatments, with the highest values in T2. After corn germination, significant variation was observed in the velocity of germination coefficients (VGC) among all treatments, in the order C1T1 > T2 > T1 > C2 > C1 > control. The highest final germination and germination index values were recorded in C1T1 and T2. Spent grains may compensate for deficiencies of macro- and micronutrients in newly reclaimed sandy soils without adverse effects, sustaining crop production, with the rider that excessive or continuous use needs to be avoided.
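The velocity of germination coefficient (VGC) compared across treatments above can be computed from daily germination counts; the formula below is one common definition (100 x total germinated / sum of n_i x t_i), and the counts in the example are invented for illustration, not the study's data.

```python
def germination_velocity(counts, days):
    """Coefficient of velocity of germination (one common definition):
    CVG = 100 * sum(n_i) / sum(n_i * t_i),
    where n_i seeds germinate on day t_i.
    Higher values mean faster germination."""
    total = sum(counts)
    weighted = sum(n * t for n, t in zip(counts, days))
    return 100.0 * total / weighted

# Example: 10 seeds germinate on day 1, 5 on day 2, 5 on day 3
print(round(germination_velocity([10, 5, 5], [1, 2, 3]), 2))  # 57.14
```

Shifting the same counts to earlier days raises the coefficient, which is why the faster-germinating C1T1 and T2 treatments rank highest on this scale.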

Keywords: corn and squash germination, environmentally friendly organic wastes, soil carbon sequestration, spent grains as soil amendment, water holding capacity

Procedia PDF Downloads 510
411 Braiding Channel Pattern Due to Variation of Discharge

Authors: Satish Kumar, Spandan Sahu, Sarjati Sahoo, K. K. Khatua

Abstract:

An experimental investigation has been carried out in a tilting flume 2 m wide, 13 m long, and 0.3 m deep to study the effect of flow on the formation of a braided channel pattern. Sediment-laden flow is recirculated through the flume, passing from the headgate to the sediment/water collecting tank through the tailgate. Without altering the geometry of the sand bed channel, the discharge is varied to study the evolution of the braided pattern with time; the flow rate is then varied to study the effect of flow on the formation of the braided pattern. The sediment transport rate is highly variable and was found to be a nonlinear function of flow rate, aspect ratio, longitudinal slope, and time. The total braided intensity (BIT) for each discharge case is found to be greater than the active braided intensity (BIA). Both parameters first increase and then decrease as time progresses, following a similar pattern for all observed discharge cases. When the flow is increased, the movement of sediment also increases, since the active braided intensity is found to adjust quickly. The measurement of velocity and boundary shear helps in studying the erosion and sedimentation processes in the channel and the formation of small meandering channels, and then of the braided channel, for different discharge conditions of a sediment-laden river. Due to the regime properties of rivers, both the total braided intensity and the active braided intensity become stable for a given channel and flow conditions. In the present case, the ratio of BIA to BIT is found to approach an asymptotic value of 0.4 with time. After a particular time elapses, new small channels are also found to form, with changes in the sinuosity of the active channels, thus forming the braided network. This is due to the continuous erosion and sedimentation processes occurring for the given flow and sediment conditions.

Keywords: active braided intensity, bed load, sediment transport, shear stress, total braided intensity

Procedia PDF Downloads 131
410 A Study of Tactics in the Dissident Urban Form

Authors: Probuddha Mukhopadhyay

Abstract:

The infiltration of key elements into the civil structure is foraying its way to reclaim what is its own. The reclamation of lives and spaces, once challenged, becomes a consistent process of ingress, disguised as parallel to the moving city, dispersing into discourses often unheard of and conveniently forgotten. In this age of 'hyper'-urbanization, solutions are suggested to a plethora of issues faced by citizens in improving their standards of living. Problems are ancillary to proposals that emerge out of the underlying disorders of the townscape. These interventions result in the formulation of urban policies to consolidate and optimize, to regularize and to streamline resources. Policy and practice are processes where the politics in policies defines the way in which urban solutions are prescribed. Social constraints, which formulate the various cycles of order and disorder within the urban realm, are the stigmas for such interventions. There is often a direct relation of policy to place, no matter how people-centric it may be projected to be. How we live our lives depends on where we live our lives: a relative statement for urban problems that varies from city to city. Communal composition, welfare, crisis, socio-economic balance, and the need for management are the generic roots of urban policy formulation. In reality, however, the gentry administering its environmentalism is the criterion that shapes and defines the values and expanse of such policies. With respect to the psycho-spatial characteristics of urban spheres on the other side of this game, there have been instances where associational values have been reshaped by interests. The public domain is reclaimed for exclusivity, creating fortified neighborhoods. Here, the citizen cumulative is often drifted by proposals that would, over time, deplete such landscapes of the city. It is the organized rebellion that in turn formulates further inward-looking enclaves of latent aggression.
In recent times, it has been observed that the unbalanced division of power and the implied processes of regulating the weak breed a rebellion that responds in bits and parts. This is a phenomenon that mimics guerilla warfare tactics, used in order to have systems straightened out, either by manipulation or by force. This is the form of the city determined by the various forms insinuated by city-wide decisions of the state. This study is an attempt at understanding the way in which development is interpreted by the state and by civil society, and the role that community-driven processes undertake to reinstate their claims to the city. It is a charter of consolidated patterns of negotiations that tend to counter policies. The research encompasses a study of various contested settlements in two cities of India, Mumbai and Kolkata, tackling dissent through spatial order. The study has been carried out to identify systems, formal and informal, catering to the most challenged interests of the people with respect to their habitat: a model to counter the top-down authoritative framework challenging the legitimacy of such settlements.

Keywords: urban design, insurgence, tactical urbanism, urban governance, civil society, state

Procedia PDF Downloads 150
409 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows motivates the need to establish fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force-balance approach, and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter, and bubble departure frequency. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, with the k-ω SST model selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and interfacial area concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter mean diameter (SMD) are over-predicted. This over-prediction may be caused by the consideration of only dispersed, spherical bubbles in the simulations. In future work, important physical mechanisms of the bubbles, such as merging and shrinking during sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow predictions.

Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model

Procedia PDF Downloads 320
408 Experimental Study and Numerical Simulation of the Reaction and Flow on the Membrane Wall of Entrained Flow Gasifier

Authors: Jianliang Xu, Zhenghua Dai, Zhongjie Shen, Haifeng Liu, Fuchen Wang

Abstract:

In an entrained flow gasifier, the combustible components are converted into the gas phase, and the mineral content is converted into ash. Most of the ash particles or droplets are deposited on the refractory or membrane wall and form a slag layer that flows down to the quenching system. The reaction of captured particles and the slag flow and phase transformation play an important role in gasifier performance and in safe and stable operation. The reaction characteristics of char particles captured on the molten slag were studied using a high-temperature stage microscope. The gasification of captured chars with CO2 on the slag surface was observed, recorded, and compared to the gasification of the original char. The particle size evolution and heat transfer process are discussed, and the gasification reaction index of the captured char particles is modeled. Analysis of the reaction index showed that the molten slag layer promoted char reactivity. Coupled with a heat transfer analysis, a shrinking particle model (SPM) was applied and modified to predict the gasification time at a carbon conversion of 0.9, and the results showed good agreement with the experimental data. A comprehensive model with gas-particle-slag flow and reaction sub-models was used to model different industrial gasifiers. The carbon conversion in the spatial domain and on the slag layer surface is investigated. Slag flow characteristics, such as slag velocity, molten slag thickness, and slag temperature distribution on the membrane wall and refractory brick, are discussed.
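The time-to-conversion prediction from a shrinking particle model can be illustrated with the classical surface-reaction-controlled form, t/τ = 1 − (1 − X)^(1/3); the authors' modified SPM differs in detail, and τ below is an arbitrary placeholder for the complete-conversion time, not a fitted value from this work.

```python
def spm_time(conversion, tau):
    """Classical shrinking particle model, surface-reaction control:
    time to reach carbon conversion X is t = tau * (1 - (1 - X)**(1/3)),
    where tau is the time for complete conversion of the particle."""
    return tau * (1.0 - (1.0 - conversion) ** (1.0 / 3.0))

# Fraction of the complete-conversion time needed to reach X = 0.9,
# the conversion level used in the study
print(round(spm_time(0.9, 1.0), 3))  # 0.536
```

The cube root reflects the shrinking external surface of a sphere: the last 10% of carbon takes nearly half of the total conversion time, which is why predictions are often reported at X = 0.9 rather than full conversion.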

Keywords: char, slag, numerical simulation, gasification, wall reaction, membrane wall

Procedia PDF Downloads 309
407 Designing an Exhaust Gas Energy Recovery Module Following Measurements Performed under Real Operating Conditions

Authors: Jerzy Merkisz, Pawel Fuc, Piotr Lijewski, Andrzej Ziolkowski, Pawel Czarkowski

Abstract:

The paper presents preliminary results of the development of an automotive exhaust gas energy recovery module. The aim of the performed analyses was to select the geometry of the heat exchanger that would ensure the highest possible heat transfer at minimum heat flow losses. The starting point for the analyses was a straight portion of pipe from which the exhaust system of the tested vehicle was made. The designed heat exchanger had a cylindrical cross-section, was 300 mm long, and was fitted with a diffuser and a confusor. The modeling work was performed for this geometry using the finite volume method, based on the Ansys CFX v12.1 and v14 software. The method consists in dividing the system into small control volumes for which the exhaust gas velocity and pressure calculations are performed using the Navier-Stokes equations. The heat exchange in the system was modeled based on the enthalpy balance. The temperature growth resulting from viscous effects was not taken into account. The heat transfer on the fluid/solid boundary in the wall layer with turbulent flow was based on an arbitrarily adopted dimensionless temperature. The boundary conditions adopted in the analyses included the convective condition of heat transfer on the outer surface of the heat exchanger and the mass flow and temperature of the exhaust gas at the inlet. The mass flow and temperature of the exhaust gas were assumed based on measurements performed in actual traffic using portable PEMS analyzers. The research object was a passenger vehicle fitted with a 1.9 dm3, 85 kW diesel engine. The tests were performed in city traffic conditions.

Keywords: waste heat recovery, heat exchanger, CFD simulation, PEMS

Procedia PDF Downloads 574
406 Determination of Viscosity and Degree of Hydrogenation of Liquid Organic Hydrogen Carriers by Cavity Based Permittivity Measurement

Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing

Abstract:

A very promising alternative to compression or cryogenics is the chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC). These carriers enable high energy density and allow, at the same time, efficient and safe storage under ambient conditions without leakage losses. Another benefit of this storage medium is the possibility of transporting it using the infrastructure already available for the transport of fossil fuels. Efficient use of LOHC requires precise process control, which in turn requires a number of sensors to measure all relevant process parameters, for example the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously reflects the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as permittivity, viscosity, or density. For example, each degree of loading corresponds to a different viscosity value. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. This study investigates permittivity changes resulting from changes in hydrogenation degree (chemical structure) and temperature. Based on calibration measurements, the degree of loading and temperature of LOHC can thus be determined by comparatively simple permittivity measurements in a cavity resonator. Subsequently, viscosity and density can be calculated. An experimental setup with a heating device and flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding changes in the resonance frequency of a few hundredths of a GHz were determined. This approach allows inline process monitoring of the hydrogenation of the liquid organic hydrogen carrier (LOHC).
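
The calibration idea described above (resonance frequency to degree of loading, then degree of loading to viscosity) can be sketched as a pair of table lookups. The calibration values below are invented placeholders, not measured data from the study:

```python
import numpy as np

# Sketch of the calibration chain: map a measured resonance frequency to
# a degree of hydrogenation by interpolating a calibration table, then
# look up viscosity the same way. All table values are invented
# placeholders, not data from the study.

# Calibration: degree of loading (0..1) vs. resonance frequency (GHz)
loading_cal = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
freq_cal = np.array([2.450, 2.438, 2.425, 2.411, 2.396])  # GHz, monotone

# Calibration: degree of loading vs. dynamic viscosity (mPa s)
visc_cal = np.array([3.9, 20.0, 45.0, 120.0, 260.0])

def loading_from_frequency(f_ghz):
    # np.interp needs increasing x, so interpolate on the reversed axis
    return float(np.interp(f_ghz, freq_cal[::-1], loading_cal[::-1]))

def viscosity_from_loading(x):
    return float(np.interp(x, loading_cal, visc_cal))

x = loading_from_frequency(2.425)   # mid-table frequency
eta = viscosity_from_loading(x)     # mPa s
```

In practice each temperature would carry its own calibration table, since both permittivity and viscosity are temperature dependent.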

Keywords: hydrogen loading, LOHC, measurement, permittivity, viscosity

Procedia PDF Downloads 81
405 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

The study of the primary flow velocity and the self impinging secondary jet flow mixing is important from both the fundamental research and the application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl and confinement. Predicting the flow features of self impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self impinging jets, but an extensive characterization of the jet interaction between a supersonic flow and self impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional k-omega standard turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self impinging secondary sonic jets in non-reacting flows. Efforts have been taken to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles with the same inlet jet pressure and mass flow ratio. The results from the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, because fixing the self impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.

Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles

Procedia PDF Downloads 590
404 Experimental and Numerical Study on the Effects of Oxygen Methane Flames with Water Dilution for Different Pressures

Authors: J. P. Chica Cano, G. Cabot, S. de Persis, F. Foucher

Abstract:

Among all possibilities to combat global warming, CO2 capture and sequestration (CCS) is presented as a great alternative to reduce greenhouse gas (GHG) emissions. Several strategies for CCS from industrial and power plants are being considered. The concept of combined oxy-fuel combustion has been the most promising alternative solution. Nevertheless, due to the high cost of pure O2 production, additional approaches have recently emerged. In this paper, an innovative combustion process for a gas turbine cycle was studied: it was composed of methane combustion with oxygen enhanced air (OEA), exhaust gas recirculation (EGR) and H2O issuing from STIG (Steam Injection Gas Turbine), with CO2 capture realized by a membrane separator. The effect of water on this combustion process was emphasized, and it was shown that a study of the influence of H2O dilution on the combustion parameters by experimental and numerical approaches had to be carried out. As a consequence, laminar burning velocity measurements were performed in a stainless steel spherical combustion chamber from atmospheric pressure up to 0.5 MPa, at 473 K and an equivalence ratio of 1. These experimental results were satisfactorily compared with the Chemical Workbench v.4.1 package in conjunction with the GRI-Mech 3.0 reaction mechanism. The good correlation obtained between experimental and calculated flame speeds showed the validity of the GRI-Mech 3.0 mechanism in this combustion domain: high H2O dilution, low N2, medium pressure. Finally, good estimations of flame speed and pollutant emissions were determined in other conditions compatible with a real gas turbine. In particular, mixtures (composed of CH4/O2/N2/H2O or CO2) leading to the same adiabatic temperature were investigated. The influences of oxygen enrichment and H2O dilution (compared to CO2) were discussed.
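
For spherical-vessel experiments of this kind, the laminar burning velocity is commonly extracted from the flame radius history via a stretch extrapolation. The sketch below uses the generic constant-pressure method on a synthetic radius trace; it is not necessarily the exact processing used in the study, and the density ratio is an assumed round number:

```python
import numpy as np

# Generic constant-pressure spherically expanding flame processing: the
# stretched flame speed S_b = dr/dt is extrapolated to zero stretch
# (K = (2/r) dr/dt), and the unstretched laminar burning velocity follows
# from the burned/unburned density ratio. The radius trace below is
# synthetic, not data from the study.

t = np.linspace(0.0, 4e-3, 40)                 # s
r = 0.006 + 2.0 * t + 150.0 * t**2             # m, synthetic radius trace

s_b = np.gradient(r, t)                        # stretched flame speed, m/s
kappa = 2.0 * s_b / r                          # stretch rate, 1/s

# Linear (Markstein) extrapolation S_b = S_b0 - L_b * K
coeffs = np.polyfit(kappa, s_b, 1)
s_b0 = coeffs[1]                               # unstretched flame speed, m/s
markstein = -coeffs[0]                         # Markstein length, m

rho_ratio = 0.14                               # assumed rho_burned / rho_unburned
s_u = s_b0 * rho_ratio                         # laminar burning velocity, m/s
```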

Keywords: CO₂ capture, oxygen enrichment, water dilution, laminar burning velocity, pollutants emissions

Procedia PDF Downloads 166
403 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class imbalance and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one classifier in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress prediction, that typically involve imbalanced data sets.
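
The allocation step of such an adaptive oversampler can be sketched as follows. The inverse-size weighting and SMOTE-style interpolation here are illustrative stand-ins for the paper's complexity-based allocation and model-based clustering, not its exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_counts(cluster_sizes, majority_size):
    """How many synthetic examples to add per minority sub-cluster so the
    minority class matches the majority class size, with smaller
    sub-clusters receiving proportionally more synthetic points."""
    sizes = np.asarray(cluster_sizes, dtype=float)
    deficit = majority_size - sizes.sum()
    if deficit <= 0:
        return np.zeros_like(sizes, dtype=int)
    # weight each sub-cluster by its inverse size (rarer -> more samples)
    w = (1.0 / sizes) / (1.0 / sizes).sum()
    counts = np.floor(w * deficit).astype(int)
    counts[0] += int(deficit) - counts.sum()   # absorb rounding remainder
    return counts

def synthesize(points, n_new):
    """SMOTE-style interpolation between random same-cluster pairs."""
    i = rng.integers(0, len(points), size=n_new)
    j = rng.integers(0, len(points), size=n_new)
    lam = rng.random((n_new, 1))
    return points[i] + lam * (points[j] - points[i])

# Two minority sub-clusters of different sizes, majority class of 60.
c1 = rng.normal(0.0, 0.3, size=(20, 2))
c2 = rng.normal(3.0, 0.3, size=(5, 2))
counts = oversample_counts([20, 5], majority_size=60)
new_pts = np.vstack([synthesize(c1, counts[0]), synthesize(c2, counts[1])])
```

The smaller sub-cluster receives the larger share of synthetic points, which is the within-class rebalancing idea the abstract describes.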

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
402 Catalytic Soot Gasification in Single and Mixed Atmospheres of CO2 and H2O in the Presence of CO and H2

Authors: Yeidy Sorani Montenegro Camacho, Samir Bensaid, Nunzio Russo, Debora Fino

Abstract:

LiFeO2 nano-powders were prepared via the solution combustion synthesis (SCS) method and used as a carbon gasification catalyst in a reducing atmosphere. The gasification of soot with CO2 and H2O in the presence of CO and H2 (syngas atmosphere) was investigated under atmospheric conditions using a fixed-bed micro-reactor placed in an electric, PID-regulated oven. The catalytic bed was composed of 150 mg of inert silica, 45 mg of carbon (Printex-U) and 5 mg of catalyst. The bed was prepared by ball milling the mixture at 240 rpm for 15 min to achieve intimate contact between the catalyst and soot. A Gas Hourly Space Velocity (GHSV) of 38,000 h-1 was used for the test campaign. The furnace was heated up to the desired temperature, a flow of 120 mL/min was sent into the system, and at the same time the concentrations of CO, CO2 and H2 were recorded at the reactor outlet using an EMERSON X-STREAM XEGP analyzer. Catalytic and non-catalytic soot gasification reactions were studied in a temperature range of 120°C – 850°C with a heating rate of 5 °C/min (non-isothermal case) and at 650°C for 40 minutes (isothermal case). The experimental results show that the gasification of soot with H2O and CO2 is inhibited by H2 and CO, respectively. The soot conversion at 650°C decreases from 70.2% to 31.6% when CO is present in the feed. Likewise, the soot conversion was 73.1% and 48.6% for the H2O-soot and H2O-H2-soot gasification reactions, respectively. It was also observed that carbon gasification in a mixed atmosphere, i.e., simultaneous carbon gasification with CO2 and steam with H2 and CO as co-reagents, is strongly inhibited by CO and H2, as was also observed in single atmospheres for both the isothermal and non-isothermal reactions. Further, it was observed that when CO2 and H2O react with carbon at the same time, there is a passive cooperation of steam and carbon dioxide in the gasification reaction; that is, the two gases operate on separate active sites without influencing each other. Finally, despite the extremely reducing operating conditions, it was demonstrated that 32.9% of the initial carbon was gasified using the LiFeO2 catalyst, while in the non-catalytic case only 8% of the soot was gasified at 650°C.
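
Conversion figures of this kind are generally obtained by integrating the carbon leaving the reactor as CO and CO2 and dividing by the carbon loaded initially. A sketch with invented analyser traces (not the study's data):

```python
import numpy as np

# Sketch of estimating soot conversion from outlet gas analysis:
# integrate the carbon molar flow leaving as CO + CO2 and divide by the
# carbon initially in the bed. The decay curve below is an invented
# placeholder, not the study's measurement.

R = 8.314           # J/(mol K)
p = 101325.0        # Pa
t_ref = 298.15      # K, assumed analyser reference temperature
q = 120e-6 / 60.0   # total gas flow, m^3/s (120 mL/min)
n_dot = p * q / (R * t_ref)                 # total molar flow, mol/s

time = np.linspace(0.0, 2400.0, 241)        # s (40 min isothermal test)
x_c = 0.010 * np.exp(-time / 800.0)         # assumed CO + CO2 mole fraction

# Trapezoidal integration of the carbon molar flow leaving the reactor
flow = n_dot * x_c                          # mol C / s
n_c_out = float((0.5 * (flow[:-1] + flow[1:]) * np.diff(time)).sum())

n_c_0 = 0.045 / 12.011        # 45 mg of carbon in the bed, mol
conversion = n_c_out / n_c_0  # fraction of the soot gasified
```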

Keywords: soot gasification, nanostructured catalyst, reducing environment, syngas

Procedia PDF Downloads 263
401 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapor (or a gas-vapor mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapor-gas mixture (or pure vapor) that retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapor core flow. The computational mesh is non-orthogonal and dynamically adapts to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modeling is a significant step beyond current capabilities, as it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on a finite volume method and a co-located variable storage scheme. An in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels. The results include velocity and pressure profiles, as well as axial variations of film thickness, Nusselt number and interface gas mass fraction.
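
A useful zero-shear baseline for such results is the classical Nusselt solution for laminar film condensation on a vertical wall; the full elliptic model then quantifies the departure caused by countercurrent interfacial shear and the noncondensable gas. A sketch with rough, assumed water properties (illustrative only):

```python
# Classical Nusselt laminar film condensation on a vertical wall gives a
# zero-shear, pure-vapor baseline. Property values are rough round
# numbers assumed for illustration, not the study's conditions.

g = 9.81          # m/s^2
k_l = 0.68        # liquid thermal conductivity, W/(m K)
mu_l = 2.8e-4     # liquid viscosity, Pa s
rho_l = 960.0     # liquid density, kg/m^3
rho_v = 0.6       # vapour density, kg/m^3
h_fg = 2.26e6     # latent heat, J/kg
dT = 5.0          # T_sat - T_wall, K

def film_thickness(x):
    """Nusselt film thickness delta(x) a distance x down the wall, m."""
    return (4.0 * k_l * mu_l * dT * x
            / (g * rho_l * (rho_l - rho_v) * h_fg)) ** 0.25

def local_nusselt(x):
    """Local Nusselt number Nu_x = h x / k_l = x / delta(x)."""
    return x / film_thickness(x)

delta = film_thickness(0.3)   # film thickness 0.3 m down the wall, m
nu_x = local_nusselt(0.3)
```

Upward core shear thickens the film and can flood it, which is precisely the physics the abstract's interface-resolving model captures and this baseline cannot.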

Keywords: reflux condensation, CFD, two-phase flow, Nusselt number

Procedia PDF Downloads 364
400 Effects of Seed Culture and Attached Growth System on the Performance of Anammox Hybrid Reactor (AHR) Treating Nitrogenous Wastewater

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

The start-up of the anammox (anaerobic ammonium oxidation) process in a hybrid reactor delineated four distinct phases, i.e., cell lysis, lag phase, activity elevation and stationary phase. The cell lysis phase was marked by the death and decay of heterotrophic denitrifiers, resulting in the breakdown of organic nitrogen into ammonium. The lag phase showed the initiation of anammox activity with the turnover of heterotrophic denitrifiers, evident from the appearance of NO3-N in the effluent. In the activity elevation phase, anammox became the dominant reaction, as seen from the consequent reduction of NH4-N into N2 with increased NO3-N in the effluent. Proper selection of a mixed seed culture at an influent NO2-/NH4+ ratio of 1:1 and a hydraulic retention time (HRT) of 1 day led to an early start-up of anammox within 70 days. Pseudo steady state removal efficiencies of NH4+ and NO2- were found to be 94.3% and 96.4%, respectively, at a nitrogen loading rate (NLR) of 0.35 kg N/m3d and an HRT of 1 day. Analysis of the data indicated that the attached growth system contributes an additional 11% increase in ammonium removal and results in an average 29% reduction in the sludge washout rate. A mass balance study of nitrogen indicated that 74.1% of the total input nitrogen is converted into N2 gas, with 11.2% utilized in biomass development. Scanning electron microscope (SEM) observation of the granular sludge clearly showed the presence of cocci and rod shaped microorganisms intermingled on the external surface of the granules. The average size of the anammox granules (1.2-1.5 mm), with an average settling velocity of 45.6 m/h, indicated a high degree of granulation resulting in the formation of well compacted granules in the anammox process.

Keywords: anammox, hybrid reactor, startup, granulation, nitrogen removal, mixed seed culture

Procedia PDF Downloads 186
399 Microbubbles Enhanced Synthetic Phorbol Ester Degradation by Ozonolysis

Authors: D. Kuvshinov, A. Siswanto, W. Zimmerman

Abstract:

Phorbol-12-myristate-13-acetate (TPA) is a synthetic analogue of phorbol ester (PE), a natural toxic compound of plants of the family Euphorbiaceae. The oil extracted from plants of this family is primarily a useful feedstock for biofuel. However, this oil could also be used as a food stock due to its significant nutrition content. The limitation to utilizing the oil as a food stock is mainly the toxicity of PE. Nowadays the majority of PE detoxification processes are expensive, as they include a multi-step alcohol extraction sequence. Ozone is a strong oxidative agent. In reaction with PE, it attacks the carbon double bond of PE. This modification of the PE molecular structure results in a nontoxic ester with high lipid content. This report presents data on the development of a simple and cheap PE detoxification process using water as a buffer and ozone as the reactive component. The core of this new technique is the simultaneous application of a new microscale plasma unit for ozone production and a patented gas oscillation technology. In combination with the reactor design, the technology permits ozone injection into the water-TPA mixture in the form of microbubbles. The efficacy of a heterogeneous process depends on the diffusion coefficient, which can be controlled by contact time and interface area. The low velocity of rising microbubbles and the high surface-to-volume ratio allow fast mass transfer to be achieved during the process. Direct injection of ozone is the most efficient process for a highly reactive and short-lived chemical. Data on the plasma unit behavior are presented, and the influence of the gas oscillation technology on the microbubble production mechanism is discussed. Data on the overall process efficacy for TPA degradation are shown.
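
The mass-transfer argument above (slow rise, large interfacial area) can be put in numbers with the Stokes rise velocity and the surface-to-volume ratio of a sphere. The bubble sizes below are illustrative assumptions, not values from the study:

```python
# Stokes rise velocity and surface-to-volume ratio for small bubbles.
# Bubble diameters are assumed for illustration; Stokes' law strictly
# holds only at small bubble Reynolds numbers, so the 1 mm value is a
# rough comparison point.

g = 9.81        # m/s^2
rho_l = 998.0   # water density, kg/m^3
rho_g = 1.2     # gas density, kg/m^3
mu = 1.0e-3     # water viscosity, Pa s

def stokes_rise_velocity(d):
    """Terminal rise velocity of a small spherical bubble, m/s."""
    return g * d**2 * (rho_l - rho_g) / (18.0 * mu)

def surface_to_volume(d):
    """Interfacial area per unit gas volume, 1/m (= 6/d for a sphere)."""
    return 6.0 / d

u_micro = stokes_rise_velocity(100e-6)   # 100 um microbubble
u_fine = stokes_rise_velocity(1e-3)      # 1 mm conventional bubble
ratio = surface_to_volume(100e-6) / surface_to_volume(1e-3)
```

A tenfold reduction in diameter gives a tenfold larger interfacial area per unit gas volume and a hundredfold slower rise, hence the longer ozone contact time claimed above.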

Keywords: microbubble, ozonolysis, synthetic phorbol ester, chemical engineering

Procedia PDF Downloads 217
398 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper discusses how an innovative cloud based, analytics enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing today at a rapid pace. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, we are seeing pressures building up in Europe, China and many other countries that would require absolute traceability of every drug and device end to end. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but also to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The problem of the counterfeit drug market, recognized by the FDA, causes billions of dollars of losses every year. Even in the UAE, concerns over the prevalence of counterfeit drugs, which enter through ports such as Dubai, remain significant, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such systems are far from being capable of tracing a lot and serial number beyond the enterprise and making this information easily available in real time. Solution: The proposed solution is a service provider that allows all subscribers to take advantage of this service. The solution allows a service provider, regardless of its physical location, to host this cloud based traceability and analytics solution for millions of distribution transactions that capture the lots of each drug and device. The solution platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists, with huge investment made in Dubai Healthcare City, to use technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud based solution and become a global hub of traceability.

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 529
397 The Role of Oceanic Environmental Conditions on Catch of Sardinella spp. In Ghana

Authors: Emmanuel Okine Neokye, Serge Dossou, Martin Iniga Bortey, Nketia Alabi-Doku

Abstract:

Fish stock distribution is greatly influenced by oceanographic environmental conditions. Temporal variations of temperature and other oceanic properties resulting from climate change have been documented to have a strong impact on fisheries and aquaculture. In Ghana, Sardinella species are one of the most important fisheries resources; they constitute about 60% of the total catch of coastal fisheries and are most abundant during the upwelling season. The present study investigated the role of physical oceanographic environmental conditions in the catches of the Sardinella species S. aurita and S. maderensis landed in Ghana. Furthermore, we examined the relationship between environmental conditions and catches of Sardinella species for seasonal and interannual variations between 2005 and 2015. For oceanographic environmental factors, we used comprehensive datasets consisting of: (1) daily in situ SST data obtained at two coastal stations in Ghana, (i) Cape Three Points (4.7° N, -2.09° W) and (ii) Tema (5° N, 0° E), for the period 2005–2015; (2) monthly SST data (MOAA GPV) from JAMSTEC; and (3) gridded 10 metre wind data from the CCMP reanalysis. The analysis of the data collected showed that higher (lower) wind velocity forms stronger (weaker) coastal upwelling, detected as lower (higher) SST, resulting in a higher (lower) catch of Sardinella spp. in both seasonal and interannual variations. It was also observed that the catchability of small pelagic fish species such as Sardinella spp. depends on the intensity of the coastal upwelling. Moreover, the Atlantic Meridional Mode index (a climatic index) is now known to be a possible contributor to the interannual variation in the catch of small pelagic fish species.
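
The reported SST-catch relationship amounts to a negative correlation between upwelling-season SST and landings. A sketch on synthetic series (not the study's data) illustrating the expected sign of that correlation:

```python
import numpy as np

# Illustration of the relationship described above: years with lower
# upwelling-season SST (stronger upwelling) show higher Sardinella catch,
# i.e. a negative SST-catch correlation. Both series are synthetic.

rng = np.random.default_rng(1)
years = np.arange(2005, 2016)
sst = 27.0 + rng.normal(0.0, 0.5, size=years.size)   # assumed SST, deg C
catch = 80.0 - 30.0 * (sst - 27.0) + rng.normal(0.0, 3.0, size=years.size)

r = np.corrcoef(sst, catch)[0, 1]   # expected to be strongly negative
```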

Keywords: Sardinella spp., fish, climate change, Ghana

Procedia PDF Downloads 15
396 Risk Management and Resiliency: Evaluating Walmart’s Global Supply Chain Leadership Using the Supply Chain Resilience Assessment and Management Framework

Authors: Meghan Biallas, Amanda Hoffman, Tamara Miller, Kimmy Schnibben, Janaina Siegler

Abstract:

This paper assesses Walmart's supply chain resiliency amidst continuous supply chain disruptions. It aims to evaluate how Walmart can use supply chain resiliency theory to retain its status as a global supply chain leader. The Bloomberg terminal was used to rank Walmart's 754 Tier-1 suppliers by the size of their relationship to Walmart. Additional data from IBISWorld and Statista were also used in the analysis. This research focused on the top ten Tier-1 suppliers with the greatest percentage of their revenue attributed to Walmart. This paper also applied the firm's information to the Supply Chain Resilience Assessment and Management (SCRAM) framework to evaluate the firm's capabilities, vulnerabilities, and gaps. A rubric was created to quantify Walmart's risks using four pillars: flexibility, velocity, visibility, and collaboration. Information and examples were drawn from Walmart's 10-K filing. For each example, a rating of 1 indicated high resiliency, 0 indicated medium resiliency, and -1 indicated low resiliency. The findings of this study are as follows: (1) Walmart has maintained its leadership through its ability to remain resilient with regard to visibility, efficiency, capacity, and collaboration. (2) Walmart is experiencing increases in supply chain costs due to internal factors affecting the company and external factors affecting its suppliers. (3) There are a number of emerging supply chain risks among Walmart's suppliers that could make it difficult for Walmart to remain a supply chain leader in the future. Using the SCRAM framework, this paper assesses how Walmart measures up to supply chain resiliency theory, identifying areas of strength as well as areas where Walmart can improve in order to remain a global supply chain leader.
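
The rubric described above can be sketched as a simple per-pillar average of the +1/0/-1 ratings; the ratings below are invented for illustration, not taken from the 10-K analysis:

```python
# Minimal sketch of the four-pillar rubric: each pillar collects example
# ratings (+1 high, 0 medium, -1 low resiliency). The ratings here are
# invented placeholders, not the paper's actual assessments.

def pillar_score(ratings):
    """Average rating for one pillar, in [-1, 1]."""
    return sum(ratings) / len(ratings)

rubric = {
    "flexibility":   [1, 0, 1],
    "velocity":      [0, 0, 1],
    "visibility":    [1, 1, 0],
    "collaboration": [1, -1, 1],
}

scores = {pillar: pillar_score(r) for pillar, r in rubric.items()}
overall = sum(scores.values()) / len(scores)
```

Averaging keeps each pillar on the same -1 to +1 scale regardless of how many 10-K examples it collects, so pillars with different evidence counts remain comparable.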

Keywords: supply chain resiliency, zone of balanced resilience, supply chain resilience assessment and management, supply chain theory

Procedia PDF Downloads 129
395 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Some common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall parallel to the axis of the jet exit but offset by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall and the bottom wall creates a low pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point. This is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. Therefore, the offset jet combines the characteristics of free, impingement and wall jets, and it is relatively more complex compared to these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To achieve this purpose, a numerical study was conducted to investigate a three-dimensional offset jet flow through the resolution of the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields and Reynolds stresses.

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 304
394 Polyphenols: Isolation, Purification, Characterization and Evaluation of Various Biological Activities

Authors: Abdullah Ijaz Hussain

Abstract:

The purpose of this study was to explore the cardioprotective and anti-inflammatory effects of polyphenol-rich extracts from Cucurbitaceae family members, including Cucurbita pepo, C. moschata, and C. maxima, in rat models. The initial crude extracts from these cucurbits were further separated into hexane, chloroform, ethyl acetate, butanol, and aqueous ethanol fractions, labeled as HEF, CHF, EAF, BUF, and AEF, respectively. Of these, AEF yielded the highest amount, followed by BUF, HEF, EAF, and CHF in descending order. Notably, EAF contained the greatest concentration of total phenolics, flavonoids, and flavonols. In terms of antioxidant activity, EAF demonstrated the most potent DPPH radical scavenging capability, followed by CHF, BUF, AEF, and HEF. EAF also exhibited the strongest reducing potential among the fractions. RP-HPLC analysis identified various phenolic acids and flavonoids across the Cucurbita fractions, including ferulic acid, vanillic acid, p-coumaric acid, gallic acid, p-hydroxybenzoic acid, chlorogenic acid, catechin, rutin, quercetin, myricetin, and kaempferol. Doses of 250 and 500 mg/kg body weight of Cucurbita fractions were administered orally to male WKY rats daily for 21 days. The rats' body weight, heart rate, and blood pressure were monitored bi-weekly. Oxidative status assessments were conducted using plasma samples to measure levels of malondialdehyde (MDA), superoxide dismutase (SOD), reduced glutathione (GSH), nitric oxide (NO), and total antioxidant capacity (TAC). At the study's conclusion, surgical assessments, including blood pressure, pulse wave velocity (PWV), and echocardiograms (ECG), were performed. The findings indicated that the EAF from Cucurbita significantly enhanced antihypertensive and antioxidant activities in the SHR rat group.

Keywords: polyphenols, chlorogenic acid, antihypertensive activity, oxidative stress, LC-MS

Procedia PDF Downloads 28
393 Effects of Glucogenic and Lipogenic Diets on Ruminal Microbiota and Metabolites in Vitro

Authors: Beihai Xiong, Dengke Hua, Wouter Hendriks, Wilbert Pellikaan

Abstract:

To improve the energy status of dairy cows in the early lactation, lots of jobs have been done on adjusting the starch to fiber ratio in the diet. As a complex ecosystem, the rumen contains a large population of microorganisms which plays a crucial role in feed degradation. Further study on the microbiota alterations and metabolic changes under different dietary energy sources is essential and valuable to better understand the function of the ruminal microorganisms and thereby to optimize the rumen function and enlarge feed efficiency. The present study will focus on the effects of two glucogenic diets (G: ground corn and corn silage; S: steam-flaked corn and corn silage) and a lipogenic diet (L: sugar beet pulp and alfalfa silage) on rumen fermentation, gas production, the ruminal microbiota and metabolome, and also their correlations in vitro. The gas production was recorded consistently, and the gas volume and producing rate at times 6, 12, 24, 48 h were calculated separately. The fermentation end-products were measured after fermenting for 48 h. The ruminal bacteria and archaea communities were determined by 16S RNA sequencing technique, the metabolome profile was tested through LC-MS methods. Compared to the diet G and S, the L diet had a lower dry matter digestibility, propionate production, and ammonia-nitrogen concentration. The two glucogenic diets performed worse in controlling methane and lactic acid production compared to the L diet. The S diet produced the greatest cumulative gas volume at any time points during incubation compared to the G and L diet. The metabolic analysis revealed that the lipid digestion was up-regulated by the diet L than other diets. On the subclass level, most metabolites belonging to the fatty acids and conjugates were higher, but most metabolites belonging to the amino acid, peptides, and analogs were lower in diet L than others. 
Differences in rumen fermentation characteristics were associated with (or resulted from) changes in the relative abundance of bacterial and archaeal genera. Most highly abundant bacteria were stable or only slightly influenced by the diets, while several amylolytic and cellulolytic bacteria were sensitive to the dietary changes. The L diet had a significantly higher abundance of cellulolytic bacteria, including the genera Ruminococcus, Butyrivibrio, Eubacterium, Lachnospira, unclassified Lachnospiraceae, and unclassified Ruminococcaceae. The relative abundances of amylolytic bacterial genera, including Selenomonas_1, Ruminobacter, and Succinivibrionaceae_UCG-002, were higher in diets G and S. These affected bacteria were also shown to have strong associations with certain metabolites. Selenomonas_1 and Succinivibrionaceae_UCG-002 may contribute to the higher propionate production in diets G and S by enhancing the succinate pathway. The results indicated that the two glucogenic diets had a greater extent of gas production, a higher dry matter digestibility, and produced more propionate than diet L. Steam-flaked corn did not perform better than ground corn with respect to fermentation end-products. This study offers a deeper understanding of ruminal microbial functions, which could assist in improving rumen function and thereby ruminant production.
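
The cumulative gas volumes recorded at 6, 12, 24, and 48 h allow production rates over each interval to be compared between diets. A minimal sketch of that interval-rate calculation is below; the volume values are purely illustrative and do not come from the study.

```python
# Hypothetical cumulative gas volumes (mL/g DM) at the recorded time points.
times = [6, 12, 24, 48]                 # incubation time, h
volumes = [55.0, 95.0, 140.0, 170.0]    # illustrative values only

def production_rates(times, volumes):
    """Average gas production rate (mL/h) over each recording interval."""
    rates = []
    for (t0, v0), (t1, v1) in zip(zip(times, volumes),
                                  zip(times[1:], volumes[1:])):
        rates.append((v1 - v0) / (t1 - t0))
    return rates

rates = production_rates(times, volumes)
# With these illustrative volumes the rate falls over time,
# as fermentable substrate is depleted.
```

In practice a kinetic model (e.g. a single-pool exponential) would be fitted to such data; the simple interval rates above are just the raw quantity such a fit summarizes.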

Keywords: gas production, metabolome, microbiota, rumen fermentation

Procedia PDF Downloads 153
392 Inferring Influenza Epidemics in the Presence of Stratified Immunity

Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley

Abstract:

Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inferences about the epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies dichotomize the data at a threshold (typically a titre of 1:40) to classify individuals as likely recently infected or likely immune and then estimate the cumulative incidence. Dichotomizing the data in this way can lead to underestimation of the influenza attack rate. To improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model for influenza transmission is proposed here, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres were collected from 523 individuals during the pre-pandemic phase and 465 individuals during the post-pandemic phase of the 2009 pandemic in Hong Kong. The model was fitted to the serological data in an age-structured population using a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproductive number in the presence of stratified immunity, and its temporal dynamics were compared to those of a traditional epidemic model using dichotomized seropositivity data.
The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre of 1:40 (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] over the first four months after 1 May, then dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
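
The core idea of stratified immunity — mapping each antibody titre level onto a partial susceptibility rather than a 1:40 all-or-nothing cut — can be sketched as a population-average susceptibility weighting. The titre distribution, susceptibility values, and R0 below are hypothetical illustrations, not the fitted quantities from this study.

```python
# Hypothetical fraction of the population at each HI titre level,
# and an assumed susceptibility for each level (higher titre -> lower
# susceptibility), instead of a dichotomous cut at 1:40.
titre_fractions = {"<1:10": 0.40, "1:10": 0.25, "1:20": 0.15,
                   "1:40": 0.12, ">=1:80": 0.08}
susceptibility  = {"<1:10": 1.00, "1:10": 0.80, "1:20": 0.55,
                   "1:40": 0.30, ">=1:80": 0.10}

def effective_R(R0, fractions, susceptibility):
    """R_B = R0 x population-average susceptibility across titre strata."""
    return R0 * sum(f * susceptibility[level]
                    for level, f in fractions.items())

R_B = effective_R(1.6, titre_fractions, susceptibility)
```

A dichotomous model would instead set susceptibility to 1 below the 1:40 threshold and 0 at or above it, which is exactly the simplification the stratified formulation relaxes.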

Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity

Procedia PDF Downloads 261
391 Fluvial Stage-Discharge Rating of a Selected Reach of Jamuna River

Authors: Makduma Zahan Badhan, M. Abdul Matin

Abstract:

A study has been undertaken to develop a fluvial stage-discharge rating curve for the Jamuna River. Past cross-sectional surveys of the Jamuna River reach between Sirajgonj and Tangail have been analyzed. The analysis includes the estimation of discharge carrying capacity, possible maximum scour depth, and sediment transport capacity of the selected reaches. To predict the discharge and sediment carrying capacity, stream flow data, including cross-sectional area, top width, water surface slope, and median diameter of the bed material at the selected stations, have been collected, with some values calculated from reduced level data. A well-known resistance equation has been adopted and modified to a simple form for use in the present analysis. The modified resistance equation has been used to calculate the mean velocity through the channel sections. In addition, a sediment transport equation has been applied to predict the transport capacity of the various sections. Results show that the existing drainage sections of the Jamuna channel reach under study have adequate carrying capacity under existing bank-full conditions, but these reaches are subject to bed erosion even in low-flow situations. Regarding sediment transport rate, the channel flow is estimated to carry a relatively high concentration of bed material. Finally, stage-discharge curves for the various sections have been developed. Based on the stage-discharge rating data of the various sections, a water surface profile and a sediment rating curve of the Jamuna River have been developed, and flooding conditions have been analyzed from the predicted water surface profile.
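
The abstract does not name the resistance equation used; a common choice for this kind of rating analysis is Manning's equation, V = (1/n) R^(2/3) S^(1/2), with discharge Q = A·V. The sketch below assumes Manning's form purely for illustration, with hypothetical values for roughness, hydraulic radius, slope, and area.

```python
def mean_velocity_manning(n, R, S):
    """Mean velocity (m/s) from Manning's equation in SI units:
    V = (1/n) * R^(2/3) * S^(1/2),
    n: roughness coefficient, R: hydraulic radius (m), S: slope (-)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

def discharge(A, V):
    """Discharge Q (m^3/s) for cross-sectional area A (m^2) and velocity V."""
    return A * V

# Illustrative values only (not from the Jamuna survey data):
V = mean_velocity_manning(n=0.025, R=4.0, S=0.00007)   # ~0.84 m/s
Q = discharge(A=5000.0, V=V)
```

Repeating this calculation over a range of stages (each with its own A and R) is what builds up the stage-discharge rating curve for a section.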

Keywords: discharge rating, flow profile, fluvial, sediment rating

Procedia PDF Downloads 185
390 Characterization of Atmospheric Aerosols by Developing a Cascade Impactor

Authors: Sapan Bhatnagar

Abstract:

Micron-size particles emitted from different sources and produced by combustion have serious negative effects on human health and the environment; they can penetrate deep into our lungs through the respiratory system. Determining the amount of particulate matter present per cubic meter of atmosphere is necessary to monitor, regulate, and model atmospheric particulate levels. A cascade impactor is used to collect atmospheric particulates, and gravimetric analysis then determines their concentrations over different size ranges. Cascade impactors classify particles by aerodynamic size and operate on the principle of inertial impaction. An impactor consists of a number of stages, each having an impaction plate and a nozzle, with the stages connected in series at smaller and smaller cut-off diameters. The air stream passes through the nozzle toward the plate; particles in the stream with large enough inertia impact upon the plate, while smaller particles pass on to the next stage. By designing each successive stage with a higher air stream velocity in the nozzle, smaller-diameter particles are collected at each stage. The impactor built here consists of four steel stages, each with a cut-off diameter below 10 microns. Each stage has a collection plate soaked with oil to prevent particle bounce, allowing the impactor to function at high mass concentrations: even after the plate is coated with particles, an incoming particle still meets a wet surface, which significantly reduces bounce. Particles too small to be impacted on the last collection plate are collected on a backup microglass fiber filter; the fibers provide a larger surface area to which particles may adhere, and the voids in the filter media help reduce particle re-entrainment.
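
The cut-off diameter of each stage follows from the 50%-collection Stokes number, Stk50 = ρp·Cc·d50²·U / (9·μ·W), solved for d50. The sketch below assumes the standard Stk50 ≈ 0.24 for round jets and uses illustrative nozzle dimensions and jet velocity, not the dimensions of the impactor described here; the slip correction Cc is taken as 1 for simplicity.

```python
import math

def cutoff_diameter(stk50, mu, W, rho_p, U, Cc=1.0):
    """d50 (m) from Stk50 = rho_p * Cc * d50**2 * U / (9 * mu * W).
    mu: air viscosity (Pa.s), W: nozzle diameter (m),
    rho_p: particle density (kg/m^3), U: jet velocity (m/s)."""
    return math.sqrt(9.0 * mu * W * stk50 / (rho_p * Cc * U))

# Illustrative stage: air at ~20 C, 1 mm round nozzle, 10 m/s jet velocity.
d50 = cutoff_diameter(stk50=0.24, mu=1.81e-5, W=1.0e-3,
                      rho_p=1000.0, U=10.0)   # ~2 micrometres
```

Increasing U (or shrinking W) at each successive stage drives d50 down, which is why the later stages capture progressively finer particles.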

Keywords: aerodynamic diameter, cascade, environment, particulates, re-entrainment

Procedia PDF Downloads 321