Search results for: heat exchange coefficient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6329


479 Construction of a Dynamic Model of Cerebral Blood Circulation for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki

Abstract:

Brain resuscitation is becoming increasingly important as various clinical guidelines pertinent to emergency care are revised. In brain resuscitation, the control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required to stabilize the physiological state of the brain, and it is described as an essential treatment point in many guidelines for disorders and diseases such as brain injury, stroke, and encephalopathy. Thus, an integrated control system for BT, ICP, and CBF would greatly contribute to alleviating the burden on medical staff and improving treatment effect in brain resuscitation. In order to develop such a control system, models related to BT, ICP, and CBF are required for control simulation, because trial-and-error experiments using patients are not ethically allowed. A static model of cerebral blood circulation from the intracranial arteries and vertebral artery to the jugular veins has already been constructed and verified. However, it is impossible in that model to represent the pooling of blood in blood vessels, which is one cause of cerebral hypertension, and it is also impossible to represent the pulsing motion of blood vessels caused by blood pressure changes, which can affect cerebral tissue pressure. Thus, a dynamic model of cerebral blood circulation was constructed, taking into account the elasticity of the blood vessels and the inertia of the vessel walls. The constructed dynamic model was numerically analyzed using normal data: the arterial blood flows in the cerebral circulation, the distribution of blood pressure in the Circle of Willis, and the change of blood pressure along the flow path were calculated and verified against physiological knowledge. Because each calculated value fell within the generally known normal range, the model has no problem representing at least the normal physiological state of the brain. The next task is to verify the accuracy of the present model in cases of disease or disorder. Currently, the construction of a migration model of extracellular fluid and a model of heat transfer in cerebral tissue is in progress, so that they can become parts of an integrated model of brain physiological state, which is necessary for developing a future integrated control system for BT, ICP, and CBF. The present model is applicable to constructing an integrated model representing at least the normal brain physiological state by combining it with such models.
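
As an illustration of the kind of lumped-parameter dynamics such a model involves, the following minimal sketch integrates a single compliant vessel segment, with vessel elasticity represented as a compliance C, wall/blood inertia as an inertance L, and downstream resistance R. This is not the authors' model; all parameter values and the pulsatile inflow are hypothetical.

```python
import numpy as np

# Minimal lumped-parameter sketch of a compliant vessel segment:
# dP/dt     = (Q_in - Q_out) / C      (vessel elasticity stores volume)
# dQ_out/dt = (P - R * Q_out) / L     (inertia of blood / vessel wall)
# All values are illustrative, not taken from the paper.
C = 1.0e-9   # compliance [m^3/Pa]
L = 1.0e6    # inertance  [Pa*s^2/m^3]
R = 1.0e8    # resistance [Pa*s/m^3]

dt, t_end = 1e-4, 2.0
P, Q_out = 10000.0, 1.0e-4          # initial pressure [Pa], outflow [m^3/s]

for i in range(int(t_end / dt)):
    t = i * dt
    Q_in = 1.0e-4 * (1.0 + 0.3 * np.sin(2 * np.pi * 1.2 * t))  # pulsatile inflow (~72 bpm)
    dP = (Q_in - Q_out) / C
    dQ = (P - R * Q_out) / L
    P += dt * dP
    Q_out += dt * dQ

print(f"final pressure {P:.0f} Pa, final outflow {Q_out * 1e6:.1f} mL/s")
```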

Keywords: dynamic model, cerebral blood circulation, brain resuscitation, automatic control

Procedia PDF Downloads 148
478 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids

Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho

Abstract:

In this paper, the thermo-fluid dynamics of mixed convection (natural and forced) and turbulent flow around complex geometries are studied. In these applications, it is necessary to analyze the interaction between the flow field and a heated immersed body with constant surface temperature. The paper presents a study of two-dimensional incompressible Newtonian flow around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature field with Dirichlet boundary conditions. Important quantities such as the Strouhal number (calculated using the Fast Fourier Transform, FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure are evaluated. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, and a second-order Adams-Bashforth method and a fourth-order Runge-Kutta method in time, with the fractional step method coupling the calculation of pressure, velocity, and temperature. Turbulence was simulated with the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses a single transport equation for the turbulent viscosity. The results were compared with numerical data, validating the treatment of heat transfer together with the turbulence models. The IBM/VPM is a powerful tool for simulating flow around complex geometries. The results showed good numerical convergence in relation to the adopted references.
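
The Smagorinsky closure mentioned above computes a sub-grid eddy viscosity from the resolved strain rate, nu_t = (Cs*Delta)^2 |S|. The following minimal sketch evaluates this on a synthetic 2-D velocity field; the field, the grid, and the constant Cs = 0.17 are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Smagorinsky sub-grid eddy viscosity on a 2-D resolved velocity field:
# nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 * S_ij * S_ij)
nx, ny, dx = 64, 64, 0.01
x = np.arange(nx) * dx
y = np.arange(ny) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)    # synthetic resolved velocity
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

dudx, dudy = np.gradient(u, dx, dx)
dvdx, dvdy = np.gradient(v, dx, dx)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))

Cs, Delta = 0.17, dx                                  # typical Smagorinsky constant, filter width
nu_t = (Cs * Delta) ** 2 * S_mag
print("max sub-grid eddy viscosity:", nu_t.max())
```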

Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model

Procedia PDF Downloads 110
477 Analysis of Power Demand for the Common Rail Pump Drive in an Aircraft Engine

Authors: Rafal Sochaczewski, Marcin Szlachetka, Miroslaw Wendeker

Abstract:

Requirements to reduce exhaust emissions and fuel consumption while increasing the power factor increasingly apply to internal combustion engines intended for aircraft applications. As a result, intensive research work is underway to develop a diesel-powered unit for aircraft propulsion. Due to a number of advantages, such as the absence of a cylinder head (lower heat loss) and of a timing system, and the opposed movement of the pistons, which is conducive to engine balance, a two-stroke compression-ignition engine with opposed pistons has been developed and upgraded. Of course, such a construction also has drawbacks. The main one is the necessity of using a gear connecting two crankshafts or a complicated crank system with one shaft. The peculiarity of the arrangement of pistons with sleeves, as well as the fulfillment of rigorous requirements, makes it necessary to apply the most modern technologies and constructional solutions. In the case of the fuel supply system, it was decided to use common rail system elements. The paper presents an analysis of the possibility of using a common rail pump to supply an aircraft compression-ignition engine. It is a two-stroke, three-cylinder engine with opposed pistons and 100 kW of power. Each combustion chamber is supplied by two injectors controlled by electromagnetic valves. In order to assess the possibility of using a common rail pump, four high-pressure pumps were tested on a bench. They are piston pumps differing in the number and geometry of the pumping sections. The analysis included the torque on the pump drive shaft and the power needed to drive the pump depending on the rotational speed, pumping pressure, and fuel dispenser settings. The research made it possible to optimize the engine fuel supply system depending on the fuel demand and the way the pump is mounted on the engine. Acknowledgment: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
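
The drive power reported in such bench tests follows directly from the measured shaft torque and rotational speed, P = T * omega. The short sketch below shows that conversion; the torque and speed values are illustrative, not measurements from the paper.

```python
import math

def pump_drive_power(torque_nm: float, speed_rpm: float) -> float:
    """Mechanical power [W] needed to drive the pump: P = T * omega."""
    omega = 2.0 * math.pi * speed_rpm / 60.0   # shaft angular speed [rad/s]
    return torque_nm * omega

# Illustrative operating points only (not measurements from the paper):
for torque, rpm in [(8.0, 1500), (12.0, 2500), (15.0, 3500)]:
    power_kw = pump_drive_power(torque, rpm) / 1000.0
    print(f"T = {torque:4.1f} N*m, n = {rpm} rpm -> P = {power_kw:.2f} kW")
```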

Keywords: diesel engine, fuel pump, opposing pistons, two-stroke

Procedia PDF Downloads 137
476 Development of Metal-Organic Frameworks-Type Hybrid Functionalized Materials for Selective Uranium Extraction

Authors: Damien Rinsant, Eugen Andreiadis, Michael Carboni, Daniel Meyer

Abstract:

Different types of materials have been developed for solid/liquid uranium extraction processes, such as functionalized organic polymers, hybrid silica, or inorganic adsorbents. In general, these materials exhibit a moderate affinity for uranyl ions and poor selectivity against impurities like iron, vanadium, or molybdenum. Moreover, the lack of structural organization in these materials generates ion diffusion issues inside the material. Therefore, the aim of our study is to develop efficient and organized materials that are stable in the acid media encountered in uranium extraction processes. Metal organic frameworks (MOFs) are hybrid crystalline materials consisting of an inorganic part (clusters or metal ions) and tailored organic linkers connected via coordination bonds. These hierarchical materials have exceptional surface area, thermal stability, and a large variety of tunable structures. However, due to the reversibility of the constitutive coordination bonds, MOFs have moderate stability in strongly complexing or acidic media. Only a few of them are known to be stable in aqueous media, and only one example has been described in strongly acidic media. However, these conditions are very often encountered in the environmental remediation of mine wastewaters. To tackle the challenge of developing MOFs adapted to uranium extraction from acid mine waters, we have investigated the stability of several materials. To ensure good stability, we have synthesized and characterized different materials based on highly coordinated metal clusters, such as LnOFs and zirconium-based materials. Among the latter, the UiO family shows great stability in sulfuric acid media, even in the presence of 1.4 M sodium sulfate at pH 2. However, the stability in phosphoric media is reduced due to the high affinity between zirconium and phosphate ligands. Based on these results, we have developed a tertiary-amine-functionalized MOF, denoted UiO-68-NMe2, particularly adapted to the extraction of the anionic uranyl(VI) sulfate complexes mainly present in acid mine solutions. The adsorption capacity of the material has been determined while varying the total sulfate concentration, contact time, and uranium concentration. The extraction tests revealed different phenomena due to the complexity of the extraction media and the interaction between the MOF and the sulfate anion. Finally, the extraction mechanisms and the interaction between uranyl and the MOF structure have been investigated. The functionalized material UiO-68-NMe2 has been characterized in the presence and absence of uranium by FT-IR, UV, and Raman techniques. Moreover, the stability of the protonated amino-functionalized MOF has been evaluated. The synthesis, characterization, and evaluation of this type of hybrid material, particularly adapted to uranium extraction in sulfuric acid media by an anionic exchange mechanism, pave the way for the development of metal organic frameworks functionalized by other chelating motifs, such as bifunctional ligands showing enhanced affinity and selectivity for uranium in acidic and complexing media. Work in this direction is currently in progress.
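
Adsorption capacities of this kind are often summarized by fitting an isotherm to uptake-versus-concentration data. The sketch below fits a Langmuir isotherm with SciPy; the choice of isotherm and every data point are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Langmuir isotherm q = qmax*K*C / (1 + K*C) to uranium uptake data.
def langmuir(C, qmax, K):
    return qmax * K * C / (1.0 + K * C)

# Hypothetical equilibrium concentrations [mg/L] and MOF uptakes [mg/g]:
C_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_exp = np.array([12.0, 21.0, 38.0, 52.0, 63.0, 70.0])

(qmax, K), _ = curve_fit(langmuir, C_eq, q_exp, p0=[80.0, 0.02])
print(f"fitted qmax = {qmax:.1f} mg/g, K = {K:.3f} L/mg")
```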

Keywords: extraction, MOF, ligand, uranium

Procedia PDF Downloads 157
475 Investigating Sub-daily Responses of Water Flow of Trees in Tropical Successional Forests in Thailand

Authors: Pantana Tor-Ngern

Abstract:

In the global water cycle, tree water use (Tr) contributes largely to evapotranspiration, the total amount of water evaporated from terrestrial ecosystems to the atmosphere, which regulates climate. Tree water use responds to environmental factors, including atmospheric humidity and sunlight (represented by vapor pressure deficit, VPD, and photosynthetically active radiation, PAR, respectively) and soil moisture. In forests, Tr responses to such factors depend on species and their spatial and temporal variations. Tropical forests in Southeast Asia (SEA) have experienced land-use conversion from abandoned agricultural practices, resulting in patches of forest at different stages, including old-growth and secondary forests. Because inherent structures such as canopy height and tree density vary significantly among forests at different stages and can strongly affect their respective microclimates, Tr and its responses to changing environmental conditions may differ among successional forests. Daily and seasonal variations in the environmental factors may exert significant impacts on the respective Tr patterns. Extrapolating Tr data from short periods of days to seasons or years can be complex and is important for estimating long-term ecosystem water use, which often includes normal and abnormal climatic conditions. Thus, this study aims to investigate the diurnal variation of Tr, using measured sap flux density (JS) data, with changes in VPD in eight evergreen tree species in an old-growth forest (hereafter OF; >200 years old) and a young forest (hereafter YF; <10 years old) in Khao Yai National Park, Thailand. The studied species included Syzygium syzygoides, Aquilaria crassna, Cinnamomum subavenium, Nephelium melliferum, and Altingia excelsa in OF, and Syzygium nervosum and Adinandra integerrima in YF. Only Syzygium antisepticum was found in both forest stages. Specifically, hysteresis, which indicates the asymmetrical change of JS in response to changing VPD across the daily timescale, was examined in these species. Results showed no hysteresis in any species in OF, except Altingia excelsa, which exhibited a 3-hour delayed JS response to VPD. In contrast, JS of all species in YF displayed a one-hour delayed response to VPD. The OF species that showed no hysteresis indicated good coupling of their canopies with the atmosphere, facilitating the gas exchange that is essential for tree growth. The delayed responses in Altingia excelsa in OF and in all species in YF were associated with higher JS in the morning than in the afternoon. This implies that these species were sensitive to drying air, closing their stomata relatively rapidly as atmospheric humidity decreased (i.e., as VPD increased). Such behavior is often observed in trees growing in dry environments. This study suggests that detailed investigation of JS at sub-daily timescales is imperative for a better understanding of the mechanistic responses of trees to the changing climate, which will benefit the improvement of earth system models.
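
One simple way to quantify the delayed (hysteretic) response described above is to find the lag at which the correlation between VPD and JS is maximized. The sketch below does this on synthetic hourly series with a built-in 1-hour lag; the data, the noise level, and the maximum lag searched are illustrative assumptions, not the study's measurements or method.

```python
import numpy as np

# Synthetic hourly VPD and sap flux density (JS), with JS lagging VPD by 1 hour.
hours = np.arange(0, 24 * 10)                                  # 10 days of hourly data
vpd = np.clip(np.sin(2 * np.pi * (hours - 6) / 24), 0, None) + 0.05
js = np.roll(vpd, 1) * 0.8 + np.random.normal(0, 0.02, hours.size)

def best_lag(x, y, max_lag=6):
    """Return the lag (in samples) of y behind x that maximizes the correlation."""
    lags = range(0, max_lag + 1)
    corrs = [np.corrcoef(x[:-lag or None], y[lag:])[0, 1] for lag in lags]
    return int(np.argmax(corrs)), max(corrs)

lag, corr = best_lag(vpd, js)
print(f"JS lags VPD by {lag} h (r = {corr:.2f})")
```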

Keywords: sap flow, tropical forest, forest succession, thermal dissipation probe

Procedia PDF Downloads 57
474 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials that are insulating in the bulk but conduct electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, is of a nonlinear interaction with the natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation of surface electromagnetic waves at the boundary of photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. The properties of the photonic band gap material are engineered to operate the sensor at any wavelength and to support a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, making it possible to understand the molecular design structure and to create sensing-chip surfaces with different concentrations of DNA sequences in solution, so that the surface mode resonance can be observed and tracked under the influence of processes that take place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies that are automated, real-time, reliable, reproducible, and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules by refractive index sensing, and of antibody-antigen reactions with DNA or protein binding. Ultimately, molecular frictional properties are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing an environment for investigating changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 85
473 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

Health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test follows an asymptotically standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; that is, the zero-inflation effect is very significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without considering measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will provide statistically more reliable and precise information.
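
The excess-zero comparison quoted above follows from the Poisson zero probability P(Y=0) = exp(-lambda). The sketch below reproduces that check and adds a crude normal-approximation z statistic; this is not the paper's score test, and the sample size n is a hypothetical value chosen so that the expected zero count matches the figure quoted in the abstract.

```python
import math

lam = 1.33                    # fitted Poisson mean from the abstract
n = 590                       # hypothetical number of patients
observed_zeros = 206          # observed zero consultations (from the abstract)

p0 = math.exp(-lam)                       # P(Y = 0) under Poisson(lambda)
expected_zeros = n * p0
print(f"expected zeros under Poisson: {expected_zeros:.0f}, observed: {observed_zeros}")

# Crude binomial normal-approximation z statistic for the excess of zeros
# (a rough diagnostic only, not the measurement-error score test of the paper).
var_zeros = n * p0 * (1.0 - p0)
z = (observed_zeros - expected_zeros) / math.sqrt(var_zeros)
print(f"z = {z:.2f}  (a large positive z suggests zero inflation)")
```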

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 279
472 Macroscopic Support Structure Design for the Tool-Free Support Removal of Laser Powder Bed Fusion-Manufactured Parts Made of AlSi10Mg

Authors: Tobias Schmithuesen, Johannes Henrich Schleifenbaum

Abstract:

The additive manufacturing process laser powder bed fusion (LPBF) offers many advantages over conventional manufacturing processes. For example, almost any complex part can be produced, such as topologically optimized lightweight parts, which would be inconceivable with conventional manufacturing processes. A major challenge posed by the LPBF process, however, is, in most cases, the need to use and remove support structures on critically inclined part surfaces (α < 45° relative to the substrate plate). These are mainly used for dimensionally accurate mapping of part contours and to reduce distortion by absorbing process-related internal stresses. Furthermore, they serve to transfer the process heat to the substrate plate and are, therefore, indispensable for the LPBF process. A major challenge for the economical use of the LPBF process in industrial process chains is currently still the high manual effort involved in removing support structures. According to the state of the art (SoA), the parts are usually treated with simple hand tools (e.g., pliers, chisels) or by machining (e.g., milling, turning). New automatable approaches are the removal of support structures by means of wet chemical ablation and thermal deburring. According to the state of the art, support structures are essentially adapted to the LPBF process and not to potential post-processing steps. The aim of this study is the determination of support structure designs that are adapted to the mentioned post-processing approaches. In the first step, the essential boundary conditions for complete removal by means of the respective approaches are identified. Afterward, a representative demonstrator part with various macroscopic support structure designs will be LPBF-manufactured and tested with regard to complete powder and support removability. Finally, based on the results, potentially suitable support structure designs for the respective approaches will be derived. The investigations are carried out using the example of the aluminum alloy AlSi10Mg.

Keywords: additive manufacturing, laser powder bed fusion, laser beam melting, selective laser melting, post processing, tool-free, wet chemical ablation, thermal deburring, aluminum alloy, AlSi10Mg

Procedia PDF Downloads 87
471 Dependence of Densification, Hardness and Wear Behaviors of Ti6Al4V Powders on Sintering Temperature

Authors: Adewale O. Adegbenjo, Elsie Nsiah-Baafi, Mxolisi B. Shongwe, Mercy Ramakokovhu, Peter A. Olubambi

Abstract:

The sintering step in powder metallurgy (P/M) processes is very sensitive, as it determines to a large extent the properties of the final component produced. Spark plasma sintering has been extensively used over the past decade in consolidating a wide range of materials, including metallic alloy powders. This novel, non-conventional sintering method has proven to be advantageous, offering full densification of materials, high heating rates, low sintering temperatures, and short sintering cycles compared with conventional sintering methods. Ti6Al4V has been adjudged the most widely used α+β alloy due to its impressive mechanical performance in service environments, especially in the aerospace and automobile industries, being a light metal alloy with the capacity for the fuel efficiency needed in these industries. The P/M route has been a promising method for the fabrication of parts made from Ti6Al4V alloy due to its reduction of cost and material loss and its ability to produce near-net and intricate shapes. However, the use of this alloy has been largely limited owing to its relatively poor hardness and wear properties. The effect of sintering temperature on the densification, hardness, and wear behavior of spark plasma sintered Ti6Al4V powders was investigated in this study. Sintering of the alloy powders was performed in the 650–850°C temperature range at a constant heating rate, applied pressure, and holding time of 100°C/min, 50 MPa, and 5 min, respectively. Density measurements were carried out according to Archimedes' principle, and microhardness tests were performed on sectioned as-polished surfaces at a load of 100 gf and a dwell time of 15 s. Dry sliding wear tests were performed at sliding loads of 5, 15, 25, and 35 N using the ball-on-disc tribometer configuration with WC as the counterface material. Microstructural characterization of the sintered samples and wear tracks was carried out using SEM and EDX techniques. The density and hardness of the sintered samples increased with increasing sintering temperature. Near full densification (99.6% of the theoretical density) and a Vickers micro-indentation hardness of 360 HV were attained at 850°C. The coefficient of friction (COF) and wear depth improved significantly with increased sintering temperature under all the loading conditions examined, except at 25 N, indicating better mechanical properties at high sintering temperatures. Worn surface analyses showed that the wear mechanism was a synergy of adhesive and abrasive wear, although the former was prevalent.
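
The relative density reported above is typically obtained from Archimedes' principle by weighing the sample in air and suspended in water. The sketch below shows that calculation; the masses are hypothetical, and the theoretical density of 4.43 g/cm3 for Ti6Al4V and the neglect of open porosity are stated assumptions.

```python
# Relative density of a sintered Ti6Al4V sample from Archimedes' principle.
rho_water = 0.9978          # g/cm^3 at ~22 degC
rho_theoretical = 4.43      # commonly quoted theoretical density of Ti6Al4V [g/cm^3]

m_air = 8.500               # mass in air [g]            (hypothetical)
m_water = 6.578             # mass suspended in water [g] (hypothetical)

rho_bulk = m_air * rho_water / (m_air - m_water)      # neglecting open porosity
relative_density = 100.0 * rho_bulk / rho_theoretical
print(f"bulk density = {rho_bulk:.3f} g/cm^3, relative density = {relative_density:.1f} %")
```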

Keywords: hardness, powder metallurgy, spark plasma sintering, wear

Procedia PDF Downloads 265
470 The Impact of Shifting Trading Pattern from Long-Haul to Short-Sea to the Car Carriers’ Freight Revenues

Authors: Tianyu Wang, Nikita Karandikar

Abstract:

The uncertainty around the cost, safety, and feasibility of decarbonized shipping fuels has made it increasingly complex for shipping companies to set pricing strategies and forecast their freight revenues going forward. The increase in green fuel surcharges will ultimately influence automobile consumer prices. Auto shipping demand (ton-miles) has been gradually shifting from long-haul to short-sea trade over the past years, following the relocation of original equipment manufacturer (OEM) manufacturing to regions such as South America and Southeast Asia. The objective of this paper is twofold: 1) to investigate the development of car carriers' freight revenue over the years as the trade pattern gradually shifts towards short-sea exports, and 2) to empirically identify the quantitative impact of such trade pattern shifting mainly on freight rates, but also on vessel size, fleet size, and Green House Gas (GHG) emissions in Roll-on/Roll-off (Ro-Ro) shipping. In this paper, a model for analyzing and forecasting ton-miles and freight revenues for the trade routes AS-NA (Asia to North America), EU-NA (Europe to North America), and SA-NA (South America to North America) is established by deploying Automatic Identification System (AIS) data and the financial results of a selected car carrier company. More specifically, Wallenius Wilhelmsen Logistics (WALWIL), the Norwegian Ro-Ro carrier listed on the Oslo Stock Exchange, is selected as the case study company in this paper. AIS-based ton-mile datasets of WALWIL vessels sailing into the North America region from three different origins (Asia, Europe, and South America), together with WALWIL's quarterly freight revenues as reported in trade segments, will be investigated and compared for the past five years (2018-2022). Furthermore, ordinary least squares (OLS) regression is utilized to construct the ton-mile demand and freight revenue forecasts. The determinants of trade pattern shifting, such as import tariffs following the China-US trade war and fuel prices following the 0.1% Emission Control Areas (ECA) zone requirement after IMO2020, will be set as key variable inputs to the forecasting model. The model will be tested on another newly listed Norwegian car carrier, Hoegh Autoliner, to forecast its 2022 financial results and to validate its accuracy against the actual results. GHG emissions on the three routes will be compared and discussed based on a constant emission-per-mile assumption and voyage distances. Our findings will provide important insights about 1) the trade-off between revenue reduction and energy saving with the new ton-mile pattern and 2) how the shifting trade flows would influence the future need for vessel and fleet size.
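
A minimal sketch of the OLS step described above: regressing quarterly freight revenue on AIS-derived ton-miles and a fuel-price determinant, then producing a point forecast. The choice of regressors and every number below are synthetic placeholders, not the WALWIL data.

```python
import numpy as np

# Synthetic quarterly observations (placeholders, not company data):
ton_miles = np.array([4.1, 4.3, 3.9, 4.6, 4.8, 5.0, 4.7, 5.2])    # billion ton-miles
fuel_idx = np.array([1.00, 1.05, 0.98, 1.10, 1.22, 1.30, 1.25, 1.40])
revenue = np.array([210, 222, 205, 240, 262, 275, 260, 290])       # million USD

# Ordinary least squares: revenue ~ intercept + ton_miles + fuel_idx
X = np.column_stack([np.ones_like(ton_miles), ton_miles, fuel_idx])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("intercept, ton-mile coeff, fuel coeff:", np.round(beta, 2))

# Point forecast for a hypothetical next quarter
x_new = np.array([1.0, 5.4, 1.45])
print("forecast revenue:", round(float(x_new @ beta), 1), "million USD")
```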

Keywords: AIS, automobile exports, maritime big data, trade flows

Procedia PDF Downloads 118
469 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 after three iterations. The cost (the difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
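
The following minimal sketch illustrates the core of such a classifier: six features per sequence window, a sigmoid activation, gradient updates of the parameter vector, and a threshold on the predicted probability. The synthetic data, learning rate, fixed 0.5 threshold (the study uses dynamic thresholding), and the illustrative weights are all assumptions, not the PNRI itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 6
X = rng.normal(size=(n, d))                                   # six synthetic features per window
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])       # illustrative generating weights
y = (1 / (1 + np.exp(-(X @ true_w))) > 0.5).astype(float)     # synthetic coding / non-coding labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the logistic log-likelihood cost
w, b, lr = np.zeros(d), 0.0, 0.1
for epoch in range(400):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / n
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (sigmoid(X @ w + b) >= 0.5).astype(float)              # fixed threshold for this sketch
print("training accuracy:", round((pred == y).mean(), 3), "learned weights:", np.round(w, 2))
```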

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 62
468 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall

Authors: Sylvain Perrin, Thomas Saillour

Abstract:

The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA's hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scour over time in front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and the water and sand overtopping discharges over the wall. The beach consisted of fine sand and was approximately 50 m wide above mean sea level (MSL). Seabed slopes were in the range of 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm, respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo / piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1 400 kg/m3 and a median diameter (d50) of 0.3 mm. Quantitative assessments of the seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated the occurrence of numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive pressure spikes of up to 127 kilonewtons were measured. A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is imputable to increased wave reflection (reflection coefficient of 25.7 - 30.4% vs. 23.4 - 28.6%). This paper presents a methodology for the setup and operation of a physical model in order to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of this methodology versus others, notably regarding structural peculiarities and model effects.
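
Model-to-prototype conversions for a 1:18 hydraulic model of this kind are usually based on Froude similitude; the sketch below shows the standard scale ratios under that assumption (stated here as an assumption, not a detail taken from the paper), with the same fluid in model and prototype and a hypothetical measured force.

```python
import math

L_r = 18.0                          # geometric scale (prototype / model)

time_ratio = math.sqrt(L_r)         # Froude scaling of time
velocity_ratio = math.sqrt(L_r)     # Froude scaling of velocity
force_ratio = L_r ** 3              # forces scale with L^3 for the same fluid
pressure_ratio = L_r                # pressures scale with L

model_force_N = 45.0                # hypothetical force measured on the model [N]
print(f"time ratio = {time_ratio:.2f}, velocity ratio = {velocity_ratio:.2f}")
print(f"prototype force = {model_force_N * force_ratio / 1000:.1f} kN")
```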

Keywords: beach, impacts, scour, seawall, waves

Procedia PDF Downloads 148
467 Networked Media, Citizen Journalism and Political Participation in Post-Revolutionary Tunisia: Insight from a European Research Project

Authors: Andrea Miconi

Abstract:

The research focuses on the results of the Tempus European Project eMEDia dedicated to cross-media journalism. The project is funded by the European Commission and involves four European partners - IULM University, Tampere University, the University of Barcelona, and the Mediterranean network Unimed - and three Tunisian universities – IPSI La Manouba, Sfax and Sousse – along with the Tunisian Ministry for Higher Education and the National Syndicate of Journalists. The focus on the Tunisian situation is mainly due to the role played by digital activists in the country's recent history. The research is dedicated to the relationship between political participation, news-making practices, and the spread of social media, as it is affecting Tunisian society. As we know, during the Arab Spring Tunisia was widely considered a laboratory for analyzing the use of new technologies for political participation. Nonetheless, the literature about the Arab Spring actually fell short of explaining the genesis of the phenomenon, on the one hand by isolating technologies as a causal factor in the spread of demonstrations, and on the other by analyzing the North African condition through a biased perspective. Nowadays, it is interesting to focus on the consolidation of the information environment three years after the uprisings. What is relevant is that only a close, in-depth analysis of Tunisian society is able to provide an explanation of its history, and namely of the part played by digital media in the overall evolution of the political system. That is why the research is based on different methodologies: a desk stage, interviews, and in-depth analysis of communication practices. Networked journalism is the condition determined by technological innovation in news-making activities: a condition in which the professional journalist can no longer be considered the only player in the information arena, and new skills must be developed. Along with democratization, nonetheless, so-called citizen journalism is also likely to produce some ambiguous effects, such as the lack of professional standards and the spread of information cascades, which may prove particularly dangerous in an evolving media market such as the Tunisian one. This is why, according to the project, a new professional profile must be defined, one that is able to manage this new condition and that can hardly be reduced to the parameters of traditional journalistic work. Rather than simply using new devices for news visualization, communication professionals must also be able to dialogue with all the new players and to accept the decentralized nature of digital environments. This networked nature of news-making seemed to emerge during the Tunisian revolution, when bloggers, journalists, and activists used to retweet each other. Nonetheless, this intensification of communication exchange was inspired by the political climax of the uprising, while all media, by definition, are also supposed to have effects on people's state of mind, culture, and daily life routines. That is why it is worth analyzing the consolidation of these practices in a normal, post-revolutionary situation.

Keywords: cross-media, education, Mediterranean, networked journalism, social media, Tunisia

Procedia PDF Downloads 197
466 Determination of Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis

Authors: Bangkit Adhi Nugraha, Bambang Suripto

Abstract:

In September 2016, Bank Indonesia (BI) released regulation No. 18/18/PBI/2016, which permits bank clients to use the FX option call spread on USD/IDR. Basically, this product is a combination in which the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside at a cheap premium cost. BI classifies this product as a structured product, i.e., a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increasing demand from Indonesian firms for FX hedging through derivatives to protect their foreign currency assets or liabilities against market risk. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, mostly swap products (FX forward, FX swap, cross currency swap). Swap pricing is driven by the interest rate differential of the currency pair. The cost of the swap product is about 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive to hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than the swap, most Indonesian firms use NDF FX call spreads on USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is the main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to perform dynamic hedging. That means that if the call spread buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, then the buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1'). This could double or further multiply the premium cost of the call spread and defeat the hedging buyer's purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the best optimum strike price before entering into the transaction. To help hedging buyers find the optimum strike price and avoid expensive multiple premium costs, we examine ten years (2005-2015) of historical USD/IDR volatility data and compare them with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (a common formula for FX option pricing). We use statistical tools to analyze data correlation, understand the nature of the call spread price movement over ten years, and determine the factors affecting price movement. We select ranges of strike prices and tenors and calculate the probability of dynamic hedging occurring and how much it costs. We found that the USD/IDR currency pair is too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data and obtained a small RMS error. The study results can be used to understand the nature of the FX call spread and to determine the optimum strike price for a hedging plan.
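
For reference, the Garman–Kohlhagen price of a European FX call, and the resulting call spread premium, can be computed as below. The sketch implements the standard closed-form formula; the spot, interest rates, strikes, and tenor are hypothetical placeholders, not market data from the study.

```python
import math

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gk_call(S, K, T, r_d, r_f, sigma):
    """Garman-Kohlhagen price of a European FX call (domestic currency per unit foreign)."""
    d1 = (math.log(S / K) + (r_d - r_f + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-r_f * T) * norm_cdf(d1) - K * math.exp(-r_d * T) * norm_cdf(d2)

# Illustrative call spread on USD/IDR (all inputs are hypothetical placeholders):
S, T, sigma = 13500.0, 1.0, 0.13          # spot, 1-year tenor, 13% volatility
r_idr, r_usd = 0.0675, 0.0150             # domestic (IDR) and foreign (USD) rates
K1, K2 = 13800.0, 14800.0                 # buy call at K1, sell call at K2

spread = gk_call(S, K1, T, r_idr, r_usd, sigma) - gk_call(S, K2, T, r_idr, r_usd, sigma)
print(f"call spread premium ~ {spread:.0f} IDR per USD ({100 * spread / S:.2f}% of spot)")
```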

Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen Model on FX Option USD/IDR, Bank Indonesia Regulation no.18/18/PBI/2016

Procedia PDF Downloads 373
465 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources for irrigation has increased significantly during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and hence an unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, regarding the spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and the MT3D solute transport model were used for prediction of the existing and future groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) for the calibration and validation periods were 0.89 and 0.98, respectively, which indicates a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was run for a future scenario up to 2030, assuming that there would be no significant change in climate and that the groundwater abstraction rate would increase gradually. The model predicted that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030 good quality water would be reduced by 21.4%, while marginal and hazardous quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorates with the depth of the water table, i.e., TDS increases with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (Aquifer Storage and Recovery) wells, etc., should be integrated to improve groundwater management for higher crop production in salt-affected soils.
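
The calibration statistics quoted above (coefficient of determination and model efficiency) can be computed from paired observed and simulated series as in the sketch below. The Nash–Sutcliffe definition of model efficiency is assumed here, and the water-table depths are synthetic placeholders, not the study's data.

```python
import numpy as np

observed = np.array([5.2, 5.5, 5.9, 6.1, 6.4, 6.8, 7.0, 7.3])    # observed depth to water [m]
simulated = np.array([5.1, 5.6, 5.8, 6.2, 6.5, 6.7, 7.1, 7.2])   # PMWIN-simulated depth [m]

# Coefficient of determination (R^2) from the Pearson correlation
r = np.corrcoef(observed, simulated)[0, 1]
r_squared = r ** 2

# Nash-Sutcliffe model efficiency
nse = 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

print(f"R^2 = {r_squared:.3f}, Nash-Sutcliffe efficiency = {nse:.3f}")
```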

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 372
464 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of a flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experiments. As the jet approaches the end of the basin, the differences between the numerical and experimental results increase. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between the numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
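
The comparison criterion described above, the trajectory profile of the jet, is commonly quantified as a root-mean-square error between the simulated and measured nappe elevations. The sketch below shows that calculation; the profile values are synthetic placeholders, not data from the study.

```python
import numpy as np

# Synthetic nappe (jet) trajectory profiles: elevation vs. distance from the crest.
x = np.linspace(0.0, 1.0, 11)                                       # distance from crest [m]
z_experiment = 0.05 - 0.5 * 9.81 * (x / 2.0) ** 2                   # "measured" elevation [m]
z_numerical = z_experiment + np.random.normal(0, 0.01, x.size)      # "simulated" elevation [m]

rmse = np.sqrt(np.mean((z_numerical - z_experiment) ** 2))
print(f"RMSE between numerical and experimental nappe profiles: {rmse * 1000:.1f} mm")
```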

Keywords: circular vertical spillway, numerical model, boundary conditions

Procedia PDF Downloads 80
463 Effects of Additional Pelvic Floor Exercise on Sexual Function, Quality of Life and Pain Intensity in Subjects with Chronic Low Back Pain

Authors: Emel Sonmezer, Hayri Baran Yosmaoglu

Abstract:

The negative impact of chronic pain syndromes on sexual function has been reported in several studies; however, the influence of treatment strategies on sexual dysfunction has not been evaluated widely. The aim of this study was to determine the effects of pelvic floor exercise on sexual dysfunction in female patients with chronic low back pain. Forty-two patients with chronic low back pain were enrolled in this study. Subjects were divided into two groups. Group 1 received conventional physiotherapy consisting of heat therapy, ergonomic education, and Williams flexion exercises for 6 weeks. Group 2 received pelvic floor exercises in addition to conventional physiotherapy. The Female Sexual Function Index (FSFI) was used for the assessment of sexual function. Pain intensity was assessed with a Visual Analogue Scale. Quality of life was assessed with the World Health Organization Quality of Life Scale. All measurements were taken before and after treatment. In the conventional physiotherapy group, there were significant improvements in pain intensity (p=0.003), the physical health (p=0.011) and psychological health (p=0.042) subscales of the quality of life scale, and the arousal (p=0.042), lubrication (p=0.028), and pain (p=0.034) subscales of the FSFI. In the additional pelvic floor exercise group, there were significant improvements in pain intensity (p=0.005), the physical health (p=0.012) and psychological health (p=0.039) subscales of the quality of life scale, and the arousal (p=0.024), lubrication (p=0.011), orgasm (p=0.035), and pain (p=0.015) subscales and total score (p=0.016) of the FSFI. The total FSFI score (p=0.025) and the orgasm subscale of the FSFI (p=0.017) were significantly higher in the additional pelvic floor exercise group than in the conventional physiotherapy group. The outcome of this study suggests that conventional physiotherapy may contribute to improving pain, quality of life, and some parameters of sexual function in patients with low back pain. Although additional pelvic floor exercise did not reveal a greater treatment effect in terms of quality of life and pain intensity, it led to significant improvement in sexual function. It is recommended that pelvic floor exercises be added to treatment programs in order to manage sexual dysfunction more effectively in patients with chronic low back pain.

Keywords: physiotherapy, chronic pain, sexual dysfunction, pelvic floor

Procedia PDF Downloads 262
462 Swedish–Nigerian Extrusion Research: Channel for Traditional Grain Value Addition

Authors: Kalep Filli, Sophia Wassén, Annika Krona, Mats Stading

Abstract:

The food security challenge and the growing population in Sub-Saharan Africa center on agricultural transformation, where about 70% of the population is directly involved in farming. Research input can create economic opportunities, reduce malnutrition and poverty, and generate faster, fairer growth. Africa discards $4 billion worth of grain annually due to pre- and post-harvest losses. Grains and tubers play a central role in the food supply in the region, but their production has generally lagged behind because there has been no robust scientific input to meet the challenge. The African grains are still chronically underutilized to the detriment of the well-being of the people of Africa and elsewhere. The major reason for their underutilization is that they are under-researched. Any commitment by the scientific community to intervene needs creative solutions focused on innovative approaches that will support economic growth. In order to overcome this hurdle, co-creation activities and initiatives are necessary. An example of such an initiative has been started by Modibbo Adama University of Technology Yola, Nigeria, and RISE (The Research Institutes of Sweden), Gothenburg, Sweden. An exchange of expertise in research activities is in place under the 'Traditional Grain Network programme' as a channel for adding value to agricultural commodities in the region. Process technologies such as extrusion offer the possibility of creating products in the food and feed sectors with better storage stability, added value, lower transportation cost, and new markets. The Swedish–Nigerian initiative has focused on the development of high-protein pasta. Dry microscopy of the pasta samples shows a continuous structural framework of protein and starch matrix. The water absorption index (WAI) results showed that water was absorbed steadily and followed the master curve pattern. The WAI values ranged between 250 and 300%. In all aspects, the water absorption history was within a narrow range for all eight samples. The total cooking time for all eight samples in our study ranged between 5 and 6 minutes, with the dry sample diameters ranging between 1.26 and 1.35 mm. The percentage water solubility index (WSI) ranged from 6.03 to 6.50%, which is a narrow range; the cooking loss, which is a measure of WSI, is considered one of the main parameters taken into consideration during the assessment of pasta quality. The protein content of the samples ranged between 17.33 and 18.60%. The firmness of the cooked pasta ranged from 0.28 to 0.86 N. The results show that an increase in the ratio of cowpea flour and in the level of pregelatinized cowpea tends to increase the firmness of the pasta. The breaking strength, an index of the toughness of the dry pasta, ranged from 12.9 to 16.5 MPa.
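
A short sketch of how WAI and WSI figures of this kind are typically computed is given below (an Anderson-type centrifugation method is assumed here); the dry-sample, gel, and soluble weights are hypothetical, chosen only so the outputs fall within the ranges quoted above.

```python
# Water absorption index (WAI) and water solubility index (WSI) from a
# centrifugation test on a ground extrudate sample (all weights hypothetical).
dry_sample_g = 2.50    # weight of dry ground sample
gel_weight_g = 7.10    # weight of sediment (gel) after centrifugation
solubles_g = 0.16      # weight of dried solids recovered from the supernatant

wai = 100.0 * gel_weight_g / dry_sample_g    # expressed in %, as in the abstract
wsi = 100.0 * solubles_g / dry_sample_g

print(f"WAI = {wai:.0f} %, WSI = {wsi:.1f} %")
```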

Keywords: cowpea, extrusion, gluten free, high protein, pasta, sorghum

Procedia PDF Downloads 186
461 Perception of Corporate Social Responsibility and Enhancing Compassion at Work through Sense of Meaningfulness

Authors: Nikeshala Weerasekara, Roshan Ajward

Abstract:

In the contemporary business environment, given the stringent scrutiny of corporate behavior, organizations are under pressure to develop and implement solid overarching Corporate Social Responsibility (CSR) strategies. In that milieu, in order to differentiate themselves from competitors and maintain stakeholder confidence, banks spend millions of dollars on CSR programmes. However, knowledge of how non-western bank employees perceive such activities is inconclusive. At the same time, only recently have researchers shifted their focus to the positive effects of compassion at work and the organizational conditions under which it arises. Nevertheless, the mediation mechanisms between CSR and compassion at work have not been adequately examined, leaving a vacuum to be explored. Although finding a purpose in work that is greater than the extrinsic outcomes of the work is important to employees, meaningful work has not been examined adequately. Thus, in addition to examining the direct relationship between CSR and compassion at work, this study examined the mediating capability of meaningful work between these variables. Specifically, the researcher explored how CSR enables employees to sense work as meaningful, which in turn would enhance their level of compassion at work. Hypotheses were developed to examine the direct relationship between CSR and compassion at work and the mediating effect of meaningful work on this relationship. Both Social Identity Theory (SIT) and Social Exchange Theory (SET) were used to theoretically support the relationships. The sample comprised 450 respondents covering different levels of the bank. A convenience sampling strategy was used to secure responses from 13 local licensed commercial banks in Sri Lanka. Data were collected using a structured questionnaire, which was developed based on a comprehensive review of the literature and refined using both expert opinions and a pilot survey. Structural equation modeling using Smart Partial Least Squares (PLS) was utilized for data analysis. Findings indicate a positive and significant (p < .05) relationship between CSR and compassion at work. It was also found that meaningful work partially mediates the relationship between CSR and compassion at work. Based on the findings, it is concluded that bank employees' perception of CSR engagement not only directly influences compassion at work but also affects it indirectly through meaningful work. This implies that employees value working for a socially responsible bank because it creates greater meaningfulness of work and encourages them to remain with the organization, which in turn triggers a higher level of compassion at work. By utilizing both SIT and SET to explain the relationships between CSR and compassion at work, the study makes a theoretical contribution: it enhances the existing literature on CSR and compassion at work and adds insights into the mediating capability of psychological variables such as meaningful work. The study is expected to have significant policy implications for increasing compassion at work: managers must understand the importance of including CSR activities in their strategy in order to thrive. Finally, it provides evidence of the suitability of Smart PLS for testing models with mediating relationships involving non-normal data.
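
To illustrate the mediation logic only (the study itself used PLS-based structural equation modeling, not the ordinary regressions below), the following sketch estimates the indirect effect of CSR on compassion at work through meaningful work with a nonparametric bootstrap. All data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 450
csr = rng.normal(size=n)
meaningful = 0.5 * csr + rng.normal(scale=0.8, size=n)                        # path a
compassion = 0.3 * csr + 0.6 * meaningful + rng.normal(scale=0.8, size=n)     # paths c', b

def ols_slopes(y, *xs):
    """Return the OLS slope coefficients of y on the given predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

indirect = []
for _ in range(2000):                                    # nonparametric bootstrap
    idx = rng.integers(0, n, n)
    a = ols_slopes(meaningful[idx], csr[idx])[0]                              # CSR -> meaningful work
    b = ols_slopes(compassion[idx], csr[idx], meaningful[idx])[1]             # meaningful work -> compassion
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect a*b: mean = {np.mean(indirect):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```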

Keywords: compassion at work, corporate social responsibility, employee commitment, meaningful work, positive affect

Procedia PDF Downloads 123
460 Mental Balance, Emotional Balance, and Stress Management: The Role of Ancient Vedic Philosophy from India

Authors: Emily Schulz

Abstract:

The ancient Vedic culture of India had traditions that supported all aspects of health, including psychological health, and these traditions remain relevant in the current era. They have been compiled by Professor Dr. Purna, a rare Himalayan Master, into the Purna Health Management System (PHMS). The PHMS is a unique, holistic, and integrated approach to health management. It comprises four key factors: Health, Fitness, and Nutrition (HF&N); Life Balance (Stress Management) (LB-SM); Spiritual Growth and Development (SG&D); and Living in Harmony with the Natural Environment (LHWNE). The purpose of the PHMS is to give people the tools to take responsibility for managing their own holistic health and wellbeing. A study using a cross-sectional mixed-methods anonymous online survey was conducted during 2017-2018. Adult students of Professor Dr. Purna were invited to participate through announcements made at various events he held throughout the globe. Follow-up emails with consenting language were sent to interested parties, providing them with a link to the survey. Participation in the study was completely voluntary, and no incentives were given for responding to the survey. The overall aim of the study was to investigate the effectiveness of implementation of the PHMS on practitioners' emotional balance. However, given the holistic nature of the PHMS, survey questions also inquired about participants' physical health, stress level, ability to manage stress, and wellbeing using Likert scales. The survey also included some open-ended questions to gain an understanding of the participants' experiences with the PHMS relative to their emotional balance. In total, 52 people out of 253 potential respondents participated in the study. Data were analyzed using the nonparametric Spearman's rho correlation coefficient (rs), since the data were not normally distributed. Statistical significance was set at p < .05. Results of the study suggested moderate to strong statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and self-reported mental/emotional health (HF&N rs = 0.42; LB-SM rs = 0.54; SG&D rs = 0.49; LHWNE rs = 0.45). Results also demonstrated statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and their self-reported ability to manage stress (HF&N rs = 0.44; LB-SM rs = 0.55; SG&D rs = 0.39; LHWNE rs = 0.55). Additionally, those who reported better physical health also reported better mental/emotional health (rs = 0.49, p < .001) and a better ability to manage stress (rs = 0.46, p < .001). The findings of this study suggest that wisdom from the ancient Vedic culture may be useful for those working in psychology and related fields who would like to assist clients in calming their mind and emotions and managing their stress levels.
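
A Spearman rank correlation of the kind reported above can be reproduced with SciPy. The sketch below uses simulated Likert-style ratings and hypothetical variable names (phms_use, emotional_health), so the numbers will not match the study; it only shows the analysis step.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 52  # number of respondents in the study

# Hypothetical 5-point Likert responses with a built-in monotone association
phms_use = rng.integers(1, 6, n)                                   # frequency of PHMS practice
emotional_health = np.clip(phms_use + rng.integers(-1, 2, n), 1, 5)

rho, p = spearmanr(phms_use, emotional_health)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # compared against alpha = .05
```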

Keywords: balanced emotions, balanced mind, stress management, Vedic philosophy

Procedia PDF Downloads 112
459 The Effect of Paper Based Concept Mapping on Students' Academic Achievement and Attitude in Science Education

Authors: Orhan Akınoğlu, Arif Çömek, Ersin Elmacı, Tuğba Gündoğdu

Abstract:

The concept map is known to be a powerful tool for organizing the ideas and concepts of an individual's mind. It is a kind of visual map that illustrates the relationships between the concepts of a certain subject. The effect of concept mapping on cognitive and affective qualities has been a research topic among educational researchers for the last decades, and educators want to utilize it both as an instructional tool and as an assessment tool in classes. For that reason, this study aimed to determine the effect of concept mapping as a learning strategy in science classes on students' academic achievement and attitude. The research employed a randomized pre-test post-test control group design. Data were collected from 60 sixth-grade students of a randomly selected primary school in Turkey. The sixth-grade classes of the school were analyzed according to students' academic achievement, science attitude, gender, mathematics and science course grades, and GPAs before the implementation. Two of the classes were found to be equivalent (t = 0.983, p > 0.05); one was randomly assigned as the experimental group and the other as the control group. During a 5-week period, the experimental group students (N = 30) used the paper-based concept mapping method, while the control group students (N = 30) were taught with the traditional approach according to the science and technology education curriculum for the light and sound subject. Both groups were taught by the same teacher, who is experienced in using concept mapping in science classes. Before the implementation, the teacher explained the theory of concept maps and, for two hours, showed the experimental group students how to create paper-based concept maps individually. For the following two hours she asked them to create concept maps related to their former science subjects and gave them feedback by reviewing their maps, to be sure that they could create them during the implementation. The data were collected with a science achievement test, a science attitude scale, and a personal information form. The science achievement test and science attitude scale were administered as pre-test and post-test, while the personal information form was administered only once. The reliability coefficient of the achievement test was KR-20 = 0.76, and Cronbach's alpha of the attitude scale was 0.89. SPSS statistical software was used to analyze the data. According to the results, there was a statistically significant difference between the experimental and control groups for academic achievement but not for attitude. The experimental group had significantly greater gains on the academic achievement test than the control group (t = 0.02, p < 0.05). The findings show that paper-and-pencil concept mapping can be used as an effective method for students' academic achievement in science classes. The results have implications for further research.
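
The comparison of achievement gains between the two groups can be illustrated with an independent-samples t-test in SciPy. This is a sketch on simulated gain scores with hypothetical array names, not the study data, and only demonstrates the test used to compare the groups.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# Hypothetical post-test minus pre-test gains for 30 students per group
gain_experimental = rng.normal(loc=12, scale=5, size=30)  # concept-mapping class
gain_control = rng.normal(loc=8, scale=5, size=30)        # traditional instruction

t, p = ttest_ind(gain_experimental, gain_control)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a significant difference in gains
```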

Keywords: concept mapping, science education, constructivism, academic achievement, science attitude

Procedia PDF Downloads 401
458 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori

Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo

Abstract:

The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the blood stream. Phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilize and disintegrate liposomes, with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, studies concerning various dosage forms have been conducted. Certain drawbacks of conventional liposomes (unilamellar and multilamellar), such as low entrapment efficiency, poor stability, and release of drug after a single breach in the external membrane, have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome consisting of smaller liposomes enveloped in lipid bilayers. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL is thought to be more effective than ordinary liposomes. This concept is also supported by in vitro release characteristics, i.e., DL formation inhibited the release of drugs encapsulated in the inner liposomes. DL consists of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV); therefore, DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV), and small unilamellar vesicles (SUV). However, for these liposomes, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. Preparation of DL involves two steps: first, formation of primary (inner) liposomes containing one drug by the thin film hydration method, and then addition of the suspension of inner liposomes onto a thin film of lipid containing the other drug. The formation of DL was characterized by optical and transmission electron microscopy. The DL-bacterial interaction was quantified in terms of percent growth inhibition (%GI) on the reference strain H. pylori ATCC 26695. To confirm the specific binding efficacy of DL to the H. pylori PE surface receptor, we performed an agglutination assay. Agglutination in the DL-treated H. pylori suspension suggested selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for treatment of an H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as the anti-H. pylori agent and ranitidine bismuth citrate (RBC) as the antisecretory agent was selected for the study, with the expectation that this dual-drug delivery approach will exert acceptable anti-H. pylori activity.

Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems

Procedia PDF Downloads 202
457 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim at analyzing how the public sector should respond to higher emissions and what the direct costs of these policies might be. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie in the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
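
The negative, non-linear relation between output and the pollution stock described above can be illustrated with a simple static regression. The sketch below uses simulated data and a quadratic term in the CO₂ stock, which is only one possible specification assumed for illustration, not the authors' estimation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200

# Simulated data: log output falls non-linearly as the CO2 stock grows
co2_stock = rng.uniform(1.0, 10.0, n)
log_output = 5.0 - 0.02 * co2_stock - 0.01 * co2_stock**2 + rng.normal(scale=0.1, size=n)

X = sm.add_constant(np.column_stack([co2_stock, co2_stock**2]))
model = sm.OLS(log_output, X).fit()
print(model.params)  # negative coefficients on both terms capture the convex output loss
```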

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 152
456 Economic Impacts of Nitrogen Fertilizer Use into Tropical Pastures for Beef Cattle in Brazil

Authors: Elieder P. Romanzini, Lutti M. Delevatti, Rhaony G. Leite, Ricardo A. Reis, Euclides B. Malheiros

Abstract:

Brazilian beef cattle production systems are an important source of profitability and contribute to the national gross domestic product. The main characteristic of these systems is the use of forage as the exclusive feed source. Forage utilization has given owners the false impression of low production costs. However, this low cost is often accompanied by low profit and, many times, by poor animal performance indices, which can result in changes of activity or even in the sale of land. Aiming to evaluate the economic impacts on Brazilian beef cattle systems, four nitrogen fertilizer (N) application levels (0, 90, 180, and 270 kg per hectare [kg.ha-1]) were evaluated. The research was conducted during 2015 at the Forage Crops and Grasslands section of São Paulo State University, “Júlio de Mesquita Filho” (Unesp) (Jaboticabal, São Paulo, Brazil). Pastures were seeded with Brachiaria brizantha Stapf. ‘Marandu’ (Palisade grass) and managed under a continuous grazing system with variable stocking rate and sward height maintained at 25 cm. The economic evaluation covered the rearing and finishing phases, and the cash flows within each phase were evaluated at the different N levels. The economic evaluation considered: effective operating cost (CEO), total operating cost (CTO), gross revenue (GR), operating profit (OP), and net income (NI), all measured in US$. Complementary analyses were developed: profitability was calculated as [OP/GR]; payback (measured in years) was calculated as the average capital stock weighted by the area in use (ACS) divided by [GR-CEO]; and the internal rate of return (IRR) was calculated as 100/(payback). Input prices were 2015 prices obtained from Anuário Brasileiro da Pecuária, Centro de Estudos Avançados em Economia Aplicada, and quotations in the same animal production region (northeast São Paulo State) during the period mentioned above. Values were calculated in US$ according to the exchange rate of US$1.00 = R$3.34. The CEO, CTO, GR, OP, and NI per hectare for each N level were, respectively: US$1,919.66, US$2,048.47, US$2,905.72, US$857.25, and US$986.06 for 0 kg.ha-1; US$2,403.20, US$2,551.80, US$3,530.19, US$978.39, and US$1,126.99 for 90 kg.ha-1; US$3,180.42, US$3,364.81, US$4,985.03, US$1,620.23, and US$1,804.62 for 180 kg.ha-1; and US$3,709.14, US$3,915.15, US$5,554.95, US$1,639.80, and US$1,845.81 for 270 kg.ha-1. Regarding the other economic indices (profitability, payback, and IRR), the results were, respectively: 29.50%, 6.44, and 15.54% for 0 kg.ha-1; 27.72%, 6.88, and 14.54% for 90 kg.ha-1; 32.50%, 4.08, and 24.50% for 180 kg.ha-1; and 29.52%, 3.42, and 29.27% for 270 kg.ha-1. These values allow us to affirm that the best result was obtained at the N level of 270 kg.ha-1, which can be explained by the improvement in stocking rate caused by the increase in N level. However, a crucial consideration regarding high N application to pastures is the efficiency of N utilization (associated with environmental impacts), which normally decreases as the N level increases. Hence, considering both the efficiency of N utilization and the economic results, an N level of 180 kg.ha-1 can be recommended for tropical pastures used for beef cattle production, since it provided better profitability and causes smaller environmental impacts, as shown by other studies conducted in the same area.
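
The indices reported above follow directly from the definitions given in the abstract (profitability = OP/GR, payback = ACS/(GR - CEO), IRR = 100/payback). The sketch below recomputes them for the 270 kg.ha-1 treatment; since ACS is not reported in the abstract, the value used here is a hypothetical figure back-solved from the stated payback of 3.42 years, included only to make the sketch run.

```python
# Economic indices for the 270 kg.ha-1 treatment, using the abstract's definitions.
CEO = 3709.14   # effective operating cost, US$/ha
GR = 5554.95    # gross revenue, US$/ha
OP = 1639.80    # operating profit, US$/ha
ACS = 6312.67   # assumed average capital stock per hectare (not given in the abstract)

profitability = 100 * OP / GR   # percent
payback = ACS / (GR - CEO)      # years
irr = 100 / payback             # percent, as defined in the abstract

print(f"profitability = {profitability:.2f}%")  # ~29.52%, as reported
print(f"payback = {payback:.2f} years")         # ~3.42, as reported
print(f"IRR = {irr:.2f}%")                      # ~29.3%, close to the reported 29.27%
```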

Keywords: Brachiaria brizantha, cost-total operating, gross revenue, profitability

Procedia PDF Downloads 164
455 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015–2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pre-test will be administered. The same test will be given as a post-test after the last topic is covered. In order to establish equality of the groups' initial knowledge, a single-factor ANOVA of the pre-test scores will be performed. A single-factor ANOVA of the post-test minus pre-test score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in any of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test. The mediating effect will be evaluated using correlation and regression analyses. The groups' initial knowledge is considered equal when the result of the pre-test score ANOVA is not significant. If the result of the score-difference ANOVA is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and the score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: a significant correlation of IV to DV, a significant correlation of IV to MV, a significant relationship of MV to DV when both IV and MV are predictors in a regression model, and an absolute value of the coefficient of IV as the sole predictor that is larger than when both IV and MV are predictors. With a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning. This will yield the best learning results.
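
The mediation requirements enumerated above can be checked with ordinary regressions. The sketch below uses simulated data and hypothetical variable names (blended, obe, score_diff) and follows a Baron-and-Kenny-style check of those conditions; it is an illustration under assumed data, not the study's planned analysis output.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 120

# Simulated variables: IV (blended learning exposure), MV (OBE practice), DV (score gain)
blended = rng.normal(size=n)
obe = 0.6 * blended + rng.normal(size=n)
score_diff = 0.2 * blended + 0.5 * obe + rng.normal(size=n)

def fit(y, *xs):
    """OLS of y on the given predictors plus an intercept."""
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

m1 = fit(score_diff, blended)        # IV -> DV (coefficient c)
m2 = fit(obe, blended)               # IV -> MV (coefficient a)
m3 = fit(score_diff, blended, obe)   # IV + MV -> DV (coefficients c' and b)

c, a = m1.params[1], m2.params[1]
c_prime, b = m3.params[1], m3.params[2]
# Mediation is indicated when a and b are significant and |c'| < |c|
print(f"c = {c:.2f}, a = {a:.2f}, b = {b:.2f}, c' = {c_prime:.2f}")
```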

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 286
454 Improved Functions for Runoff Coefficients and Smart Design of Ditches and Biofilters for Effective Flow Detention

Authors: Thomas Larm, Anna Wahlsten

Abstract:

An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed regarding how they can be calculated and within what limits they can be used. Data used in different countries have been compiled, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter and the one that affects the calculated flow and required detention volume the most. Proposals have been developed for new runoff coefficients, including a new proposed method with equations for calculating runoff coefficients as a function of return time (years) and rain intensity (l/s/ha), respectively. It is also suggested, contrary to what many design manuals recommend, that the use of the Rational method need not be limited to a specific catchment size. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including quantification of the uncertainties. Parameters that have not yet been considered include the influence on the runoff coefficients of different design rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for the climate effect and the influence of an increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that achieves both high treatment and high flow detention effects and compared these with the effects of dry and wet ponds. Previous studies of biofilters have generally focused on the treatment of pollutants; their effect on flow volume, and how their flow detention capability can be improved, has rarely been studied. For both the new type of stormwater ditch and the biofilter, their performance under larger design rains and a future climate must be simulated in a model, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used on case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area.
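
The Rational method underlying the analysis computes the design flow as Q = C · i · A. The sketch below applies it with a hypothetical runoff-coefficient function of return time and rain intensity; the paper's actual proposed equations are not reproduced in the abstract, so the function here is an assumed placeholder used only to show the workflow.

```python
def runoff_coefficient(return_time_yr: float, rain_intensity_lps_ha: float) -> float:
    """Hypothetical C(return time, rain intensity) relationship, capped at 1.0.

    The study's proposed equations are not given in the abstract; this form
    simply increases C mildly with both arguments for illustration.
    """
    c = 0.30 + 0.02 * return_time_yr ** 0.5 + 0.0005 * rain_intensity_lps_ha
    return min(c, 1.0)

def rational_flow(c: float, intensity_lps_ha: float, area_ha: float) -> float:
    """Rational method: design flow in l/s from C, rain intensity (l/s/ha), and area (ha)."""
    return c * intensity_lps_ha * area_ha

c = runoff_coefficient(return_time_yr=10, rain_intensity_lps_ha=150)
q = rational_flow(c, intensity_lps_ha=150, area_ha=5.0)
print(f"C = {c:.2f}, design flow Q = {q:.0f} l/s")
```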

Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch

Procedia PDF Downloads 83
453 Antibacterial Bioactive Glasses in Orthopedic Surgery and Traumatology

Authors: V. Schmidt, L. Janovák, N. Wiegand, B. Patczai, K. Turzó

Abstract:

Large bone defects are not able to heal spontaneously. Bioactive glasses seem to be appropriate (bio)materials for bone reconstruction: they are osteoconductive and osteoinductive and therefore play a useful role in bony regeneration and repair. Because of their suboptimal mechanical properties (e.g., brittleness, low bending strength, and low fracture toughness), their applications are limited. Bioactive glass can, however, be used as a coating material applied to metal surfaces. In this way, when such implants are used, the excellent mechanical properties of metals and the biocompatibility and bioactivity of glasses are both utilized. Furthermore, ion release effects of bioactive glasses on osteogenic and angiogenic responses have been shown. Silicate bioactive glasses (45S5 Bioglass) induce the release and exchange of soluble Si, Ca, P, and Na ions at the material surface. This leads to specific cellular responses inducing bone formation, which favors the biointegration of orthopedic prostheses. The incorporation of additional elements into the silicate network, such as fluorine, magnesium, iron, silver, potassium, or zinc, has also been demonstrated, as the local delivery of these ions can enhance specific cell functions. Although hip and knee prostheses present a high success rate, bacterial infections, mainly implant-associated, are serious and frequent complications. Infection can also develop after implantation of hip prostheses, and its elimination means more surgeries for the patient and additional costs for the clinic. Prosthesis-related infection is a severe complication of orthopedic surgery, which often causes prolonged illness, pain, and functional loss. While international efforts are being made to reduce the risk of these infections, surgical site infections (SSIs) in orthopedics continue to occur in high numbers. It is currently estimated that up to 2.5% of primary hip and knee surgeries and up to 20% of revision arthroplasties are complicated by periprosthetic joint infections (PJIs). According to some authors, these numbers are underestimated, and they are also increasing. Staphylococcus aureus is the leading cause of both SSIs and PJIs, and the prevalence of methicillin-resistant S. aureus (MRSA) is on the rise, particularly in the United States. These deep infections lead to implant removal and consequently increase morbidity and mortality. The study targets this clinical problem using our experience so far with Ag-doped polymer coatings on titanium implants. Non-modified or modified (e.g., doped with antibacterial agents such as Ag) bioactive glasses could play a role in the prevention of infections or in the therapy of infected tissues. Bioactive glasses have excellent biocompatibility, as proved by in vitro cell culture studies with human osteoblast-like MG-63 cells. Ag-doped bioactive glass scaffolds have good antibacterial ability against Escherichia coli and other bacteria. It may be concluded that these scaffolds have great potential in the prevention and therapy of implant-associated bone infection.

Keywords: antibacterial agents, bioactive glass, hip and knee prosthesis, medical implants

Procedia PDF Downloads 179
452 Effective Thermal Retrofitting Methods to Improve Energy Efficiency of Existing Dwellings in Sydney

Authors: Claire Far, Sara Wilkinson, Deborah Ascher Barnstone

Abstract:

Energy issues have been a growing concern in recent decades. Limited energy resources and increasing energy consumption on one side, and environmental pollution and waste of resources on the other, have substantially affected the future of human life. Around 40 percent of the total energy consumption of Australian buildings goes to heating and cooling due to the low thermal performance of the buildings. The thermal performance of a building determines the amount of energy used for heating and cooling, which profoundly influences energy efficiency. Therefore, employing sustainable design principles and using construction materials effectively in the building envelope can play a crucial role in improving the energy efficiency of existing dwellings and enhancing the thermal comfort of occupants. The energy consumption for heating and cooling is normally determined by the quality of the building envelope, the part of a building which separates the habitable areas from the exterior environment. The building envelope consists of the external walls, external doors, windows, roof, ground, and the internal walls that separate conditioned spaces from non-conditioned spaces. Energy loss from the building envelope is the key factor: heat is lost through conduction, convection, and radiation. The thermal performance of the building envelope can be improved by different retrofitting methods, depending on the climate conditions and construction materials. Based on the available studies, the importance of employing sustainable design principles has been highlighted among Australian building professionals. However, the residential building sector still suffers from a lack of best-practice examples and experience in the effective use of construction materials for the building envelope. As a result, this study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of existing dwellings in Sydney, the most populated city in Australia. Based on the research findings, the best thermal retrofitting methods for increasing the thermal comfort and energy efficiency of existing residential dwellings, as well as reducing their environmental impact and footprint, have been identified and proposed.
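
The conduction losses through the envelope discussed above are commonly estimated element by element from Q = U · A · ΔT. The sketch below uses hypothetical U-values, areas, and temperature difference, not figures from the study, to show how a retrofit that lowers U reduces the heating load.

```python
# Steady-state conduction loss per envelope element: Q = U * A * dT (watts).
# U-values (W/m2K) and areas (m2) below are illustrative assumptions only.
elements_existing = {"walls": (1.8, 120.0), "roof": (1.5, 80.0), "windows": (5.8, 20.0)}
elements_retrofit = {"walls": (0.5, 120.0), "roof": (0.3, 80.0), "windows": (2.0, 20.0)}

def envelope_loss(elements: dict, delta_t: float) -> float:
    """Sum U*A*dT over all envelope elements, returning watts."""
    return sum(u * a * delta_t for u, a in elements.values())

dT = 15.0  # assumed indoor-outdoor temperature difference in K
print(f"existing envelope: {envelope_loss(elements_existing, dT) / 1000:.1f} kW")
print(f"retrofitted envelope: {envelope_loss(elements_retrofit, dT) / 1000:.1f} kW")
```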

Keywords: thermal comfort, energy consumption, residential dwellings, sustainable design principles, thermal retrofit

Procedia PDF Downloads 264
451 Beyond the “Breakdown” of Karman Vortex Street

Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal

Abstract:

A numerical analysis of flow over a heated circular cylinder is presented in this paper. The governing equations, namely the Navier-Stokes and energy equations within the Boussinesq approximation, along with the continuity equation, are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are taken as the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters that govern the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertial to viscous forces), unsteady Von Karman vortices can be seen shedding behind the cylinder. The shedding wake patterns can be seriously altered by heating or cooling the cylinder. The non-dimensional shedding frequency, called the Strouhal number, is found to increase as Ri increases, and the aerodynamic force coefficients CL and CD are observed to change their values. In the present vertical configuration of flow over the cylinder, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number. The unsteady vortices turn into steady standing recirculation bubbles behind the cylinder beyond this critical Richardson number. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to observe the flow structures on further increase of the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder, and the separation angle is found to decrease with Ri. Finally, we observe a second critical Richardson number, after which the flow is attached to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, there is a single plume emanating from the rear stagnation point of the cylinder. We also observe that the transition of the plume is a strong function of the Richardson number.
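
The non-dimensional groups that organize these results can be evaluated directly from their standard definitions. The sketch below computes Re, Gr, Ri = Gr/Re², and the Strouhal number St = f·D/U for hypothetical flow values; the inputs are assumptions for illustration and are not taken from the paper's simulations.

```python
# Non-dimensional groups for mixed convection past a heated cylinder
# (all input values below are hypothetical, for illustration only).
g = 9.81           # m/s^2, gravitational acceleration
beta = 3.4e-3      # 1/K, thermal expansion coefficient of air
nu = 1.6e-5        # m^2/s, kinematic viscosity of air
U = 0.5            # m/s, free-stream velocity
D = 0.01           # m, cylinder diameter
dT = 50.0          # K, cylinder surface minus free-stream temperature
f_shedding = 10.0  # Hz, hypothetical vortex-shedding frequency

Re = U * D / nu
Gr = g * beta * dT * D**3 / nu**2
Ri = Gr / Re**2           # buoyancy vs. inertia; large Ri suppresses shedding
St = f_shedding * D / U   # non-dimensional shedding frequency

print(f"Re = {Re:.0f}, Gr = {Gr:.0f}, Ri = {Ri:.3f}, St = {St:.2f}")
```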

Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown

Procedia PDF Downloads 402
450 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created from the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with artificial neural networks (ANN), was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to determine: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen well-known gas geothermometers developed previously, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
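
The general workflow described above, mapping log-transformed gas ratios to bottomhole temperature with an ANN and scoring the fit with RMSE and R, can be sketched as follows. This sketch uses scikit-learn's MLPRegressor (trained with Adam rather than the Levenberg-Marquardt algorithm used in the study) and synthetic data, so it only illustrates the calibration and evaluation steps, not the published geothermometers.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 300

# Synthetic gas-phase features: ln(CO2/H2) and ln(H2S/H2) ratios (hypothetical ranges)
ln_co2_h2 = rng.uniform(2.0, 8.0, n)
ln_h2s_h2 = rng.uniform(0.5, 4.0, n)
X = np.column_stack([ln_co2_h2, ln_h2s_h2])
bht = 150 + 15 * ln_co2_h2 + 10 * ln_h2s_h2 + rng.normal(scale=5, size=n)  # degrees C

X_tr, X_te, y_tr, y_te = train_test_split(X, bht, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

pred = ann.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5   # prediction error in degrees C
r = np.corrcoef(y_te, pred)[0, 1]              # linear correlation coefficient
print(f"RMSE = {rmse:.1f} degrees C, R = {r:.2f}")
```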

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 343