Search results for: nonlinear partial differential equations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2790

30 Modelling and Control of Milk Fermentation Process in Biochemical Reactor

Authors: Jožef Ritonja

Abstract:

The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and the knowledge of how to control it are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantities. The development of a control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for control design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for the fermentation process in batch bioreactors; therefore, a mathematical model that does allow such a design was developed and is presented. Based on the developed mathematical model, a control system was designed to ensure an optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and nonlinear nature of the controlled plant, a conventional proportional-integral-derivative (PID) controller with constant parameters does not provide the desired transient response. An improved adaptive control system is therefore proposed to improve the dynamics of the fermentation; adaptive control is suitable here because the parameter variations of the fermentation process are very slow. The developed control system was tested on dairy production in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities significant for the quality of the fermentation process.
The level of the carbon dioxide concentration gives important information about the fermentation process. The obtained results showed that the designed control system ensures a minimal error between the reference and actual values of the carbon dioxide concentration, both during the transient response and in steady state. The proposed control system makes reference signal tracking much more efficient than the conventional control systems currently used, which are based on linear control theory, and thus represents a very effective solution for the improvement of the milk fermentation process.
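The control idea in the abstract can be illustrated with a toy simulation. The sketch below compares a fixed-gain proportional controller with a simple MIT-rule adaptive law on a hypothetical first-order plant whose gain drifts slowly, mimicking the slow parameter variations the authors exploit; the plant, gains, and adaptation rate are illustrative assumptions, not the paper's actual fermentation model.

```python
import math

def track(adaptive, steps=4000, dt=0.01):
    """Drive a first-order plant dy/dt = -y + b(t)*u toward a constant
    reference. b(t) drifts slowly, mimicking the slow parameter
    variations of a fermentation process. All numbers are illustrative."""
    y, theta = 0.0, 0.0           # plant output, adaptive feedforward gain
    r, gamma, kp = 1.0, 1.0, 2.0  # reference, adaptation rate, fixed gain
    err_acc = 0.0                 # accumulated absolute tracking error
    for k in range(steps):
        b = 1.0 + 0.5 * math.sin(0.001 * k)   # slowly varying plant gain
        e = r - y
        if adaptive:
            theta += dt * gamma * e * r       # MIT-rule parameter update
            u = theta * r                     # adapted feedforward control
        else:
            u = kp * e                        # fixed proportional control
        y += dt * (-y + b * u)                # Euler step of the plant
        err_acc += abs(e) * dt
    return err_acc

fixed_err = track(adaptive=False)
adapt_err = track(adaptive=True)
```

Because the adaptive gain tracks the drifting plant gain, its accumulated error stays well below that of the fixed-gain controller, which retains a steady-state offset.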

Keywords: Bioprocess engineering, biochemical reactor, fermentation process, modeling, adaptive control.

29 Development of Moving Multifocal Electroretinogram with a Precise Perimetry Apparatus

Authors: Naoto Suzuki

Abstract:

A decline in visual sensitivity at arbitrary points on the retina can be measured using a precise perimetry apparatus along with a fundus camera. However, the retinal layer associated with this decline cannot be identified accurately with current medical technology. To investigate cryptogenic diseases, such as macular dystrophy, acute zonal occult outer retinopathy (AZOOR), and multiple evanescent white dot syndrome (MEWDS), we evaluated an electroretinogram (ERG) function that allows the center of the multifocal hexagonal stimulus array to be moved to a chosen position. Macular dystrophy is a generalized term for a variety of functional disorders of the macula lutea, and the ERG shows a diminution of the b-wave in these disorders. AZOOR causes an acute functional disorder of an outer layer of the retina, and the ERG shows a-wave and b-wave amplitude reduction as well as delayed 30 Hz flicker responses. MEWDS causes acute visual loss, and the ERG shows a decrease in a-wave amplitude. We combined an electroretinographic optical system and a perimetric optical system into an experimental apparatus that has the same optical system as a fundus camera. We also deployed an EO-50231 Edmund infrared camera, a 45-degree cold mirror, a lens with a 25-mm focal length, a halogen lamp, and an 8-inch monitor. We then employed a differential amplifier with gain 10, a 50 Hz notch filter, a high-pass filter with a 21.2 Hz cut-off frequency, and two non-inverting amplifiers with gains 1001 and 11. In addition, we used a USB-6216 National Instruments I/O device, an NE-113A Nihon Kohden plate electrode, an SCB-68A shielded connector block, and LabVIEW 2017 software for data retrieval. The multifocal hexagonal stimulus array was generated on the computer monitor with C++Builder 10.2, which was also used to move the center of the array left, right, up, and down. Cone and bright flash ERG results were observed using the moving ERG function.
The a-wave, b-wave, c-wave, and the photopic negative response were identified with cone ERG. The moving ERG function allowed the identification of the retinal layer causing visual alterations.
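As a rough illustration of the analogue front end described above, the sketch below applies a first-order discrete high-pass filter with the quoted 21.2 Hz cut-off to a synthetic trace containing a large DC baseline and a 30 Hz flicker-like component; the sampling rate and signal amplitudes are assumptions, not values from the paper.

```python
import math

def high_pass(signal, fs, fc):
    """First-order discrete RC high-pass filter, used here to strip the
    slow baseline drift below the cut-off frequency fc, as the 21.2 Hz
    analogue stage in the abstract does. Illustrative only."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], signal[0], 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)  # y[n] = a*(y[n-1] + x[n] - x[n-1])
        out.append(y)
        prev_x, prev_y = x, y
    return out

fs = 2000.0                                # sampling rate (assumed)
t = [k / fs for k in range(2000)]
# synthetic ERG-like trace: 2 mV DC baseline plus a 30 Hz flicker component
raw = [2.0 + 0.1 * math.sin(2 * math.pi * 30 * s) for s in t]
filtered = high_pass(raw, fs, fc=21.2)
```

After the filter settles, the DC baseline is removed while the 30 Hz component passes with only the moderate attenuation expected just above the cut-off.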

Keywords: Moving ERG, multifocal ERG, precise perimetry, retinal layers, visual sensitivity

28 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical processes but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbor, artificial neural network, and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side benefits: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
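The evaluation protocol (5-fold cross-validation scored with RMSEP and MAEP) can be sketched in a few lines; the fold-splitting scheme below is a generic contiguous split, an assumption rather than the authors' exact procedure.

```python
def k_folds(n, k):
    """Split indices 0..n-1 into k contiguous folds (sketch of the
    5-fold cross-validation protocol described in the abstract)."""
    base, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < rem else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def rmsep(obs, pred):
    """Root mean square error of prediction."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def maep(obs, pred):
    """Mean absolute error of prediction."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

folds = k_folds(720, 5)   # the 720 USDA county records, split into 5 folds
```

Each fold serves once as the held-out test set while the remaining four train the model, and the two metrics are averaged across folds.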

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.

27 An Identification Method of Geological Boundary Using Elastic Waves

Authors: Masamitsu Chikaraishi, Mutsuto Kawahara

Abstract:

This paper focuses on a technique for identifying the geological boundary of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. The geological boundary is defined as the boundary between layers with different elastic moduli. In tunnel excavation, it is important to estimate the ground conditions ahead of the cutting face in advance, since excavating into weak strata or fault fracture zones may delay the construction work and endanger workers. A theory for determining the geological boundary of the ground numerically is investigated, employing excavation blasts and their vibration waves as observation references. Following optimal control theory, a performance function defined as the square sum of the residuals between computed and observed velocities is minimized; the boundary layer is determined by this minimization. The elastic analysis governed by the Navier equation is carried out, assuming the ground to be an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary is calculated using the adjoint equation, and the weighted gradient method is applied as the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries between three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundary is then identified using the estimated blasting force.
With this identification procedure, numerical analysis results that closely match the observation data were obtained.
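The core idea of minimizing a squared-residual performance function by a gradient method can be shown on a one-parameter toy problem. The sketch below identifies an elastic modulus of a single-degree-of-freedom model using an analytic gradient instead of the paper's full adjoint finite-element formulation; all numbers are illustrative.

```python
def identify_modulus(u_obs, force, e0, lr=50.0, iters=400):
    """Recover an elastic modulus E from an observed displacement by
    gradient descent on the performance function J(E) = (u(E) - u_obs)^2,
    where u(E) = F / E for a one-degree-of-freedom toy model. This
    mirrors, in miniature, the squared-residual minimization of the
    paper; the learning rate plays the role of the weighted gradient
    method's step size."""
    e = e0
    for _ in range(iters):
        u = force / e                                  # forward problem
        grad = 2.0 * (u - u_obs) * (-force / e ** 2)   # analytic dJ/dE
        e -= lr * grad                                 # gradient step
    return e

# "true" modulus is 50: a force of 100 produces the observed displacement 2
e_hat = identify_modulus(u_obs=2.0, force=100.0, e0=40.0)
```

Starting from a wrong guess of 40, the iteration converges to the modulus that reproduces the observed response, exactly as the adjoint-based scheme converges to the boundary that reproduces the observed velocities.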

Keywords: Parameter identification, finite element method, average acceleration method, first order adjoint equation method, weighted gradient method, geological boundary, Navier equation, optimal control theory.

26 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing only partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be embedded within concrete elements, turning the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite was obtained by dispersing multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
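The self-sensing principle described above reduces to inverting the basic piezoresistive relation dR/R0 = GF * strain. A minimal sketch, with a purely hypothetical gauge factor (the real value must be calibrated for a given nanotube dosage and matrix):

```python
def strain_from_resistance(r0, r, gauge_factor):
    """Invert the piezoresistive relation dR/R0 = GF * strain to
    estimate strain from a resistance reading. The gauge factor of a
    CNT cement composite must be calibrated experimentally; the value
    used below is only a placeholder."""
    return (r - r0) / (r0 * gauge_factor)

GF = 150.0    # hypothetical calibrated gauge factor
# 1 kOhm unloaded baseline, +3 Ohm under load
eps = strain_from_resistance(1000.0, 1003.0, GF)
```

Here a 3 Ohm change on a 1 kOhm baseline maps to 20 microstrain, which is the kind of conversion a monitoring system would apply continuously to the embedded transducers.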

Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.

25 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Antonio Vitale, Nicola Genito, Giovanni Cuciniello, Ferdinando Montemari

Abstract:

The introduction of tilt-rotor aircraft into the existing civil air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions; the latter are introduced to best fit the actual rotor behavior and balance the differences existing between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA Tilt-Rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment.
The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.
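The time-domain identification step can be illustrated on a toy discrete first-order model: given input-output data, the transfer-function parameters follow from a least-squares fit. The model order, input signal, and parameter values below are assumptions for illustration only, far simpler than the scheduled transfer functions of the paper.

```python
import math

def fit_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k], a toy version of
    the time-domain identification of transfer-function parameters
    described in the abstract. Solves the 2x2 normal equations directly."""
    syy = suu = syu = sy1y = sy1u = 0.0
    for k in range(len(u) - 1):
        syy += y[k] * y[k]
        suu += u[k] * u[k]
        syu += y[k] * u[k]
        sy1y += y[k + 1] * y[k]
        sy1u += y[k + 1] * u[k]
    det = syy * suu - syu * syu
    a = (sy1y * suu - sy1u * syu) / det   # Cramer's rule
    b = (sy1u * syy - sy1y * syu) / det
    return a, b

u = [math.sin(0.3 * k) for k in range(200)]   # persistently exciting input
y = [0.0]
for k in range(199):
    y.append(0.9 * y[k] + 0.5 * u[k])         # "true" plant: a=0.9, b=0.5
a_hat, b_hat = fit_first_order(u, y)
```

With noiseless data the fit recovers the true parameters exactly; with flight data the same normal equations yield the best least-squares estimate.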

Keywords: Flapping Dynamics, Flight Dynamics, System Identification, Tilt-Rotor Modeling and Simulation.

24 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle application. The focus of the study is the suggestion of an optimal model and design rules for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is first modeled using an equivalent magnetic circuit that accounts for leakage elements. This topology with permanent magnets in the back-iron translator is characterized by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrusts of the single-phase and three-phase systems are compared while moving the translator one pole pitch forward and backward. Through this analysis and comparison, a single-phase cylindrical system is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account by finite element analysis. The losses are also examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the specific electrical steel. Considerations of thermal performance and mechanical robustness are essential, because the heat generated by the losses in each region of the generator affects the overall efficiency and the insulation of the machine. In addition, an electric machine with linear oscillating motion requires a support system that can withstand the dynamic forces and moving masses. Accordingly, the fatigue analysis of the shaft is carried out using the kinetic equations, and the thermal characteristics of each region are analyzed at the operating frequency. The results of this study will give very important design rules for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.
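The loss terms listed above can be sketched as a simple calculation: I²R copper loss plus a Steinmetz-type iron loss with hysteresis and eddy-current terms. The coefficients and the simplified hysteresis exponent below are placeholders; real values come from the electrical-steel datasheet and machine geometry.

```python
def machine_losses(i_rms, r_winding, f, b_peak, kh, ke):
    """Loss-breakdown sketch for generator sizing: ohmic copper loss
    plus a Steinmetz-type iron loss (hysteresis + eddy-current terms).
    kh and ke depend on the electrical steel and must come from
    datasheets; the values used below are placeholders, and the
    hysteresis exponent is taken as 2 for simplicity."""
    copper = i_rms ** 2 * r_winding                        # I^2 R winding loss
    iron = kh * f * b_peak ** 2 + ke * (f * b_peak) ** 2   # hysteresis + eddy
    return copper, iron

cu, fe = machine_losses(i_rms=10.0, r_winding=0.5, f=60.0, b_peak=1.5,
                        kh=0.02, ke=1e-5)
```

Summing these terms per region, together with the mechanical losses, gives the heat inputs for the thermal analysis the abstract describes.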

Keywords: Equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, free piston engine, cylindrical linear oscillating generator

23 Association of Brain Derived Neurotrophic Factor with Iron as well as Vitamin D, Folate and Cobalamin in Pediatric Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The impact of metabolic syndrome (MetS) on cognition and brain function is being investigated. Iron deficiency and deficiencies of vitamins B9 (folate) and B12 (cobalamin) are the best-known nutritional anemias. They are associated with cognitive disorders and learning difficulties. The antidepressant effects of vitamin D are known, and its deficiency affects mental functions negatively. The aim of this study is to investigate possible correlations of MetS with serum brain-derived neurotrophic factor (BDNF), iron, folate, cobalamin, and vitamin D in pediatric patients. Thirty children whose age- and sex-dependent body mass index (BMI) percentiles lay between 15 and 85, and 60 morbidly obese (MO) children above the 99th percentile, constituted the study population. Anthropometric measurements were taken and BMI values were calculated. Age- and sex-dependent BMI percentile values were obtained using the appropriate tables prepared by the World Health Organization (WHO), and obesity classification was performed according to WHO criteria. Those with MetS were evaluated according to MetS criteria. Serum BDNF was determined by enzyme-linked immunosorbent assay. Serum folate was analyzed by an immunoassay analyzer. Serum cobalamin concentrations were measured using electrochemiluminescence immunoassay. Vitamin D status was determined by the measurement of 25-hydroxycholecalciferol [25-hydroxy vitamin D3, 25(OH)D] using high performance liquid chromatography. Statistical evaluations were performed using SPSS for Windows, version 16; p values less than 0.05 were accepted as statistically significant. Lower folate and cobalamin values, although statistically insignificant, were found in MO children compared to those observed for children with normal BMI. For iron and BDNF values, no alterations were detected among the groups. Significantly decreased vitamin D concentrations were noted in MO children with MetS in comparison with those in children with normal BMI (p ≤ 0.05).
The positive correlation observed between iron and BDNF in the normal-BMI group was not found in the two MO groups. In the MetS group, the partial correlation among iron, BDNF, folate, cobalamin, and vitamin D, controlling for waist circumference and BMI, was r = -0.501 (p ≤ 0.05); no such correlation was found in the MO and normal-BMI groups. In conclusion, vitamin D should also be considered during the assessment of pediatric MetS, and waist circumference and BMI should be evaluated collectively during the evaluation of MetS in children. Within this context, BDNF appears to be a key biochemical parameter during the examination of the degree of obesity in terms of mental functions, cognition, and learning capacity. The association observed between iron and BDNF in children with normal BMI was not detected in the MO groups, possibly due to the development of inflammation and other obesity-related pathologies. It was suggested that this finding may contribute to explaining the mental function impairments commonly observed among obese children.

Keywords: Brain-derived neurotrophic factor, iron, Vitamin B9, Vitamin B12, Vitamin D.

22 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures in recent years. The design of expansion joints is rather inconsistent across bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed, taking as input artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g with an increment of 0.05 g. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
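Fragility curves of the kind discussed above are conventionally modelled as lognormal CDFs of PGA. A minimal sketch over the study's 0.1-1.0 g grid, with an assumed median and dispersion (not values from the paper):

```python
import math

def fragility(pga, median, beta):
    """Probability of reaching a damage state at a given PGA, modelled
    as a lognormal CDF, the standard functional form for component and
    system fragility curves. median (in g) and beta (log-standard
    deviation) are illustrative placeholders."""
    z = math.log(pga / median) / (beta * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# probability of exceedance across the PGA grid used in the study
pgas = [0.1 + 0.05 * i for i in range(19)]   # 0.1 g ... 1.0 g
curve = [fragility(a, median=0.45, beta=0.5) for a in pgas]
```

By construction the curve rises monotonically from near zero at 0.1 g to near one at high intensity, and component curves of this form can be combined into system curves as the abstract describes.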

Keywords: Expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis.

21 The Performance Analysis of Valveless Micropump with Contoured Nozzle/Diffuser

Authors: Cheng-Chung Yang, Jr-Ming Miao, Fuh-Lin Lih, Tsung-Lung Liu, Ming-Hui Ho

Abstract:

The operating performance of a valveless micropump strongly depends on the shape of the connected nozzle/diffuser and on the Reynolds number. The aims of the present work are to compare the performance curves of a micropump with the original straight nozzle/diffuser and with a contoured nozzle/diffuser under different back pressure conditions. The tested valveless micropumps are assembled from five patterned PMMA plates using a hot-embossing technique. The structures of the central chamber, the inlet/outlet reservoirs, and the connected nozzle/diffuser are fabricated with a laser cutting machine. The micropump is actuated with a circular PZT film bonded to the bottom of the central chamber. The deformation of the PZT membrane under various input voltages is measured with a laser displacement probe. A simple testing facility is also constructed to evaluate the performance curves for comparison. In order to observe the evolution of the low-Reynolds-number multiple-vortex flow patterns within the micropump during the suction and pumping modes, the unsteady, incompressible, laminar, three-dimensional Reynolds-averaged Navier-Stokes equations are solved. The working fluid is DI water with constant thermophysical properties. The oscillating behavior of the PZT film is modeled as a moving boundary wall by means of a user-defined function (UDF). With the dynamic mesh method, the instantaneous pressure and velocity fields are obtained and discussed. Results indicated that the volume flow rate does not increase monotonically with the oscillating frequency of the PZT film, regardless of the shape of the nozzle/diffuser. The present micropump can generate a maximum volume flow rate of 13.53 ml/min when the operating frequency is 64 Hz and the input voltage is 140 volts. The micropump with the contoured nozzle/diffuser can provide a 7 ml/min flow rate even when the back pressure is up to 400 mm H2O. CFD results revealed that the central chamber was occupied by multiple pairs of counter-rotating vortices during the suction and pumping modes.
The net volume flow rate over a complete oscillation period of the PZT film was also evaluated.
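The rectification principle of a valveless nozzle/diffuser pump can be quantified with the classical estimate attributed to Stemme, based on the ratio of the pressure-loss coefficients of the two flow directions; the coefficient values below are illustrative assumptions, not the paper's measurements.

```python
def net_flow_fraction(xi_nozzle, xi_diffuser):
    """Classical estimate (after Stemme) of the fraction of the stroke
    volume that a valveless pump rectifies per cycle, from the ratio of
    the nozzle and diffuser pressure-loss coefficients. A contoured
    diffuser lowers xi_diffuser and so raises this fraction."""
    ratio = (xi_nozzle / xi_diffuser) ** 0.5
    return (ratio - 1.0) / (ratio + 1.0)

# illustrative coefficients: the nozzle direction loses 4x more pressure
frac = net_flow_fraction(xi_nozzle=1.0, xi_diffuser=0.25)
```

Equal loss coefficients give zero net flow, which is why the contouring of the nozzle/diffuser, by increasing the asymmetry, improves the pumping performance reported above.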

Keywords: Valveless micropump, PZT diaphragm, contoured nozzle/diffuser, vortex flow.

20 A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Authors: Shafiq R. Qureshi, Syed Noman Danish, Muhammad Saeed Khalid

Abstract:

Fossil fuels are the major source for meeting world energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Renewable energy utilization is the need of the time to meet future challenges. Ocean energy is one of these promising energy resources. Three-fourths of the Earth's surface is covered by the oceans. This enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents, and offshore solar energy. Relatively few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy requires detailed knowledge of the underlying mathematical governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict the wave climatology in lab simulation. Several techniques have been developed, mostly stemming from the numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools to understand and analyze the wave climatology. Models of the 1st, 2nd and 3rd generations have been developed to estimate the wave characteristics needed to assess the power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. The squeezing action of the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore application.
Increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is then converted to renewable energy. This approach results in an economical wave energy converter, owing to near-shore installation and to the denser waves produced by shoaling. The method is more efficient because it taps both the potential and the kinetic energy of the waves.
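The shoaling gain the method relies on can be estimated to first order with Green's law for shallow-water waves; the depths and wave height below are illustrative.

```python
def shoaled_height(h_deep, h_shallow, wave_height):
    """Green's law estimate of wave-height growth as a shallow-water
    wave moves into shallower depth: H2 = H1 * (h1/h2)**0.25. A
    first-order way to quantify the shoaling gain that the proposed
    on-shore converter exploits; depths in metres."""
    return wave_height * (h_deep / h_shallow) ** 0.25

# a 1 m wave moving from 16 m to 1 m depth doubles in height
h1 = shoaled_height(16.0, 1.0, 1.0)
```

Since wave energy density scales with the square of the wave height, this doubling of height corresponds to roughly a fourfold increase in energy density at the shore.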

Keywords: Wave energy extraction, wave shoaling phenomenon.

19 Life Cycle Datasets for the Ornamental Stone Sector

Authors: Isabella Bianco, Gian Andrea Blengini

Abstract:

The environmental impact of ornamental stones (such as marbles and granites) is largely debated. Starting from the industrial revolution, continuous improvements in machinery have led to greater exploitation of this natural resource and to more international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stones has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets about the specific technologies employed in the stone production chain. For example, the databases do not contain information about diamond wires, chains or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data for specific stone processes. To this goal, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI 14040-14044 and to the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles, and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies comprise both soft stones (marbles) and hard stones (gneiss).
In particular, data about energy, materials and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. Data were then processed with dedicated LCA software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and to encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the realization of accurate Life Cycle Inventory data aims at making available, to researchers and stone experts, ILCD-compliant datasets of the most significant processes and technologies related to the ornamental stone sector.
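The free-parameter idea described above can be sketched as a small parameterized cradle-to-gate inventory function. All stage names and coefficient values below are illustrative placeholders, not the study's primary data:

```python
# Hypothetical parameterized cradle-to-gate inventory for finished slab.
# All stage names and coefficients are illustrative placeholders, not
# measured data from the quarries and plants surveyed in the study.

def cradle_to_gate_energy(m2_slab, diesel_mj_per_m3_extracted,
                          kwh_per_m2_sawn, kwh_per_m2_finished,
                          yield_m2_per_m3):
    """Return (diesel MJ, electricity kWh) for m2_slab of finished product."""
    blocks_m3 = m2_slab / yield_m2_per_m3            # stone volume to extract
    diesel = blocks_m3 * diesel_mj_per_m3_extracted  # quarry stage (diesel machinery)
    electricity = m2_slab * (kwh_per_m2_sawn + kwh_per_m2_finished)  # plant stages
    return diesel, electricity

# Free parameters let the same model fit a soft marble or a hard gneiss chain.
diesel, kwh = cradle_to_gate_energy(100,
                                    diesel_mj_per_m3_extracted=250,
                                    kwh_per_m2_sawn=4.0,
                                    kwh_per_m2_finished=1.5,
                                    yield_m2_per_m3=30)
print(round(diesel, 2), kwh)  # 833.33 550.0
```

Swapping in coefficients measured for a specific production chain is then a matter of changing arguments, not restructuring the model.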

Keywords: LCA datasets, life cycle assessment, ornamental stone, stone environmental impact.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1106
18 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using a high-resolution real-time kinematic global positioning system (RTK-GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion was limited to the northern and southern parts. Concurrently, a differential correction and validation procedure was applied to identify errors in the datasets. The resultant DEMs demonstrated a small error ratio (≤ 3%) against the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale allows timely, cost-effective predictions with more comprehensive coverage and greater accuracy. Applying DEM techniques in an eco-geomorphic context, such insights into ecosystem dynamics at this coastal intertidal system are valuable for assessing the accuracy of the predicted eco-geomorphic risk and supporting sustainable conservation management. 
This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface-elevation dynamics could be profitably applied worldwide.
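The validation step described above amounts to comparing DEM-derived elevations against RTK-GPS checkpoints and reporting the share of points exceeding a tolerance. A minimal sketch, with illustrative values rather than survey data:

```python
# Minimal sketch of DEM validation against RTK-GPS checkpoints.
# The elevation values and tolerance below are illustrative, not survey data.

def dem_error_ratio(dem_z, gps_z, tolerance=0.10):
    """Fraction of checkpoints where |DEM - GPS| exceeds `tolerance` (metres)."""
    assert len(dem_z) == len(gps_z)
    errors = [abs(d - g) for d, g in zip(dem_z, gps_z)]
    exceed = sum(1 for e in errors if e > tolerance)
    return exceed / len(errors)

dem = [1.52, 0.98, 2.31, 1.10, 0.75]
gps = [1.50, 1.00, 2.30, 1.30, 0.76]   # one checkpoint off by 0.20 m
ratio = dem_error_ratio(dem, gps)
print(ratio)  # 0.2 -> one of these five toy points exceeds the tolerance
```

In the study's setting, the same comparison across the 30 field locations yields the reported ≤ 3% error ratio.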

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes.

17 An Index for the Differential Diagnosis of Morbid Obese Children with and without Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Metabolic syndrome (MetS) is a severe health problem caused by morbid obesity, the severest form of obesity. The components of MetS are rather stable in adults. However, the diagnosis of MetS in morbid obese (MO) children still constitutes a matter of discussion. The aim of this study was to develop a formula which facilitated the diagnosis of MetS in MO children and was capable of discriminating MO children with and without MetS findings. The study population comprised MO children. Age- and sex-dependent body mass index (BMI) percentiles of the children were above 99. Increased blood pressure, elevated fasting blood glucose (FBG), elevated triglycerides (TRG) and/or decreased high density lipoprotein cholesterol (HDL-C), in addition to central obesity, were listed as MetS components for each child. Two groups were constituted. The first group contained 42 MO children without MetS components. The second group comprised 44 MO children with at least two MetS components. Anthropometric measurements including weight, height, waist and hip circumferences were performed during physical examination. BMI and homeostatic model assessment of insulin resistance (HOMA-IR) values were calculated. Informed consent forms were obtained from the parents of the children. The Institutional Non-Interventional Clinical Studies Ethics Committee approved the study design. Routine biochemical analyses including FBG, insulin (INS), TRG and HDL-C were performed. The performance and clinical utility of the Diagnostic Obesity Notation Model Assessment Metabolic Syndrome Index (DONMA MetS index), (INS/FBG)/(HDL-C/TRG) × 100, were tested. Appropriate statistical tests were applied to the study data. A p value smaller than 0.05 was defined as significant. MetS index values were 41.6 ± 5.1 in the MO group and 104.4 ± 12.8 in the MetS group. Corresponding HDL-C values were 54.5 ± 13.2 mg/dl and 44.2 ± 11.5 mg/dl. There was a statistically significant difference between the groups (p < 0.001). 
Upon evaluation of the correlations between MetS index and HDL-C values, a much stronger negative correlation was found in the MetS group (r = -0.515; p = 0.001) than in the MO group (r = -0.371; p = 0.016). From these findings, it was concluded that the difference between the MO and MetS groups was highly significant for this recently introduced MetS index. This was due to the involvement of all of the biochemically defined MetS components in the index. This is particularly important because each of the four parameters used in the formula is a cardiac risk factor. Aside from discriminating MO children with and without MetS findings, the MetS index introduced in this study is important from the cardiovascular risk point of view in the MetS group of children.
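The index formula stated in the abstract can be computed directly. The sample laboratory values below are illustrative profiles, not patient data from the study:

```python
# DONMA MetS index from the abstract: (INS/FBG) / (HDL-C/TRG) * 100.
# Units as reported: INS in uIU/ml; FBG, HDL-C and TRG in mg/dl.
# The two profiles below are illustrative, not actual patient data.

def mets_index(ins, fbg, hdl_c, trg):
    return (ins / fbg) / (hdl_c / trg) * 100

mo   = mets_index(ins=12, fbg=88, hdl_c=55, trg=90)    # MO child without MetS
mets = mets_index(ins=25, fbg=100, hdl_c=40, trg=160)  # MO child with MetS
print(round(mo, 1), round(mets, 1))  # 22.3 100.0
```

Higher insulin and triglycerides with lower HDL-C push the index up, which is why the MetS group's mean (104.4) sits well above the MO group's (41.6).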

Keywords: Fasting blood glucose, high density lipoprotein cholesterol, insulin, metabolic syndrome, morbid obesity, triglycerides.

16 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency. It also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to make the system conform to grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used at the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zone 1 and zone 2 require a large step size in order to obtain a high tracking speed. Further, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid to maintain the amplitude, phase and frequency parameters, as well as to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPower Systems and Simulink toolboxes of MATLAB. 
The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and power supplied by the voltage source converter. The results obtained from the proposed system are found to be satisfactory.
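The three-zone variable-step P&O rule described above can be sketched as follows. The zone thresholds on |dP/dV| and the step magnitudes are illustrative assumptions, not values from the paper:

```python
# Sketch of a three-zone variable-step Perturb & Observe rule: a fine step
# near the MPP (zone 0, where dP/dV is close to zero) and larger steps on
# the flanks (zones 1 and 2) for fast tracking. The thresholds and step
# sizes below are illustrative assumptions, not the paper's tuned values.

def po_step(p_now, p_prev, v_now, v_prev,
            th1=0.5, th2=2.0, steps=(0.05, 0.5, 1.0)):
    """Return the next voltage perturbation (sign and magnitude, in volts)."""
    dv = v_now - v_prev
    if dv == 0:
        return steps[0]                # no voltage change: re-perturb finely
    slope = (p_now - p_prev) / dv      # dP/dV estimate
    mag = abs(slope)
    if mag < th1:
        step = steps[0]                # zone 0: near MPP, fine step
    elif mag < th2:
        step = steps[1]                # zone 1: moderate step
    else:
        step = steps[2]                # zone 2: far from MPP, coarse step
    # Hill-climb: move in the direction that increased power.
    return step if slope > 0 else -step

# Toy P-V curve p(v) = -(v - 30)**2 + 900 with its MPP at v = 30 V.
p = lambda v: -(v - 30) ** 2 + 900
d = po_step(p(21.0), p(20.0), 21.0, 20.0)
print(d)  # 1.0: far below the MPP, so a coarse positive step is taken
```

Near the MPP the same rule returns the fine step with the correct sign, which is what suppresses steady-state oscillation around the maximum power point.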

Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.

15 Transforming Ganges to be a Living River through Waste Water Management

Authors: P. M. Natarajan, Shambhu Kallolikar, S. Ganesh

Abstract:

By size and volume of water, the Ganges River basin is the biggest among the fourteen major river basins in India. By Hindu faith, it is the main ‘holy river’ of the nation. But, of late, pollution loads from both domestic and industrial sources are deteriorating the surface water, groundwater and land resources, and hence the environment of the Ganges River basin is under threat. Seeing this scenario, the Indian government began to reclaim this river through the two Ganges Action Plans I and II from 1986, spending Rs. 2,747.52 crores ($457.92 million). But the result was no improvement in the water quality of the river and groundwater or in the environment, even after almost three decades of reclamation, and hence the new Indian government is now taking extra care to rejuvenate this river, allotting Rs. 2,037 crores ($339.50 million) in 2014 and Rs. 20,000 crores ($3,333.33 million) in 2015. The reason for the poor water quality and stinking environment even after three decades of reclamation is that the sewage receives either no treatment or only partial treatment. Hence, the authors now suggest tertiary-level treatment of sewage of all sources and origins in the Ganges River basin and recycling the entire treated water for non-domestic uses. At 20 million litres per day (MLD) capacity for each sewage treatment plant (STP), this basin needs about 2,020 plants to treat the entire sewage load. The cost of the STPs is Rs. 3,43,400 million ($5,723.33 million) and the annual maintenance cost is Rs. 15,352 million ($255.87 million). The advantages of the proposed exercise are as follows: a volume of 1,769.52 million m3 of biogas can be produced. Since biogas is an energy carrier, it can be used as a fuel for any heating purpose, such as cooking. It can also be used in a gas engine to convert the energy in the gas into electricity and heat. 
It is possible to generate about 3,539.04 million kilowatt-hours of electricity per annum from the biogas generated in the process of wastewater treatment in the Ganges basin. The income generated from this electricity works out to Rs. 10,617.12 million ($176.95 million). This power can be used to bridge the supply-demand gap of energy in the power-hungry villages, where 300 million people are without electricity in India even today, and to run these STPs as well. The 664.18 million tonnes of sludge generated by the treatment plants per annum can be used in agriculture as manure with suitable amendments. By arresting the pollution load, the 187.42 cubic kilometres (km3) of groundwater potential of the Ganges River basin could be protected from deterioration. Since the sewage can be recycled for non-domestic purposes, about 14.75 km3 of fresh water per annum can be conserved for future use. The total value of the water saving per annum is Rs. 22,11,916 million ($36,865.27 million), and each citizen of the Ganges River basin can save Rs. 4,423.83 ($73.73) per annum and Rs. 12.12 ($0.202) per day by recycling the treated water for non-domestic uses. Further, the environment of this basin could be kept clean by arresting the foul smell as well as the 3% of greenhouse gas emissions from the stinking waterways and land. These are the ways to reclaim the waterways of the Ganges River basin from deterioration.
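The energy figures above are internally consistent under conversion factors of roughly 2 kWh of electricity per m³ of biogas and a tariff of about Rs 3 per kWh; both factors are back-calculated assumptions, not values stated in the abstract:

```python
# Back-of-the-envelope check of the abstract's energy figures. The two
# conversion factors below (2 kWh per m3 of biogas, Rs 3 per kWh) are
# back-calculated assumptions that reproduce the quoted numbers.

biogas_million_m3 = 1769.52   # annual biogas from the proposed STPs
kwh_per_m3 = 2.0              # assumed electricity yield per m3 of biogas
tariff_rs_per_kwh = 3.0       # assumed tariff

electricity_million_kwh = biogas_million_m3 * kwh_per_m3
income_million_rs = electricity_million_kwh * tariff_rs_per_kwh
print(round(electricity_million_kwh, 2), round(income_million_rs, 2))
# 3539.04 10617.12 -- matching the figures quoted in the abstract
```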

Keywords: Holy Ganges River, lifeline of India, wastewater treatment and management, making Ganges permanently holy.

14 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: A. Appe, B. Poluparthi, L. Kasivajjula, U. Mv, S. Bagadi, P. Modi, A. Singh, H. Gunupudi, S. Troiano, J. Paul, J. Stovall, J. Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework which aids strategists in formulating key decisions to improve the facility’s market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State and about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility’s encounters to the total encounters among the group of potential competitor facilities. The current study proposes a two-pronged approach of competitor identification and regression to evaluate and predict market share, respectively. A model-agnostic technique, SHAP (SHapley Additive exPlanations), is leveraged to quantify the relative importance of features impacting the market share. Typical techniques in the literature to quantify the degree of competitiveness among facilities use an empirical method to calculate a competitive factor to interpret the severity of competition. The proposed method identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias from empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying the patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are thereby identified. 
Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict the market share. To identify key drivers of market share at an overall level, permutation feature importance of the attributes is calculated. For relative quantification of features at a facility level, SHAP, a model-agnostic explainer, is incorporated. This helps to identify and rank the attributes at each facility which impact the market share. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias. Together, these outputs help drive strategic business decisions.
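The market-share definition used above (a facility's encounters divided by total encounters within its competitor pool) can be sketched directly. Facility names and encounter counts are illustrative:

```python
# Minimal sketch of the market-share definition from the abstract: a
# facility's encounters divided by total encounters within its identified
# competitor pool. Names and counts below are illustrative, not study data.

def market_share(encounters, facility):
    """Share of total encounters captured by `facility` within its pool."""
    total = sum(encounters.values())
    return encounters[facility] / total

pool = {"Facility A": 1200, "Facility B": 800, "Facility C": 2000}
share_a = market_share(pool, "Facility A")
print(round(share_a, 2))  # 0.3
```

The regression target for each facility-period is this ratio, so the competitor-identification step (which fixes the denominator's pool) directly shapes what the Random Forest learns.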

Keywords: Competition, DAGs, hospital, healthcare, machine learning, market share, random forest, SHAP.

13 Coastal Vulnerability Index and Its Projection for Odisha Coast, East Coast of India

Authors: Bishnupriya Sahoo, Prasad K. Bhaskaran

Abstract:

Tropical cyclones are among the worst natural hazards, leaving a trail of destruction and causing enormous damage to life, property, and coastal infrastructure. In a global perspective, the Indian Ocean is considered one of the cyclone-prone basins of the world. Specifically, the frequency of cyclogenesis in the Bay of Bengal is higher than in the Arabian Sea. Of the four maritime states on the East coast of India, Odisha is highly susceptible to tropical cyclone landfall. Historical records clearly show that the frequency of cyclones has decreased in this basin. However, in recent decades, the intensity and size of tropical cyclones have increased. This is a matter of concern, as the risk and vulnerability of the Odisha coast exposed to high wind speeds and gusts during cyclone landfall have increased. In this context, there is a need to assess and evaluate the severity of coastal risk, the area of exposure under risk, and the associated vulnerability, with a higher dimension in a multi-risk perspective. A changing climate can result in the emergence of new hazards and vulnerability over a region, with differential spatial and socio-economic impact. Hence there is a need for coastal vulnerability projections under a changing climate scenario. With this motivation, the present study attempts to estimate the destructiveness of tropical cyclones based on the Power Dissipation Index (PDI) for those cyclones that made landfall along the Odisha coast, which exhibits an increasing trend based on historical data. The study also covers futuristic scenarios of integral coastal vulnerability based on the trends in PDI for the Odisha coast. This study considers 11 essential and important parameters: cyclone intensity, storm surge, onshore inundation, mean tidal range, continental shelf slope, topographic elevation onshore, rate of shoreline change, maximum wave height, relative sea level rise, rainfall distribution, and coastal geomorphology. 
The study signifies that, over a decadal scale, the coastal vulnerability index (CVI) depends largely on the incremental change in variables such as cyclone intensity, storm surge, and the associated inundation. In addition, the study performs a critical analysis of the modulation of PDI on storm surge and inundation characteristics for the entire coastal belt of Odisha State. Interestingly, the study brings to light that a linear correlation exists between the storm tide and PDI. The trend analysis of PDI and its projection for coastal Odisha have direct practical applications in effective coastal zone management and vulnerability assessment.
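The abstract does not state the exact CVI formula; a widely used form (following Gornitz and the USGS) is the square root of the product of ranked variables divided by their count. A sketch with the 11 listed parameters and illustrative 1-5 ranks:

```python
import math

# Sketch of a coastal vulnerability index in the widely used Gornitz/USGS
# form, CVI = sqrt(product of variable ranks / n). The abstract does not
# give its exact formula, so this form and the 1-5 ranks below are
# illustrative assumptions covering the 11 parameters it lists.

ranks = {
    "cyclone intensity": 5, "storm surge": 4, "onshore inundation": 4,
    "mean tidal range": 3, "continental shelf slope": 2,
    "onshore topographic elevation": 3, "rate of shoreline change": 4,
    "maximum wave height": 3, "relative sea level rise": 4,
    "rainfall distribution": 3, "coastal geomorphology": 5,
}

def cvi(ranks):
    prod = 1
    for r in ranks.values():
        prod *= r                      # product of all variable ranks
    return math.sqrt(prod / len(ranks))

print(round(cvi(ranks), 1))  # roughly 307.0 for these toy ranks
```

Because the ranks enter multiplicatively, a one-step increase in a single driver such as cyclone intensity shifts the index substantially, which matches the abstract's finding that the decadal CVI is dominated by incremental change in intensity, surge, and inundation.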

Keywords: Bay of Bengal, coastal vulnerability index, power dissipation index, tropical cyclone.

12 Impact of Liquidity Crunch on Interbank Network

Authors: I. Lucas, N. Schomberg, F-A. Couturier

Abstract:

Most empirical studies have analyzed how liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis has highlighted the importance of grasping and controlling systemic risk, and the willingness of central banks to ease their monetary policies to save defaulting or illiquid banks. This last point suggests that banks may pay less attention to liquidity risk, which, in turn, can become an important new channel of loss. Financial regulation focuses on the most important and “systemic” banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worthwhile to analyze the sensitivity to this channel of the various elements of the global bank network. A small bank is not considered potentially systemic; however, the interaction of many small banks together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered the core of the network. The proposed method uses an agent-based model structure in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second, to model a network as close as possible to the actual interbank market, 578 fictitious banks, smaller than those in the first class, have been split into two groups of small and medium ones. All banks are active on the European interbank network and have deposit and market activity. A simulation of 12 three-month periods, representing a mid-term interval of three years, is projected. In each period, there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new demands for credit, and securities sales. The last two actions are part of the refunding process developed in this paper. 
To strengthen the reliability of the proposed model, the dynamics of random parameters are managed with stochastic equations; the variations of rates, for example, are generated by the Vasicek model. The Central Bank is considered the lender of last resort, which allows banks to borrow at the REPO rate, and some conditions for the ejection of banks from the system are introduced.

A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or the aggregate of borrowing between classes. It is mainly shown that the three groups of the European interbank network do not have the same response, and that intermediate banks are the most sensitive to liquidity risk.
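The Vasicek dynamics used to drive the random rate parameters, dr = a(b − r)dt + σ dW, can be sketched with a simple Euler discretisation. The parameter values below are illustrative, not calibrated to the paper's data:

```python
import random

# Sketch of the Vasicek rate dynamics mentioned above:
#   dr = a*(b - r)*dt + sigma*dW,
# discretised with Euler steps. Parameter values are illustrative
# assumptions, not calibrated to the paper's interbank data.

def vasicek_path(r0, a, b, sigma, dt, n_steps, rng):
    """Simulate one mean-reverting rate path; returns n_steps + 1 values."""
    r, path = r0, [r0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)        # Brownian increment
        r += a * (b - r) * dt + sigma * dw    # pull toward long-run mean b
        path.append(r)
    return path

rng = random.Random(42)
# 12 quarterly steps over three years, mirroring the simulation horizon above.
path = vasicek_path(r0=0.05, a=0.8, b=0.03, sigma=0.01, dt=0.25,
                    n_steps=12, rng=rng)
print(len(path), round(path[-1], 4))
```

Mean reversion (the `a * (b - r)` term) keeps simulated rates anchored near a long-run level, which is why this model is a common choice for generating plausible rate scenarios in such agent-based simulations.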

Keywords: Systemic Risk, Financial Contagion, Liquidity Risk, Interbank Market, Network Model.

11 Investigation of Genetic Epidemiology of Metabolic Compromises in ß Thalassemia Minor Mutation: Phenotypic Pleiotropy

Authors: Surajit Debnath, Soma Addya

Abstract:

The human genome is not only the evolutionary summation of all advantageous events, but also houses lesions of deleterious footprints. A single gene mutation may sometimes express multiple consequences in numerous tissues, and a linear relationship between the genotype and the phenotype may often be obscure. ß-thalassemia minor, a transfusion-independent mild anaemia, coupled with environment among other factors, may articulate into phenotypic pleiotropy with hypocholesterolemia, vitamin D deficiency, tissue hypoxia, hyperparathyroidism and psychological alterations. The occurrence of pancreatic insufficiency, resultant steatorrhoea, vitamin D (25-OH) deficiency (13.86 ng/ml) and hypocholesterolemia (85 mg/dl) in a 30-year-old male ß-thalassemia minor patient (hemoglobin 11 g/dl with fetal hemoglobin 2.10%, Hb A2 4.60% and Hb Adult 84.80%, and an altered hemogram), with increased parathyroid hormone (62 pg/ml) and moderate serum Ca+2 (9.5 mg/dl), indicates a cascade of phenotypic pleiotropy in which the ß-thalassemia mutation, be it in the 5’ cap site of the mRNA, differential splicing, etc., in the heterozygous state is affecting several metabolic pathways. Compensatory extramedullary hematopoiesis may not have coped well with the stressful lifestyle of the young individual, and increased erythropoietic stress, with a high demand for cholesterol for RBC membrane synthesis, may have resulted in hypocholesterolemia. Oxidative stress and tissue hypoxia may have caused the pancreatic insufficiency, leading to vitamin D deficiency. This may in turn have caused secondary hyperparathyroidism to sustain the serum calcium level. The irritability and stress intolerance of the patient was a cumulative effect of this vicious cycle of metabolic compromises. From these findings we propose that the metabolic deficiencies in ß-thalassemia mutations may be considered the phenotypic display of pleiotropy, explaining the genetic epidemiology. 
According to the recommendations from the NIH Workshop on Gene-Environment Interplay in Common Complex Diseases: Forging an Integrative Model, the design of observational studies should be informed by gene-environment hypotheses, and the results of a study of genetic diseases should be published to inform future hypotheses. A variety of approaches is needed to capture data on all possible aspects, each of which is likely to contribute to the etiology of disease. Speakers also agreed that there is a need for the development of new statistical methods and measurement tools to appraise information that may be missed by conventional methods, where a large sample size is needed to segregate a considerable effect. A meta-analytic cohort study in the future may bring significant insight into the proposition in the title.

Keywords: Genetic disease, Genetic epidemiology, Heterozygous, Phenotype, Pleiotropy, ß Thalassemia minor, Metabolic compromises.

10 Modeling Ecological Responses of Some Forage Legumes in Iran

Authors: M. Keshavarzi

Abstract:

The grasslands of Iran are undergoing vast desertification and destruction. Some legumes are plants of forage importance with high palatability. The legumes studied in this project are Onobrychis, Medicago sativa (alfalfa) and Trifolium repens. Seeds were cultivated in the research field of Kaboutarabad (33 km east of Isfahan, Iran), with an average of 80 mm annual rainfall. Plants were cultivated in a split-plot design with three replicates and two water treatments (weekly irrigation, and water stress with the same amount applied at 15-day intervals). Water input to each plot was measured with a Parshall flume. The project lasted 20 weeks. Destructive samplings (1 m2 each time) were done weekly. At each sampling, plants were gathered and weighed separately for each vegetative part. An area meter (Vista) was used to measure root surface and leaf area. Total shoot and root fresh and dry weight, leaf area index and soil coverage were evaluated too. Dry weight was obtained after 24 hours in a 75 °C oven. Statgraphics and Harvard Graphics software were used to formulate and display the parameter curves over time. Our results show that Trifolium repens was affected 60% and Medicago sativa 18% by water stress. Onobrychis total fresh weight was reduced by 45%. Dry weight, or biomass, in alfalfa is not greatly affected by water shortage. This means that in alfalfa fields the irrigation amount can be decreased while still obtaining roughly the same biomass. Onobrychis shows a drastic decrease in biomass. The increases in total dry matter over time in the studied plants are formulated. For Trifolium repens, if mowing or cattle grazing of the meadows does not occur at the right time, the palatability and water content of the shoots will decrease. Water stress over a short period can develop the root system of Trifolium repens, but if it lasts longer, other ecological and soil factors will affect the growth of this plant. A low level of soil water is not very important for the studied forage legumes. 
But water shortage affects the palatability and water content of the aerial parts. Leaf area over time in the studied legumes is formulated. In fact, leaf area is decreased by a shortage of available water. A higher leaf area means higher forage and biomass production. Medicago and Onobrychis reach their maximum leaf area sooner than Trifolium and are able to produce an optimum soil cover and inhibit the loss of soil water from meadows. The correlation of root surface to total biomass in the studied plants is formulated. Medicago under water stress shows a 40% decrease in crown cover, while under optimum conditions this amount reaches 100%. In order to produce forage in areas without soil erosion, Medicago is the best choice, even with a shortage of water resources. This work attempts to represent the growth simulation of three important forage legumes. With growth simulation, farmers and range managers could better choose the plant best adapted to the available water without designing time- and labor-consuming field experiments.
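A growth simulation of the kind proposed above is often built on a logistic accumulation curve fitted per species and water treatment. The parameters below (carrying capacity, growth rate, initial biomass) are illustrative assumptions, not fitted values from the trial:

```python
import math

# Sketch of a logistic growth simulation for forage dry matter over the
# 20-week trial. Carrying capacity k, growth rate r and initial biomass w0
# are illustrative assumptions that would be fitted per species/treatment.

def biomass(week, k=600.0, r=0.35, w0=5.0):
    """Logistic dry-matter accumulation (g/m2) at a given week."""
    return k / (1 + (k / w0 - 1) * math.exp(-r * week))

# Sampled every 5 weeks, mirroring periodic destructive sampling.
trajectory = [round(biomass(w), 1) for w in range(0, 21, 5)]
print(trajectory)
```

Fitting k and r separately for the irrigated and stressed plots turns the weekly destructive samples into comparable growth curves, which is the basis for conclusions like alfalfa's biomass being relatively insensitive to water shortage.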

Keywords: Ecological parameters, Medicago, Onobrychis, Trifolium.

9 Engineering Photodynamic with Radioactive Therapeutic Systems for Sustainable Molecular Polarity: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces Luhmann’s autopoietic social systems, starting with the original concept of autopoiesis by biologists and scientists, including the modification of general systems based on socialized medicine. A specific type of autopoietic system is explained in the three existing groups of ecological phenomena: interaction, social and medical sciences. This hypothesis model, nevertheless, has a nonlinear interaction with its natural environment, an ‘interactional cycle’ for the exchange of photon energy with molecules without any changes in topology. The external forces in the system’s environment might be concomitant with the influence of natural fluctuations (e.g. radioactive radiation, electromagnetic waves). The cantilever sensor provides insights for a future chip processor for the prevention of social metabolic systems. Thus, circuits with resonant electric and optical properties are prototyped on board as an intra-chip/inter-chip transmission for producing electromagnetic energy, drawing approximately 1.7 mA at 3.3 V, to service the detection of locomotion with the least significant power losses. Nowadays, therapeutic systems assimilate materials from embryonic stem cells to aggregate multiple functions of the vessels’ natural de-cellular structure for replenishment. Meanwhile, the interior actuators deploy base-pair complementarity of nucleotides for the symmetric arrangement, in particular bacterial nanonetworks of the sequence cycle, creating double-stranded DNA strings. The DNA strands must be sequenced, assembled, and decoded in order to reconstruct the original source reliably. 
The exterior actuators are designed with the ability to sense different variations in the corresponding patterns of beat-to-beat heart rate variability (HRV) for spatial autocorrelation of molecular communication, drawing on human electromagnetic, piezoelectric, electrostatic and electrothermal energy to monitor and transfer the dynamic changes of all the cantilevers simultaneously in a real-time workspace with high precision. A prototype-enabled dynamic energy sensor has been investigated in the laboratory for the inclusion of nanoscale devices in the architecture, with a fuzzy logic control for the detection of thermal and electrostatic changes, and with optoelectronic devices to interpret the uncertainty associated with signal interference. Ultimately, the controversial aspects of molecular frictional properties are adjusted to each other and form unique spatial structure modules, providing the environment’s mutual contribution to the investigation of mass temperature changes due to the pathogenic archival architecture of clusters.

Keywords: Autopoiesis, quantum photonics, portable energy, photonic structure, photodynamic therapeutic system.

8 Structural Analysis of a Composite Wind Turbine Blade

Authors: C. Amer, M. Sahin

Abstract:

The design of an optimised horizontal-axis 5-meter-long wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project intended to fulfil the requirements of high torque efficiency from wind production and to optimise the structural components in the lightest and strongest way possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade using CATIA software to obtain the two surfaces, namely the suction and pressure sides of the blade, in which there is a hat-shaped fibre-reinforced plastic spar beam, the so-called chassis, starting at 0.5 m from the root of the blade, extending up to 4 m, and filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, which has twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabric, a polymeric foam core material, and a steel-balsa wood combination for the root connection parts. The glass fabrics are applied using hand wet lay-up lamination with epoxy resin: METYX L600E10C-0, with unidirectional continuous fibres, and METYX XL800E10F, with a tri-axial architecture with fibres in the 0, +45, -45 degree orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software, with various meshes created on each structural part considering shell elements for all surface geometries, and lumped masses are added to simulate extra adhesive locations. 
For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned bolts, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are applied. Taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis aims to determine the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal mode analysis performed to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The analysis results of this study are then used as a benchmark prior to modal testing, where experiments on the produced wind turbine rotor blade have confirmed the analytical calculations.
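As a rough analytical cross-check for the normal-mode analysis described above, the bending natural frequencies of a uniform cantilever (a crude stand-in for the fixed-free blade) follow the classical Euler-Bernoulli formula. A minimal sketch, where the flexural rigidity EI and mass per unit length are hypothetical placeholders, not the blade's actual data:

```python
import math

def cantilever_frequencies(E, I, rho_A, L, n_modes=3):
    """Analytical bending natural frequencies (Hz) of a uniform
    fixed-free beam (Euler-Bernoulli theory).
    E*I: flexural rigidity, rho_A: mass per unit length, L: span."""
    # Roots of the cantilever characteristic equation cos(x)cosh(x) = -1
    lambdas = [1.8751, 4.6941, 7.8548, 10.9955, 14.1372]
    return [(lam ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho_A * L ** 4))
            for lam in lambdas[:n_modes]]

# Hypothetical effective section properties for a 5 m composite blade
freqs = cantilever_frequencies(E=20e9, I=2e-5, rho_A=6.0, L=5.0)
for i, f in enumerate(freqs, 1):
    print(f"mode {i}: {f:.1f} Hz")
```

A real blade is tapered and twisted, so the FE model remains the authoritative source; the closed-form values only sanity-check the order of magnitude of the first modes.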

Keywords: Dynamic analysis, Fiber Reinforced Composites, Horizontal axis wind turbine blade, Hand-wet layup, Modal Testing.

7 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment

Authors: Asif Ali, Daud Salim Faruquie

Abstract:

With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists, however, have established their knowledge in four areas, viz. environmental, economic, social and cultural, popularly termed the four pillars of sustainability. Aspects of environmental and economic sustainability have been rigorously researched and practiced, and a large volume of strong evidence of effectiveness has been established for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be instituted, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environment; comparators including 'before' and 'after', 'with' and 'without', 'more' and 'less'; and outcomes including social well-being, cultural coexistence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search included major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected.
Data extraction, critical appraisal and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability and family social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
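The pooled-effect calculation mentioned above is conventionally an inverse-variance weighted average of study effect sizes; a minimal fixed-effect sketch, where the three study effects and standard errors are hypothetical, not the review's actual data:

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect (inverse-variance) pooled estimate, as used in a
    basic partial meta-analysis; returns (pooled effect, pooled SE)."""
    weights = [1.0 / se ** 2 for se in ses]          # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized mean differences from three included studies
effects = [0.20, 0.35, 0.10]
ses = [0.10, 0.15, 0.12]
est, se = pooled_effect(effects, ses)
ci = (est - 1.96 * se, est + 1.96 * se)              # 95% confidence interval
print(f"pooled effect {est:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

Each study's point estimate and interval, together with the pooled diamond, is what a forest plot then displays; a random-effects model would additionally account for between-study heterogeneity.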

Keywords: Built environment, cultural sustainability, social sustainability, sustainable architecture.

6 (Anti)Depressant Effects of Non-Steroidal Antiinflammatory Drugs in Mice

Authors: Horia Păunescu

Abstract:

Purpose: The study aimed to assess the depressant or antidepressant effects of several nonsteroidal anti-inflammatory drugs (NSAIDs) in mice: the selective cyclooxygenase-2 (COX-2) inhibitor meloxicam, and the non-selective COX-1 and COX-2 inhibitors lornoxicam, sodium metamizole, and ketorolac. The current literature data regarding such effects of these agents are scarce. Materials and methods: The study was carried out on NMRI mice weighing 20-35 g, kept in a standard laboratory environment. The study was approved by the Ethics Committee of the University of Medicine and Pharmacy "Carol Davila", Bucharest. The study agents were injected intraperitoneally, 10 mL/kg body weight (bw), 1 hour before the assessment of locomotor activity by cage testing (n=10 mice/group) and 2 hours before the forced swimming tests (n=15). The study agents were dissolved in normal saline (meloxicam, sodium metamizole), ethanol 11.8% v/v in normal saline (ketorolac), or water (lornoxicam), respectively. Negative and positive control agents were also given (amitriptyline in the forced swimming test). The cage floor used in the locomotor activity assessment was divided into 20 equal 10 cm squares. The forced swimming test involved partial immersion of the mice in cylinders (15/9 cm height/diameter) filled with water (10 cm depth at 28 °C), where they were left for 6 minutes. The cage endpoint used in the locomotor activity assessment was the number of treaded squares. Four endpoints were used in the forced swimming test (immobility latency for the entire 6 minutes, and immobility, swimming, and climbing scores for the final 4 minutes of the swimming session), recorded by an observer who was blinded to the experimental design. The statistical analysis used the Levene test for variance homogeneity, and ANOVA with post-hoc analysis as appropriate, using the Tukey or Tamhane tests.
Results: No statistically significant increase or decrease in the number of treaded squares was seen in the locomotor activity assessment of any mouse group. In the forced swimming test, amitriptyline showed an antidepressant effect in each experiment at the 10 mg/kg bw dosage. Sodium metamizole was depressant at 100 mg/kg bw (it increased the immobility score, p=0.049, Tamhane test), but not at the lower dosages (25 and 50 mg/kg bw). Ketorolac showed an antidepressant effect at the intermediate dosage of 5 mg/kg bw (it increased the swimming score, p=0.012, Tamhane test), but not at the dosages of 2.5 and 10 mg/kg bw. Meloxicam and lornoxicam did not alter the forced swimming endpoints at any dosage level. Discussion: 1) Certain NSAIDs caused changes in the forced swimming patterns without interfering with locomotion. 2) Sodium metamizole showed a depressant effect, whereas ketorolac proved to be antidepressant. Conclusion: NSAID-induced mood changes are not class effects of these agents and apparently are independent of the type of inhibited cyclooxygenase (COX-1 or COX-2). Disclosure: This paper was co-financed from the European Social Fund, through the Sectorial Operational Programme Human Resources Development 2007-2013, project number POSDRU/159/1.5/S/138907 "Excellence in scientific interdisciplinary research, doctoral and postdoctoral, in the economic, social and medical fields - EXCELIS", coordinator The Bucharest University of Economic Studies.
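The ANOVA step applied to the forced-swim endpoints can be sketched from first principles; the immobility scores below are hypothetical toy values (the real analysis also used Levene's test for variance homogeneity and Tukey/Tamhane post-hoc comparisons):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across independent groups,
    e.g. immobility scores for control vs. NSAID-treated mice."""
    means = [sum(g) / len(g) for g in groups]
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k, N = len(groups), len(all_vals)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical immobility scores: control, low dose, high dose
control, low, high = [1, 2, 3], [2, 3, 4], [5, 6, 7]
F = one_way_anova_F(control, low, high)
print(f"F = {F:.2f}")
```

The F statistic is then compared against the F distribution with (k-1, N-k) degrees of freedom to obtain the p-value; in practice a library routine such as SciPy's `f_oneway` handles both steps.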

Keywords: Antidepressant, depressant, forced swim, NSAIDs.

5 A Study on the Relation among Primary Care Professionals Serving the Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, Jr., James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the United States. The elevated death and disease rates among former slaves were attributable to a lack of quality healthcare. To address the paucity of healthcare services, Meharry Medical College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. Purpose: The social ecological framework and partial least squares (PLS) path modeling were used to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving the disadvantaged community. Thus, the study results could demonstrate the accomplishment of the College's mission of training primary care professionals to serve in underserved areas. Methods: Various statistical methods were used to analyze alumni data from 1975 to 2013. K-means cluster analysis was utilized to assign individual medical and dental graduates to cluster groups of practice communities (disadvantaged or non-disadvantaged communities). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. The independent t-test was performed to detect significant mean differences in the respective clustering and criterion variables. The chi-square test was used to test whether the proportions of primary care and non-primary care specialists are consistent with those of medical and dental graduates practicing in the designated community clusters. Finally, the PLS path model was constructed to explore the construct validity of the analytic model by providing the magnitude of the effects of socioeconomic status and adverse health outcomes on primary care professionals serving the disadvantaged community. Results: Approximately 83% (3,192/3,864) of Meharry Medical College's medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities.
The independent t-test confirmed the content validity of the cluster analysis model. The PLS path modeling also demonstrated that alumni served as primary care professionals in communities with significantly lower socioeconomic status and higher rates of adverse health outcomes (p < .001). The PLS path modeling exhibited a meaningful interrelation between the communities where primary care professionals practice and the surrounding environment (socioeconomic status and adverse health outcomes), which supported the model's reliability, validity, and applicability. Conclusion: This study applied social ecological theory and analytic modeling approaches to assess the attainment of Meharry Medical College's mission of training primary care professionals to serve in underserved areas, particularly in communities with low socioeconomic status and high rates of adverse health outcomes. In summary, the majority of medical and dental graduates from Meharry Medical College provided primary care services to disadvantaged communities with low socioeconomic status and high rates of adverse health outcomes, which demonstrates that Meharry Medical College has fulfilled its mission. The high reliability, validity, and applicability of this model imply that it could be replicated for comparable universities and colleges elsewhere.
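The clustering step described above can be sketched with a minimal k-means routine; the two-dimensional "community indicator" points below are hypothetical, and a production analysis would use a library implementation with proper initialisation:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means sketch: partitions feature vectors (e.g. community
    socioeconomic indicators) into k clusters."""
    # Naive deterministic initialisation: first k points as centres
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centre (squared distance)
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Recompute centres as cluster means (keep old centre if empty)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated hypothetical community groups
pts = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (8.0, 8.0), (8.1, 7.9), (7.9, 8.2)]
centers, clusters = kmeans(pts, k=2)
print(centers)
```

Discriminant analysis then checks how accurately a classifier trained on the cluster labels recovers the partition, which is the validation step the abstract describes.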

Keywords: Disadvantaged Community, K-means Cluster Analysis, PLS Path Modeling, Primary care.

4 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived by scholars and decision makers to be a highly desirable outcome of entrepreneurship. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is threefold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and asset growth. Secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications for research results and recommendations on growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia.
The design and testing stages of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
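The logistic regression step above predicts a binary growth indicator from lagged financial ratios; a minimal gradient-descent sketch, where the two "indicator" features and the toy firm data are hypothetical (a real model would use a statistics library and the Croatian SME register data):

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression for a binary
    growth outcome. Returns weights and intercept."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            err = p - yi                             # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical lagged indicators (e.g. liquidity ratio, profit margin)
X = [(0.2, 0.1), (0.3, 0.2), (0.8, 0.9), (0.9, 0.7), (0.1, 0.3), (0.7, 0.8)]
y = [0, 0, 1, 1, 0, 1]
w, b = fit_logistic(X, y)
print([predict(w, b, xi) for xi in X])
```

Running the same fit against revenue-, employment- and asset-based growth labels is what exposes the paper's point: each label can select a different set of significant predictors.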

Keywords: Growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises.

3 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in describing social systems, starting with the original concept of autopoiesis introduced by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems, for exploiting optical systems and improving photonic devices. The states of topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials that are insulating in the bulk while conducting electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated to the main categories amongst existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesized system, nevertheless, has a nonlinear interaction with its natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology (i.e., chemical transformation into products does not propagate any change or variation in the network topology of the physical configuration). The engineering topology of the biosensor is based on the excitation of surface electromagnetic waves at the boundary of photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium in which surface electromagnetic waves are excited. The use of a photonic band gap film offers sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. The properties of the photonic band gap material can thus be engineered to operate the sensor at any wavelength and to support a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing.
Lastly, the photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, to understand the molecular design structure and to create sensing-chip surfaces with different concentrations of DNA sequences in solution, in order to observe and track the surface mode resonance under the influence of processes that take place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies, which are automated, real-time, reliable, reproducible and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules by refractive index sensing, in antibody-antigen reactions with DNA or protein binding. Ultimately, the frictional properties of the molecules are adjusted to each other in order to form the unique spatial structure and dynamics of biological molecules, providing the environment's mutual contribution to the investigation of changes due to the pathogenic archival architecture of cell clusters.

Keywords: Autopoiesis, engineering topology, photonic system molecular structure, biosensor.

2 The Efficiency of Mechanization in Weed Control in Artificial Regeneration of Oriental Beech (Fagus orientalis Lipsky.)

Authors: Tuğrul Varol, Halil Barış Özel

Abstract:

In this study, conducted in the Akçasu Forest Range District of the Devrek Forest Directorate, 3 methods used in weed control efforts in the regeneration of degraded oriental beech forests (weed control with labourer power, cover removal with a Hitachi F20 excavator, and weed control with agricultural equipment mounted on a Ferguson 240S agricultural tractor) were compared. In this respect, the 3 methods were compared by determining work hours and standard durations for unit areas (1 hectare). For this purpose, by evaluating the tasks performed with human and machine force in terms of duration, productivity and costs, it was aimed to determine the most productive method in accordance with the actual ecological conditions of the research field. Within the scope of the study, time studies were conducted for the 3 methods used in the weed control efforts. While carrying out those studies, the performed operations were evaluated by dividing them into work stages. Actual data were used while calculating the cost accounts; in those calculations, the latest formulas and equations, which are also used in developed countries, were utilized. Analysis of variance (ANOVA) was used to determine whether there is any statistically significant difference among the obtained results, and the Duncan test was used for grouping where a significant difference was found. According to the measurements and findings of this study, it was found during living cover removal in the regeneration of degraded oriental beech forests that removing the weed layer in 1 hectare of field took 920 hours with labourer force, 15.1 hours with the excavator, and 60 hours with the equipment mounted on a tractor.
On the other hand, it was determined that the cost of removing the living cover in a unit area (1 hectare) was 3220.00 TL with labourer power, 1250 TL with the excavator, and 1825 TL with the equipment mounted on a tractor. According to the obtained results, the utilization of the excavator in weed control in the regeneration of degraded oriental beech regions, under the actual ecological conditions of the research field, was found to be more productive in terms of both duration and cost. These determinations should be repeated for weed control efforts in degraded forest fields with different ecological conditions; this is essential for finding the most efficient weed control method. These findings will guide the technical staff of forestry directorates in determining the most effective and economic weed control method. Thus, more accurate data will be used while preparing weed control budgets, and there will be significant contributions to the national economy. The results of this and similar studies are also very important for developing short- and long-term policies for forestry.
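The reported per-hectare figures can be compared directly; a small calculation using the durations and costs stated above (taking manual labour as the baseline):

```python
# Per-hectare figures reported in the study
methods = {
    "labour":    {"hours": 920.0, "cost_TL": 3220.0},
    "excavator": {"hours": 15.1,  "cost_TL": 1250.0},
    "tractor":   {"hours": 60.0,  "cost_TL": 1825.0},
}

baseline = methods["labour"]
for name, m in methods.items():
    speedup = baseline["hours"] / m["hours"]         # times faster than labour
    saving = baseline["cost_TL"] - m["cost_TL"]      # TL saved per hectare
    print(f"{name:9s}: {speedup:5.1f}x faster, saves {saving:7.1f} TL/ha")
```

The excavator comes out roughly 61 times faster than manual labour and about 1970 TL/ha cheaper, which is the basis of the study's conclusion.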

Keywords: Artificial regeneration, weed control, oriental beech, productivity, mechanization, man power, cost analysis.

1 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N.Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work on the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and whole-body counting (WBC) methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in the staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. The cytogenetic analysis of group radiosensitivity among the "professionals" by different criteria (age, sex, ethnic group, epidemiological data) revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used.
Herewith, assuming individual variation of the chromosomal aberration frequency (1-10%), the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results reduces to the different reactions of the subjects to irradiation, i.e. radiosensitivity, which dictates the need for a quantitative definition of this individual reaction and its consideration in the calculation of the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). In contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The distribution of the cohort according to the criterion of radiosensitivity in our research was as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). Herewith, the dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the dose values defined from the results of cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters. Mathematical models relating the received radiation dose to the professionals' radiosensitivity level were proposed.
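Biological dose reconstruction from dicentric yields conventionally inverts a linear-quadratic calibration curve; a minimal sketch, where the calibration coefficients are hypothetical placeholders (the study's own curves and regression equations are not reproduced here):

```python
import math

def dose_from_dicentrics(y, c=0.001, alpha=0.02, beta=0.06):
    """Estimated absorbed dose D (Gy) from an observed dicentric yield y
    per cell, inverting the calibration curve y = c + alpha*D + beta*D**2.
    c: background yield; alpha, beta: hypothetical fit coefficients."""
    disc = alpha ** 2 + 4 * beta * (y - c)
    if disc < 0:
        return 0.0                       # yield below background
    return (-alpha + math.sqrt(disc)) / (2 * beta)

# Dicentric + centric ring frequency of ~0.27% reported for the group
d = dose_from_dicentrics(0.0027)
print(f"estimated dose: {d:.3f} Gy")
```

With real calibration coefficients fitted to the in-vitro gamma-irradiated blood samples, the same inversion yields the per-individual dose estimates that are then compared against the thermoluminescent dosimeter readings.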

Keywords: Biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity.
