Search results for: signal intensity
2342 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces
Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov
Abstract:
Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by large inertia forces; the elastic deformation of the mechanism can also be considerable, which can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account the distributed inertial and concentrated forces. The relations between the intensity of distributed inertia forces and link weight and the geometrical, physical and kinematic characteristics are determined in this work. The distribution laws of inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, in which the loads are found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vector in the calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of elements. Consequently, it leads to a discrete model of the elastic calculation of links of rod mechanisms. The discrete model of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, programs were developed in the MAPLE18 system by means of the proposed technique, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures, with the intensities of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted on the links as functions of the kinematic characteristics of the links.
Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms
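The abstract describes the role of the approximation and compliance matrices without giving their form. As a rough sketch of the general force-method relations implied here, with S_c the vector of internal forces in the calculated cross-sections and D(x) the cross-sectional stiffness matrix (notation ours, not the authors'):

```latex
% Internal forces in an arbitrary cross-section x of an element,
% expressed through the approximation matrix A(x) and the force
% vector S_c in the calculated cross-sections:
S(x) = A(x)\, S_c
% Element compliance matrix obtained from the same approximation
% matrices; D(x) is the cross-sectional stiffness matrix:
C = \int_{0}^{l} A^{\mathsf{T}}(x)\, D^{-1}(x)\, A(x)\, \mathrm{d}x
```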
Procedia PDF Downloads 217
2341 Internal Combustion Engine Fuel Composition Detection by Analysing Vibration Signals Using ANFIS Network
Authors: M. N. Khajavi, S. Nasiri, E. Farokhi, M. R. Bavir
Abstract:
Alcohol fuels are renewable, have low pollution and a high octane number; therefore, they are important as fuels in internal combustion engines. Detecting the percentage of these alcohol fuels in gasoline is a complicated, time-consuming, and expensive process. Nowadays, these analyses are done in equipped laboratories, based on international standards. The aim of this research is to determine the percentage composition of different fuels based on vibration analysis of engine block signals. By doing so, considerable savings in time and cost can be achieved. Five different fuels, consisting of pure gasoline (G) as the base fuel and combinations of this fuel with different percentages of ethanol and methanol, were prepared. For example, the volumetric combination of pure gasoline with 10 percent ethanol is called E10. By this convention, M10 (10% methanol plus 90% pure gasoline), E30 (30% ethanol plus 70% pure gasoline), and M30 (30% methanol plus 70% pure gasoline) were prepared. To simulate real working conditions for this experiment, the vehicle was mounted on a chassis dynamometer and run at 1900 rpm under a 30 kW load. To measure the engine block vibration, a three-axis accelerometer was mounted between cylinders 2 and 3. After acquisition of the vibration signal, eight time-domain features of these signals were used as inputs to an Adaptive Neuro Fuzzy Inference System (ANFIS). The designed ANFIS was trained to classify these five different fuels. The results show a suitable classification ability of the designed ANFIS network, with 96.3 percent correct classification.
Keywords: internal combustion engine, vibration signal, fuel composition, classification, ANFIS
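The eight time-domain features are not named in the abstract; the sketch below computes eight features commonly used for engine-block vibration signals, as an assumption about what the ANFIS input vector might contain.

```python
import numpy as np
from scipy import stats

def time_domain_features(x):
    """Eight common time-domain features of a vibration signal window.
    The paper does not list its eight features; these are typical
    choices for engine vibration analysis."""
    rms = np.sqrt(np.mean(x**2))
    peak = np.max(np.abs(x))
    return np.array([
        np.mean(x),                # mean
        np.std(x),                 # standard deviation
        rms,                       # root mean square
        peak,                      # peak amplitude
        peak / rms,                # crest factor
        stats.skew(x),             # skewness
        stats.kurtosis(x),         # kurtosis
        rms / np.mean(np.abs(x)),  # shape factor
    ])

x = np.random.default_rng(0).normal(size=4096)  # placeholder window
print(time_domain_features(x))
```

Each recorded window would yield one such feature vector, which then serves as the ANFIS input.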
Procedia PDF Downloads 401
2340 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive character for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by excitation of the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, the spectrum profiles of medicine samples were obtained via LIBS. The medicine sample datasets include two different concentrations for both paracetamol-based medicines, namely Aferin and Parafon. The spectral data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits, 70% training – 30% test and 80% training – 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms) were applied to LIBS data of paracetamol-based pharmaceutical samples and their different concentrations, on preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
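A minimal sketch of the preprocessing-PCA-classifier chain described here, using scikit-learn with an SVM (one of the listed algorithms) and 5-fold cross-validation; the array shapes and labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: (n_samples, n_wavelengths) LIBS spectra, y: medicine/concentration
# class labels -- placeholder random data for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2048))
y = rng.integers(0, 4, size=40)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
scores = cross_val_score(model, X, y, cv=5)  # CV guards against overfitting
print(scores.mean())
```

The same pipeline can be re-run on raw versus preprocessed spectra to quantify the effect of preprocessing.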
Procedia PDF Downloads 87
2339 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This task can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation into the modelling process. A total of 8753 patterns of Q, water level, and hourly rainfall (RF) records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for the local authorities for flood control and awareness.
Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
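A minimal sketch of the three-level severity mapping used for early warning; the threshold values are site-specific parameters, not values given in the abstract.

```python
def severity_level(q_pred, alert, warning, danger):
    """Map a predicted hourly stream flow (e.g., in m3/s) to a severity
    level. The thresholds are site-specific and must be calibrated for
    the basin; the values in the demo call below are illustrative."""
    if q_pred >= danger:
        return "danger"
    if q_pred >= warning:
        return "warning"
    if q_pred >= alert:
        return "alert"
    return "normal"

print(severity_level(180.0, alert=100.0, warning=150.0, danger=200.0))
```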
Procedia PDF Downloads 248
2338 Computational Fluid Dynamics Modeling of Flow Properties Fluctuations in Slug-Churn Flow through Pipe Elbow
Authors: Nkemjika Chinenye-Kanu, Mamdud Hossain, Ghazi Droubi
Abstract:
Prediction of multiphase flow-induced forces, void fraction and pressure is crucial at both the design and operating stages of practical energy and process pipe systems. In this study, transient numerical simulations of upward slug-churn flow through a vertical 90-degree elbow have been conducted. The volume of fluid (VOF) method was used to model the two-phase flow, while the k-epsilon Reynolds-Averaged Navier-Stokes (RANS) equations were used to model turbulence in the flow. The simulation results were validated against experimental results. The void fraction signal, peak frequency and maximum magnitude of the void fraction fluctuation of the slug-churn flow validation case studies compared well with the experimental results. The x- and y-direction force fluctuation signals at the elbow control volume were obtained by carrying out force balance calculations using the time-domain signals of the flow properties extracted directly through the control volume in the numerical simulation. The computed force signals compared well with experiment for the slug and churn flow validation case studies. Hence, the present numerical simulation technique was able to predict the behaviour of the one-way flow-induced forces and void fraction fluctuations.
Keywords: computational fluid dynamics, flow induced vibration, slug-churn flow, void fraction and force fluctuation
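A minimal sketch of a quasi-steady force balance on the elbow control volume, assuming vertical inflow and horizontal outflow and neglecting gravity and the transient momentum term; the CFD solution is assumed to supply the instantaneous mixture density, flow rate and pressures as time series.

```python
import numpy as np

def elbow_force(rho, Q, p_in, A_in, p_out, A_out):
    """Quasi-steady momentum/pressure load on a vertical 90-degree
    elbow: inflow along +y, outflow along +x. Returns the magnitudes
    of the x and y load components, anchored to the outward flow
    directions; a full treatment would add gravity and the transient
    momentum term."""
    v_in, v_out = Q / A_in, Q / A_out
    m_dot = rho * Q
    f_y = p_in * A_in + m_dot * v_in     # load from the incoming vertical stream
    f_x = p_out * A_out + m_dot * v_out  # load from the outgoing horizontal stream
    return np.array([f_x, f_y])

# Evaluating this per time step on the extracted CFD signals yields the
# force fluctuation time series; illustrative single-instant values:
print(elbow_force(rho=450.0, Q=2.0e-3, p_in=2.1e5, A_in=1.3e-3,
                  p_out=2.0e5, A_out=1.3e-3))
```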
Procedia PDF Downloads 156
2337 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development
Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach
Abstract:
Research projects in Computer Science often involve work on existing signal processing algorithms and the development of improvements to them. Research budgets are usually limited, hence there is limited time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid-development, i.e., 4th-generation, programming language, usually Matlab. Rapid development is a common method in Computer Science research for quickly implementing and testing newly developed algorithms, which is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate the successful executability and performance measurement of these algorithms on a mobile device operating system and processor, particularly on a smartphone. Open mobile systems such as Android are most suitable for this task, which is to be performed as efficiently as possible. Furthermore, efficiently implementing an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device is necessary in cases where the project’s goal statement also includes such a task. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based smartphone Android app and evaluates their feasibility. Accordingly, the feasible methods are tested and a short success report is given for each tested method.
Keywords: SMARTNAVI, smartphone, app, programming languages, rapid development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, emulation, GUI
Procedia PDF Downloads 478
2336 An Adaptive Back-Propagation Network and Kalman Filter Based Multi-Sensor Fusion Method for Train Location System
Authors: Yu-ding Du, Qi-lian Bao, Nassim Bessaad, Lin Liu
Abstract:
The Global Navigation Satellite System (GNSS) is regarded as an effective approach for replacing the large number of track-side balises used in modern train localization systems. This paper describes a method based on the data fusion of a GNSS receiver sensor and an odometer sensor that can significantly improve the positioning accuracy. A digital track map is needed as another sensor to project the two-dimensional GNSS position to a one-dimensional along-track distance, due to the fact that the train’s position can only be constrained to the track. A model trained by a back-propagation (BP) neural network is used to estimate the trend positioning error, which is related to the specific location and the proximate processing of the digital track map. Considering that in some conditions satellite signal failure will lead to an increase in the GNSS positioning error, a detection step for the GNSS signal is applied. An adaptive weighted fusion algorithm is presented to reduce the standard deviation of the train speed measurement. Finally, an Extended Kalman Filter (EKF) is used for the fusion of the projected 1-D GNSS positioning data and the 1-D train speed data to obtain the position estimate. Experimental results suggest that the proposed method performs well and can reduce the positioning error notably.
Keywords: multi-sensor data fusion, train positioning, GNSS, odometer, digital track map, map matching, BP neural network, adaptive weighted fusion, Kalman filter
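A minimal sketch of one predict/update cycle of the fusion filter, with state [along-track distance, speed] fed by the map-projected GNSS position and the odometer speed. Since both projected measurements are linear in this reduced state, a linear Kalman step is shown; the paper's EKF and its noise settings are not reproduced, and all noise values here are illustrative.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.1, r_pos=25.0, r_vel=0.04):
    """One predict/update cycle fusing the 1-D map-projected GNSS
    position and the odometer speed. State x = [distance, speed];
    z = [gnss_distance, odometer_speed]. Noise values are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])  # process noise
    H = np.eye(2)
    R = np.diag([r_pos, r_vel])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 20.0]), np.eye(2) * 100.0
x, P = kf_step(x, P, z=np.array([21.0, 20.3]), dt=1.0)
print(x)
```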
Procedia PDF Downloads 252
2335 Numerical Simulation of Laser Propagation through Turbulent Atmosphere Using Zernike Polynomials
Authors: Mohammad Moradi
Abstract:
In this article, the propagation of a laser beam through a turbulent atmosphere is evaluated. First, the laser beam is simulated, and then the turbulent atmosphere is simulated using Zernike polynomials. Parameters such as the intensity and the point spread function (PSF) are measured for four wavelengths at different values of the refractive index structure constant Cn².
Keywords: laser beam propagation, phase screen, turbulent atmosphere, Zernike polynomials
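A minimal sketch of building a phase screen from a few low-order Zernike modes; the coefficients are arbitrary here, whereas in practice they would be drawn from Kolmogorov statistics determined by Cn².

```python
import numpy as np

def zernike_phase_screen(n_pix=256, coeffs=(0.3, -0.2, 0.5, 0.1)):
    """Turbulent phase screen as a weighted sum of low-order Zernike
    modes (tip, tilt, defocus, astigmatism), Noll-normalized, on the
    unit pupil. Coefficients are illustrative placeholders."""
    y, x = np.mgrid[-1:1:1j * n_pix, -1:1:1j * n_pix]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    modes = [
        2 * r * np.cos(t),                  # Z2: tip
        2 * r * np.sin(t),                  # Z3: tilt
        np.sqrt(3) * (2 * r**2 - 1),        # Z4: defocus
        np.sqrt(6) * r**2 * np.sin(2 * t),  # Z5: astigmatism
    ]
    screen = sum(c * m for c, m in zip(coeffs, modes))
    screen[r > 1] = 0.0                     # confine to the unit pupil
    return screen

screen = zernike_phase_screen()
```

The screen would then be applied to the simulated beam field as a phase factor, e.g. field * np.exp(1j * screen), before propagation.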
Procedia PDF Downloads 511
2334 Effectiveness of High-Intensity Interval Training in Overweight Individuals between 25-45 Years of Age Registered in Sports Medicine Clinic, General Hospital Kalutara
Authors: Dimuthu Manage
Abstract:
Introduction: The prevalence of obesity and obesity-related non-communicable diseases is becoming a massive health concern in the whole world. Physical activity is recognized as an effective solution to this matter. Published data on the effectiveness of High-Intensity Interval Training (HIIT) in improving health parameters in overweight and obese individuals in Sri Lanka are sparse; hence this study was conducted. Methodology: This is a quasi-experimental study conducted at the Sports Medicine Clinic, General Hospital, Kalutara. Participants engaged in a programme of HIIT three times per week for six weeks. Data collection was based on precise measurements using structured and validated methods. Ethical clearance was obtained. Results: Forty-eight participants registered for the study, and only 52% completed it. The mean age was 32 (SD = 6.397) years, with 64% males. All the anthropometric measurements assessed (i.e., waist circumference (P<0.001), weight (P<0.001) and BMI (P<0.001)), body fat percentage (P<0.001), VO2 max (P<0.001), and the lipid profile (i.e., HDL (P=0.016), LDL (P<0.001), cholesterol (P<0.001), triglycerides (P<0.010) and LDL:HDL (P<0.001)) showed statistically significant improvement after the intervention with the HIIT programme. Conclusions: This study confirms HIIT as a time-saving and effective exercise method, which helps in preventing obesity as well as non-communicable diseases. HIIT markedly ameliorates body anthropometry, fat percentage, cardiopulmonary status, and lipid profile in overweight and obese individuals. As with the majority of studies, the design of the current study is subject to some limitations. The first is that the study used a correlational design; a comparative design, contrasting HIIT with other training programs, would have given more validity. Although validated tools were used to measure the variables, and the same tools were used on pre- and post-exercise occasions with the available facilities, it would have been better to measure some of them using gold-standard methods. However, this evidence should be further assessed in larger-scale trials using comparative groups to generalize the efficacy of the HIIT exercise program.
Keywords: HIIT, lipid profile, BMI, VO2 max
Procedia PDF Downloads 64
2333 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi
Abstract:
Noise is one of the most important and challenging factors in medical imaging. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients, in such a way that an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. For this reason, noise reduction techniques aimed at enhancing image quality without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transformations, Curvelet and Contourlet, and discrete wavelet transform (DWT) thresholding methods, BayesShrink and AdaptShrink, compared to each other. We also proposed a new threshold in the wavelet domain for not only noise reduction but also edge retention; consequently, the proposed method preserves the modified coefficients significantly, which results in good visual quality. Data evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
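A minimal sketch of the BayesShrink baseline the paper compares against, using PyWavelets: the noise level is estimated from the finest diagonal subband, and each detail subband is soft-thresholded with its own data-adaptive threshold.

```python
import numpy as np
import pywt

def bayes_shrink(img, wavelet="db8", level=3):
    """Soft-threshold wavelet denoising with the BayesShrink rule.
    A sketch of the standard method, not the paper's new threshold."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise std estimated from the finest diagonal subband (robust MAD)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for details in coeffs[1:]:
        shrunk = []
        for d in details:
            sigma_x = np.sqrt(max(d.var() - sigma**2, 1e-12))
            t = sigma**2 / sigma_x  # BayesShrink subband threshold
            shrunk.append(pywt.threshold(d, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

denoised = bayes_shrink(np.random.rand(128, 128))  # placeholder image
```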
Procedia PDF Downloads 441
2332 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data
Authors: M. Kharrat, G. Moreau, Z. Aboura
Abstract:
The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The AE technique is a well-established tool for discriminating between damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was deployed in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for discriminating between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is useful for constructing a supervised classifier that can be used for automatic recognition of the AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold for the new-generation materials. The damage mechanisms around this threshold were highlighted, and the obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and little disturbed. The supervised classifier constructed from the learning database was able to predict the labels of the classified signals.
Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition
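A minimal sketch of the unsupervised clustering stage, assuming per-hit AE feature vectors (amplitude, energy, counts, duration, rise time, peak frequency); the cluster count is chosen with a silhouette criterion, one common way to cluster "without a priori knowledge". The feature matrix below is a placeholder.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# X: one row per AE hit, one column per extracted feature (placeholder)
rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(rng.normal(size=(500, 6)))

# Pick the cluster count that maximizes the silhouette score
best_k = max(
    range(2, 7),
    key=lambda k: silhouette_score(
        X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)),
)
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
```

The labels obtained this way, cross-checked against the other instruments, would then form the learning database for the supervised classifier.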
Procedia PDF Downloads 155
2331 Nanowire Substrate to Control Differentiation of Mesenchymal Stem Cells
Authors: Ainur Sharip, Jose E. Perez, Nouf Alsharif, Aldo I. M. Bandeas, Enzo D. Fabrizio, Timothy Ravasi, Jasmeen S. Merzaban, Jürgen Kosel
Abstract:
Bone marrow-derived human mesenchymal stem cells (MSCs) are attractive candidates for tissue engineering and regenerative medicine due to their ability to differentiate into osteoblasts, chondrocytes or adipocytes. Differentiation is influenced by biochemical and biophysical stimuli provided by the microenvironment of the cell. Thus, altering the mechanical characteristics of a cell culture scaffold can directly influence a cell’s microenvironment and lead to stem cell differentiation. Mesenchymal stem cells were cultured on densely packed, vertically aligned magnetic iron nanowires (NWs), and the effects of the NWs on cytoskeleton rearrangement and differentiation were studied. An electrochemical deposition method was employed to fabricate the NWs in nanoporous alumina templates, followed by a partial release to reveal the NW array. This created a cell growth substrate with free-standing NWs. The Fe NWs possessed a length of 2-3 µm, with each NW having a diameter of 33 nm on average. Mechanical stimuli generated by the physical movement of these iron NWs in response to a magnetic field can stimulate osteogenic differentiation. Induction of osteogenesis was estimated using an osteogenic marker, osteopontin, and the reduction of the stem cell markers CD73 and CD105. MSCs were grown on the NWs, and fluorescence microscopy was employed to monitor the expression of the markers. A magnetic field with an intensity of 250 mT and a frequency of 0.1 Hz was applied for 12 hours/day over periods of one week and two weeks. The magnetically activated substrate enhanced the osteogenic differentiation of the MSCs compared to the culture conditions without a magnetic field. Quantification of the osteopontin signal revealed approximately a seven-fold increase in the expression of this protein after two weeks of culture. Immunostaining against CD73 and CD105 revealed the expression of these markers at the earlier time point (two days) and a considerable reduction after one week of exposure to the magnetic field. Overall, these results demonstrate the application of a magnetic NW substrate in stimulating the osteogenic differentiation of MSCs. This method significantly decreases the time needed to induce osteogenic differentiation compared to commercial biochemical methods, such as osteogenic differentiation kits, which usually require more than two weeks. Contact-free stimulation of MSC differentiation using a magnetic field has potential uses in tissue engineering, regenerative medicine, and bone formation therapies.
Keywords: cell substrate, magnetic nanowire, mesenchymal stem cell, stem cell differentiation
Procedia PDF Downloads 196
2330 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
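The Charbonnier loss named in the abstract has a standard closed form; a minimal PyTorch sketch, with eps controlling the transition between L2-like and L1-like behaviour:

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier (smooth L1) loss: behaves like L2 for small errors
    and like L1 for large ones, which helps preserve fine detail
    without exploding gradients on outliers."""
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

x = torch.randn(2, 1, 64, 64)  # placeholder generator output
y = torch.randn(2, 1, 64, 64)  # placeholder ground-truth slice
print(charbonnier_loss(x, y))
```

In the described setup this term would be summed with the adversarial (Wasserstein) and VGG16 perceptual terms to form the generator objective; the weighting between them is not given in the abstract.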
Procedia PDF Downloads 162
2329 Calculation of Instrumental Results of the Tohoku Earthquake, Japan (Mw 9.0) on March 11, 2011 and Other Destructive Earthquakes during Seismic Hazard Assessment
Authors: J. K. Karapetyan
Abstract:
In this paper, a seismological-statistical analysis of actual instrumental data on the main shock of the Great Japan earthquake of 11 March 2011 is carried out to find the dependence between the maximum values of peak ground acceleration (PGA) and epicentral distance. A number of peculiarities in the maximum acceleration values at long epicentral distances are revealed that do not correspond to current seismic intensity scales.
Keywords: earthquakes, instrumental records, seismic hazard, Japan
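One common way to express such a dependence is a log-log attenuation fit, ln PGA = a + b ln R; the sketch below fits this form to placeholder (distance, PGA) pairs. Both the functional form and the numbers are assumptions, not the paper's data.

```python
import numpy as np

# Illustrative (epicentral distance, PGA) pairs; real data would come
# from the strong-motion records of the main shock.
R = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # km
pga = np.array([300.0, 180.0, 90.0, 40.0, 15.0])   # cm/s^2

b, a = np.polyfit(np.log(R), np.log(pga), 1)       # ln PGA = a + b ln R
print(f"PGA ~ {np.exp(a):.1f} * R^{b:.2f}")
```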
Procedia PDF Downloads 365
2328 The Effects of Ultrasound on the Extraction of Ficus deltoidea Leaves
Authors: Nur Aimi Syairah Mohd Abdul Alim, Azilah Ajit, A. Z. Sulaiman
Abstract:
The present study aimed to investigate the effects of ultrasound-assisted extraction (UAE) on the extraction of vitexin and isovitexin from Ficus deltoidea plants. In recent years, ultrasound technology has been found to be a promising herbal extraction technique. The passage of ultrasound energy through a liquid medium generates mechanical agitation and other physical effects due to acoustic cavitation. The main goal is to optimise the ultrasound-assisted extraction conditions to provide the highest extraction yield with the most desirable antioxidant activity and stability. Thus, a series of experiments was developed to investigate the effect of ultrasound energy on the vegetal material and the implemented parameters, using HPLC with photodiode array detection. The influence of several experimental parameters on the ultrasonic extraction of Ficus deltoidea leaves was investigated: extraction time (1-8 h), solvent-to-water ratio (1:10 to 1:50), temperature (50–100 °C), duty cycle (10% to continuous sonication) and intensity. The extracts obtained at the optimized conditions were compared with those obtained by conventional boiling extraction, in terms of bioactive constituent yield and chemical composition. The compounds of interest identified in the extracts were vitexin and isovitexin, which possess anti-diabetic, anti-oxidant and anti-cancer properties. Results showed that the main variables affecting the extraction process were temperature and time. Though to a lesser extent, the solvent-to-water ratio, duty cycle and intensity were also demonstrated to be important parameters. The experimental values under optimal conditions were in good agreement with the predicted values, which suggests that ultrasound-assisted extraction (UAE) is a more efficient process than conventional boiling extraction. It is recommended that ultrasound extraction of Ficus deltoidea plants replace the traditional time-consuming and low-efficiency preparation procedure in the future modernized and commercialized manufacture of this highly valuable herbal medicine.
Keywords: Ficus, ultrasounds, vitexin, isovitexin
Procedia PDF Downloads 416
2327 A Grid Synchronization Method Based On Adaptive Notch Filter for SPV System with Modified MPPT
Authors: Priyanka Chaudhary, M. Rizwan
Abstract:
This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, along with MPPT (Maximum Power Point Tracking) techniques. An efficient grid synchronization technique offers proficient detection of the various components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. A reference phase signal synchronized with the grid voltage is provided by the grid synchronization technique to bring the system in line with grid codes and power quality standards. Hence, the grid synchronization unit plays an important role in grid-connected SPV systems. As the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature, and wind, MPPT control is required to track the maximum power point of the PV array in order to maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable-step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter has been used in the first stage of the system. This algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e., zone 0, zone 1 and zone 2. A fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain a high tracking speed. Furthermore, an adaptive notch filter based control technique is proposed for the VSC in the PV generation system. The adaptive notch filter (ANF) approach is used to synchronize the interfaced PV system with the grid, to maintain the amplitude, phase and frequency parameters, and to improve power quality. This technique offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. To maintain a constant DC link voltage, a PI controller is also implemented and presented in this paper. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance analysis of the three-phase grid-connected solar photovoltaic system has been carried out on the basis of various parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current, and the power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
Keywords: solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique
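A minimal sketch of one iteration of the variable-step P&O logic described here: a fine step near the maximum power point (zone 0, where |dPpv/dVpv| is small) and a coarse step farther away (zones 1 and 2). The zone boundary and step sizes are illustrative, not the paper's values.

```python
def perturb_observe(v, p, v_prev, p_prev,
                    step_fine=0.1, step_coarse=1.0, zone0=0.5):
    """One iteration of variable-step P&O MPPT. Returns the next
    reference PV voltage. Near the MPP (|dP/dV| < zone0) a fine step
    is taken; in zones 1 and 2 a coarse step speeds up tracking."""
    dp, dv = p - p_prev, v - v_prev
    if dv == 0:
        return v + step_fine          # force a perturbation
    slope = dp / dv                   # local dPpv/dVpv
    step = step_fine if abs(slope) < zone0 else step_coarse
    # Climb the P-V curve: move toward increasing power
    return v + step if slope > 0 else v - step

v_ref = perturb_observe(v=30.2, p=215.0, v_prev=30.0, p_prev=212.0)
print(v_ref)
```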
Procedia PDF Downloads 594
2326 Technical and Practical Aspects of Sizing an Autonomous PV System
Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba
Abstract:
The use of photovoltaic energy offers not only an inexhaustible supply of energy but also a clean and non-polluting energy, which is a definite advantage. The geographical location of Algeria promotes the development of the use of this energy, given the intensity of the radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for the calculation and optimization of the sizing of photovoltaic installations. Our optimization approach is based on mathematical models which, among other things, describe the operation of each part of the installation, the energy production, the storage and the consumption of energy.
Keywords: solar panel, solar radiation, inverter, optimization
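A minimal sketch of the classical worst-month sizing formulas such a tool typically builds on; the efficiency, battery voltage and depth-of-discharge values are typical assumptions, not figures from the paper.

```python
def size_pv_system(daily_load_wh, psh, autonomy_days,
                   sys_eff=0.75, batt_v=48.0, dod=0.6):
    """First-pass sizing of an autonomous PV system.
    daily_load_wh: daily energy demand (Wh); psh: peak sun hours of
    the worst month; autonomy_days: days of storage without sun.
    Returns (array peak power in Wp, battery capacity in Ah)."""
    array_wp = daily_load_wh / (psh * sys_eff)
    batt_ah = daily_load_wh * autonomy_days / (batt_v * dod)
    return array_wp, batt_ah

print(size_pv_system(daily_load_wh=5000, psh=5.5, autonomy_days=2))
```

An optimization layer would then search over module and battery combinations satisfying these constraints at minimum cost.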
Procedia PDF Downloads 608
2325 Eco-Biological Study of Artemia salina (Branchiopoda, Anostraca) in Sahline Salt Lake, Tunisia
Authors: Khalil Trigui, Rafik Ben Said, Fourat Akrout, Neji Aloui
Abstract:
In this study, we examined, in the first part, the eco-biology of Artemia (A. salina) collected from Sahline Salt Lake (governorate of Monastir, Tunisia) during an annual cycle. The correlations between environmental factors and some biological parameters of Artemia were determined. The results obtained showed that the environmental factors affected the biology of Artemia. The highest abundance was recorded in May (550 ± 2.16 ind/l), and all life history stages existed with different seasonal proportions. The Artemia population is bisexual, with ovoviviparous reproduction at the beginning and oviparous reproduction at the end of the life cycle. We also recorded the dominance of males at the start and of females at the end of the cycle. During the whole study period, the size of mature females was bigger than that of males. The fertility obtained resulted in a significant production of cysts compared to nauplii. A negative correlation with a highly significant effect was deduced between the environmental factors (temperature and salinity) and the production of nauplii (ovoviviparity), in contrast with dissolved oxygen. The second part of our work is devoted to mastering the breeding of Artemia. For this, we tested the effect of five external factors (temperature, salinity, dissolved oxygen, light intensity and food) on the survival of this crustacean. The survival rates of Artemia were thereby affected by the different values of the studied factors. The recorded results showed that Artemia salina has an optimum temperature range of 25 to 27 °C, with a survival rate ranging from 84 to 88%. The optimal salinity for breeding Artemia salina was 37 psu (62 ± 0.23%). Nevertheless, this crustacean was able to survive and withstand a salinity of 0 psu (freshwater). The optimum concentration of dissolved oxygen was 7 mg/l, with a survival rate of 87.11 ± 0.04%. An optimum light intensity of 10 lux gave a survival rate equal to 85.33 ± 0.01%. The results also showed that the micro-alga preferred by Artemia is Dunaliella salina, with a maximum survival rate of the order of 80 ± 0.15%. All tested parameters had a significant effect on the survival of the reared Artemia except the nature of the food.
Keywords: Artemia salina, biology, breeding, ecology, Sahline salt lake
Procedia PDF Downloads 359
2324 A Randomized, Controlled Trial to Test Habit Formation Theory for Low Intensity Physical Exercise Promotion in Older Adults
Authors: Patrick Louie Robles, Jerry Suls, Ciaran Friel, Mark Butler, Samantha Gordon, Frank Vicari, Joan Duer-Hefele, Karina W. Davidson
Abstract:
Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low, despite the fact that scientific evidence finds increasing physical activity to be positively associated with health benefits. Behavior change techniques (BCTs) have demonstrated some effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a personalized trials (N-of-1) design, delivered virtually, to evaluate the efficacy of using five BCTs to increase low-intensity physical activity (by 2,000 steps of walking per day) in adults aged 45-75 years. The five BCTs described in habit formation theory are goal setting, action planning, rehearsal, rehearsal in a consistent context, and self-monitoring. The study recruited health system employees in the target age range who had no mobility restrictions and expressed interest in increasing their daily activity by a minimum of 2,000 steps per day at least five days per week. Participants were sent a Fitbit Charge 4 fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. Participants then engaged remotely with a clinical research coordinator to establish a “walking plan” that included a time and day interval (e.g., between 7 am and 8 am on Monday-Friday), a location for the walk (e.g., a park), and how much time the plan would need to achieve a minimum of 2,000 steps over their baseline average step count (20 minutes). All elements of the walking plan were required to remain consistent throughout the study. In the 10-week intervention phase of the study, participants received all five BCTs in a single, time-sensitive text message. The text message was delivered 30 minutes prior to the established walk time and signaled participants to begin walking when the context (i.e., day of the week, time of day) they pre-selected was encountered. Participants were asked to log both the start and the conclusion of their activity session by pressing a button on the Fitbit tracker. Within 30 minutes of the planned conclusion of the activity session, participants received a text message with a link to a secure survey. Here, they noted whether they engaged in the BCTs when prompted and completed an automaticity survey to identify how “automatic” their walking behavior had become. At the end of their trial, participants received a personalized summary of their step data over time, helping them learn more about their responses to the five BCTs. Whether the use of these five ‘habit formation’ BCTs in combination elicits a change in physical activity behavior among older adults will be reported. This study will inform the feasibility of a virtually delivered N-of-1 study design to effectively promote physical activity as a component of healthy aging.
Keywords: aging, exercise, habit, walking
Procedia PDF Downloads 138
2323 The Formation of Mutual Understanding in Conversation: An Embodied Approach
Authors: Haruo Okabayashi
Abstract:
Mutual understanding in conversation is very important for human relations. This study investigates the mental function of the formation of mutual understanding between two people in conversation using the embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported “I did not understand my partner” show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner’s rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both largest Lyapunov exponents (LLEs) tend to resonate, too. In human history, in order for human beings, as weak mammals, to live, they may have needed to be with others; that is, they have developed resonating characteristics, which is called self-organization. However, the resonant feature sometimes collapses, depending on the lifestyle the person has formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to resonate their biological signal waves with others’. These people have features such as anxiety, fatigue, and a tendency to confusion. Mutual understanding is thought to be formed as a result of cooperation between the features of self-organization of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. Through this research, it is found that the formation of mutual understanding is expressed by the rhythm of a biological signal showing a nonlinear relationship.
Keywords: embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon
Procedia PDF Downloads 266
2322 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording the uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the difference in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, the EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including the root mean square and zero-crossing number, spectral parameters including the peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregressive (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for the recognition of the two groups of recordings was provided. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed a significant difference between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term-labor and preterm-labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, the AR coefficients, the maximal Lyapunov exponent and the sample entropy being among the prime candidates. Even if these methods are not yet useful for clinical practice, they do provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
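A minimal sketch of three of the compared features (RMS, zero-crossing count, and median frequency from the Welch PSD); the 20 Hz sampling rate is an assumption based on the public EHG database, not stated in the abstract.

```python
import numpy as np
from scipy.signal import welch

def ehg_features(x, fs=20.0):
    """RMS, zero-crossing count, and median frequency of a filtered
    EHG segment. fs = 20 Hz is assumed for the public database."""
    rms = np.sqrt(np.mean(x**2))
    zero_crossings = int(np.sum(np.diff(np.signbit(x))))
    f, pxx = welch(x, fs=fs, nperseg=min(1024, len(x)))
    cum = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum, cum[-1] / 2)]  # half-power point
    return rms, zero_crossings, median_freq

x = np.random.default_rng(0).normal(size=4000)  # placeholder segment
print(ehg_features(x))
```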
Procedia PDF Downloads 571
2321 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity of GBMs and the resulting MRI features complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation was explored which takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas comprised vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. The individual compartments’ T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images. T2 values based on the simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding scheme of the extracellular space had T2 values similar to the values calculated from the literature. The static scheme of the extracellular space had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to the literature values. Expanding the extracellular space is necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
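The multi-compartmental signal equation is not spelled out in the abstract; a minimal sketch of the generic volume-weighted form, S(TE) = sum_i v_i * exp(-TE / T2_i), with illustrative compartment T2 values (not the paper's):

```python
import numpy as np

def t2w_signal(volumes, t2s, te=100.0):
    """Multi-compartment T2-weighted signal: a volume-weighted sum of
    mono-exponential decays. volumes: compartment volume fractions
    (normalized to sum to 1); t2s: compartment T2 values in ms;
    te: echo time in ms."""
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()
    return float(np.sum(volumes * np.exp(-te / np.asarray(t2s, float))))

# Illustrative voxel: white matter, tumor cells, edema fluid
print(t2w_signal(volumes=[0.5, 0.2, 0.3], t2s=[80.0, 95.0, 1000.0]))
```

Evaluating this per voxel over the simulated tumor maps yields the synthetic T2/FLAIR image from which the T2 maps are fitted.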
Procedia PDF Downloads 235
2320 Leaf Image Processing: Review
Authors: T. Vijayashree, A. Gopal
Abstract:
The aim of the work is to classify and authenticate medicinal plant materials and herbs widely used in Indian herbal medicinal preparations. The quality and authenticity of these raw materials must be ensured for the preparation of herbal medicines. These raw materials have to be carefully screened, analyzed and documented, due to the risk of mistaking them for look-alike materials which do not have medicinal characteristics.
Keywords: authenticity, standardization, principal component analysis, image processing, signal processing
Procedia PDF Downloads 246
2319 A Diurnal Light Based CO₂ Elevation Strategy for Up-Scaling Chlorella sp. Production by Minimizing Oxygen Accumulation
Authors: Venkateswara R. Naira, Debasish Das, Soumen K. Maiti
Abstract:
Achieving high cell densities of microalgae under the obligatory light-limiting and high-light conditions of diurnal (low-high-low variations of daylight intensity) sunlight is further limited by CO₂ supply and dissolved oxygen (DO) accumulation in large-scale photobioreactors. High DO levels cause low growth due to photoinhibition and/or photorespiration. Hence, scalable elevated CO₂ levels (% in air) and their effect on DO accumulation in a 10 L cylindrical membrane photobioreactor (a vertical tubular type) are studied in the present work. Biomass-based, pH-control-based (types II & I) and diurnal-light-based CO₂ elevation strategies were explored to study the growth of Chlorella sp. FC2 IITG under single-sided LED lighting in the laboratory, mimicking diurnal sunlight. All the experiments were conducted in fed-batch mode by maintaining the N and P sources at no less than 50% of the initial concentrations of the optimized BG-11 medium. It was observed that the biomass-based strategy (2% - 1st day, 2.5% - 2nd day and 3% - thereafter) and the well-known pH-control-based type-I strategy (pH 5.8 throughout) were lethal for FC2 growth. In both strategies, the highest peak DO accumulation of 150% air saturation resulted from the high photosynthetic activity caused by the higher CO₂ levels. In the pH-control-based type-I strategy, the CO₂ levels automatically resulting from pH control were recorded to be very high, beyond the inhibition range (5%). However, the pH-control-based type-II strategy (5.8 – 2 days, 6.3 – 3 days, 6.7 – thereafter) showed a final biomass titer of up to 4.45 ± 0.05 g L⁻¹, with a peak DO of 122% air saturation; high CO₂ levels beyond 5% (in air) were recorded thereafter. Thus, it proved sustainable for obtaining high biomass. Finally, a diurnal-light-based strategy (2% - low light, 2.5% - medium light and 3% - high light) was applied, on the basis of the increase/decrease in photosynthesis with the increase/decrease in diurnal light intensity. It resulted in a maximum final biomass titer of 5.33 ± 0.12 g L⁻¹, with a total biomass productivity of 0.59 ± 0.01 g L⁻¹ day⁻¹. These values are remarkably higher than those at a constant 2% CO₂ level (final biomass titer: 4.26 ± 0.09 g L⁻¹; biomass productivity: 0.27 ± 0.005 g L⁻¹ day⁻¹). However, a peak DO of 135% air saturation was observed. Thus, the diurnal-light-based elevation should be further improved by using CO₂-enriched N₂ instead of air. To the best of our knowledge, the light-based CO₂ elevation strategy has not been reported elsewhere.
Keywords: Chlorella sp., CO₂ elevation strategy, dissolved oxygen accumulation, diurnal light based CO₂ elevation, high cell density, microalgae, scale-up
Procedia PDF Downloads 125
2318 Satellite Based Assessment of Urban Heat Island Effects on Major Cities of Pakistan
Authors: Saad Bin Ismail, Muhammad Ateeq Qureshi, Rao Muhammad Zahid Khalil
Abstract:
In the last few decades, urbanization worldwide has expanded manifold, which is evident in the growth of urban infrastructure and transportation. The Urban Heat Island (UHI) can induce deterioration of the living environment, disabilities, and rises in energy usage. In this study, the presence of the Surface Urban Heat Island (SUHI) effect in major cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Karachi, Quetta, and Peshawar, has been investigated. Landsat and SPOT satellite images were acquired for the assessment of urban sprawl. The MODIS Land Surface Temperature product MOD11A2 was acquired between 1000-1200 hours (local time) for the assessment of the urban heat island. The urban sprawl results showed that the urban extent of Islamabad and Rawalpindi increased from 240 km² to 624 km² between 2000 and 2016, amounting to approximately 24 km² per year; Lahore by 29 km², about 1.6 km² per year; Karachi by 261 km², about 16 km² per year; Peshawar by 63 km², about 4 km² per year; and Quetta by 76 km², about 5 km² per year. The average Surface Urban Heat Island (SUHI) magnitude from 2000 to 2016 is observed at a scale of 0.63 °C for Islamabad and Rawalpindi, 1.25 °C for Lahore, 1.16 °C for Karachi, 0.89 °C for Quetta, and 1.08 °C for Peshawar. The pixel-based maximum SUHI intensity reaches up to about 11.40 °C for Islamabad and Rawalpindi, 15.66 °C for Lahore, 11.20 °C for Karachi, 14.61 °C for Quetta, and 15.22 °C for Peshawar, from a baseline of zero degrees Centigrade (°C). The overall trend of the SUHI in planned cities (e.g., Islamabad) does not show a significant increase. The spatial and temporal patterns of the SUHI for the selected cities reveal heterogeneity and a unique pattern for each city. It is well recognized that SUHI intensity is modulated by land use/land cover patterns (due to their different surface properties and cooling rates), meteorological conditions, and anthropogenic activities. The study concluded that the selected cities (Islamabad, Rawalpindi, Lahore, Karachi, Quetta, and Peshawar) are examples where dense urban pockets were observed to be about 15 °C warmer than nearby rural areas.
Keywords: urban heat island, surface urban heat island, urbanization, anthropogenic source
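A minimal sketch of the SUHI intensity computation as the difference between mean urban and mean rural land surface temperature (LST) from the MOD11A2 product; the urban/rural mask convention (urban core versus a surrounding rural ring) is a common choice, assumed here.

```python
import numpy as np

def suhi_intensity(lst, urban_mask, rural_mask):
    """SUHI intensity in °C: mean urban LST minus mean rural LST.
    lst: LST raster (°C); masks: boolean arrays of the same shape.
    np.nanmean skips cloud-masked (NaN) pixels."""
    return np.nanmean(lst[urban_mask]) - np.nanmean(lst[rural_mask])

# Tiny illustrative raster: top row urban, bottom row rural
lst = np.array([[34.0, 33.5], [29.8, 30.2]])
urban = np.array([[True, True], [False, False]])
print(suhi_intensity(lst, urban, ~urban))  # ≈ 3.75 °C
```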
Procedia PDF Downloads 322
2317 The Implantable MEMS Blood Pressure Sensor Model With Wireless Powering And Data Transmission
Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov
Abstract:
The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses. Generally, the common symptom is high blood pressure. Long-term blood pressure control is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods, which are based on Korotkoff sounds, cannot be applied frequently or over long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system for decreasing the death rate from cardiovascular diseases. These days, implantable electronic devices have begun to play an important role in medicine. Usually, implantable devices consist of a transmitter, a power supply (which can be wireless, with a specially made battery) and a measurement circuit. Common problems in making implantable devices are the short lifetime of the battery, the large size and biocompatibility. In this work, blood pressure measurement is the focus, because high blood pressure is one of the main symptoms of cardiovascular diseases. Our device consists of three parts: the implantable pressure sensor, an external transmitter and an automated workstation in a hospital. The implantable part of the pressure sensor can be based on piezoresistive or capacitive technologies. Both sensors have advantages and limitations. The developed circuit is based on a small capacitive sensor made with the technology of microelectromechanical systems (MEMS). The capacitive sensor can provide high sensitivity, low power consumption and minimal hysteresis compared to the piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the capacitance of the sensor; hence, from the capacitance one can calculate the pressure. The external device (transmitter) is used for wireless charging and signal transmission. Some implantable devices for these applications are passive: the external device sends a radio-wave signal to the internal LC circuit of the device, receives the signal reflected from the implant, and from the change in frequency it is possible to calculate the change in capacitance and then the blood pressure. However, this method has some disadvantages, such as dependence on patient position and static use. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends information about the blood pressure to a hospital cloud service for analysis by a physician. The doctor’s automated workstation at the hospital also acts as a dashboard, which displays the actual medical data of patients (who require attention) and stores them in the cloud service. Usually, critical heart conditions occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can be used for the ASIC design of the MEMS pressure sensor.
Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit
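A minimal sketch of recovering pressure from the measured oscillation frequency, assuming a 555-style relaxation oscillator with f = 1.44/(R·C) and a linearized capacitance-pressure law C(p) = C0 + s·p; the oscillator model and all component values are illustrative assumptions, not the paper's circuit.

```python
def pressure_from_frequency(f_hz, r_ohm=1.0e6, c0=5.0e-12, s=2.0e-17):
    """Invert the frequency-capacitance and capacitance-pressure laws.
    Assumptions (illustrative): f = 1.44 / (R*C) as in a 555-style
    relaxation oscillator, and C(p) = c0 + s*p with sensitivity s
    in F/Pa. Returns the pressure in Pa."""
    c = 1.44 / (r_ohm * f_hz)  # capacitance from frequency
    return (c - c0) / s        # pressure from capacitance

print(pressure_from_frequency(2.7e5))  # ≈ 1.67e4 Pa, roughly 125 mmHg
```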
Procedia PDF Downloads 589
2316 Content-Aware Image Augmentation for Medical Imaging Applications
Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang
Abstract:
Machine learning based Computer-Aided Diagnosis (CAD) is gaining popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image data. The training images may come from different sources and be acquired on different radiography machines from different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on the original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while others are digitized from analog CXR films. In the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching to normalize the pixel intensities of the digital radiographs based on the pixel intensity values of the digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs from both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm adequately preserves the important information in the lung fields, the local structures, and the global visual effect. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications.
Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving
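To make the pipeline concrete, here is a minimal NumPy/scikit-image sketch of the two steps described above: content-aware shrinking by repeated removal of minimum-energy vertical seams, applied identically to the image and its label mask so anatomy and annotation stay aligned, followed by histogram matching against a reference radiograph. The Sobel energy function and the exact API choices are our assumptions, not the authors' implementation (seam insertion for enlargement is omitted).

```python
import numpy as np
from skimage import exposure, filters

def find_vertical_seam(energy):
    """Dynamic programming: locate the minimum-energy vertical seam."""
    h, w = energy.shape
    cost = energy.astype(float)              # cumulative seam energy
    back = np.zeros((h, w), dtype=np.int64)  # backtracking pointers
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(cost[i - 1, lo:hi]))
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):           # trace seam bottom to top
        seam[i] = back[i + 1, seam[i + 1]]
    return seam

def remove_seam(img, seam):
    """Delete one pixel per row, shrinking the width by one."""
    return np.array([np.delete(row, s) for row, s in zip(img, seam)])

def carve_to_width(img, mask, target_w):
    """Remove low-energy seams from image and label mask together."""
    while img.shape[1] > target_w:
        seam = find_vertical_seam(filters.sobel(img.astype(float)))
        img, mask = remove_seam(img, seam), remove_seam(mask, seam)
    return img, mask

def augment(img, mask, target_w, reference):
    """Content-aware resize, then match intensities to a reference CXR."""
    img, mask = carve_to_width(img, mask, target_w)
    return exposure.match_histograms(img, reference), mask
```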
Procedia PDF Downloads 222
2315 Linking Milk Price and Production Costs with Greenhouse Gas Emissions of Luxembourgish Dairy Farms
Authors: Rocco Lioy, Tom Dusseldorf, Aline Lehnen, Romain Reding
Abstract:
A study of both the profitability and the ecological performance of dairy production in Luxembourg was carried out for the years 2017, 2018 and 2019. Data from 100 dairy farms, covering the greenhouse gas emissions (ecology) and the profitability (economy) of dairy production, were evaluated, and the averages were compared to the corresponding figures of 80 Luxembourgish dairy farms evaluated in the years 2014, 2015 and 2016. The ecological evaluation confirmed that farm efficiency (defined here above all as the lowest ratio between feedstuff used and milk produced) is the key driver for significantly reducing the emission level of dairy farms. In both farm groups and in both periods, the efficient farms show almost the same level of emissions per kg ECM (1.17 kg CO2-eq) as the intensive farms (1.13 kg CO2-eq), and at the same time a far lower level of emissions related to the production surface (9.9 vs. 13.9 t CO2-eq/ha). Concerning the economic performance, in the years 2017, 2018 and 2019 the intensive farms (with intensity defined primarily in terms of milk produced per hectare) reached a higher profit (income minus costs, considering only subsidies) than the efficient farms (4.8 vs. 2.6 €-cent/kg ECM), in contrast to the observation for the years 2014, 2015 and 2016 (1.5 vs. 3.7 €-cent/kg ECM). The most important reason for this divergent behavior was a change in the income and cost structure between the two periods. In the later period (2017-2019), the milk price was considerably higher than in the previous period and the production costs were lower. This favored the intensive farms, which produce the largest quantity of milk with a high amount of production means. In the period 2014-2016, with lower milk prices but comparable production costs, the advantage lay with the efficient farms. In conclusion, we expect the profitability of dairy farming to decrease in the near future, when production costs in particular will presumably be much higher than in recent years. In that case, we assume that efficient farms will deliver not only an ecologically but also an economically better performance than production-intensive farms. High milk prices and low production costs are no good incentives for carbon-smart farming.
Keywords: efficiency, intensity, dairy, emissions, prices, costs
Procedia PDF Downloads 97
2314 Coexistence of Two Different Types of Intermittency near the Boundary of Phase Synchronization in the Presence of Noise
Authors: Olga I. Moskalenko, Maksim O. Zhuravlev, Alexey A. Koronovskii, Alexander E. Hramov
Abstract:
Intermittent behavior near the boundary of phase synchronization in the presence of noise is studied. In a certain range of the coupling parameter and noise intensity, the coexistence of eyelet and ring intermittency (an intermittency of intermittencies) is shown to take place. The main results are illustrated using the example of two unidirectionally coupled Rössler systems. Similar behavior is shown to take place in two unidirectionally coupled hydrodynamic models of the Pierce diode.
Keywords: chaotic oscillators, phase synchronization, noise, intermittency of intermittencies
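A minimal numerical setup matching this description, two unidirectionally coupled Rössler oscillators with additive noise integrated by the Euler-Maruyama scheme, might look as follows. The parameter values (frequency mismatch, coupling strength, noise intensity) are illustrative assumptions, not the authors' exact settings; 2-pi jumps in the unwrapped phase difference are the phase slips whose statistics reveal the intermittency type.

```python
import numpy as np

a, b, c = 0.15, 0.2, 10.0
w1, w2 = 0.99, 0.95          # slight frequency mismatch (assumed)
eps, D = 0.05, 0.01          # coupling strength and noise intensity (assumed)
dt, n = 0.01, 500_000

rng = np.random.default_rng(1)
s = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # x1, y1, z1, x2, y2, z2
dphi = np.empty(n)

def drift(s):
    x1, y1, z1, x2, y2, z2 = s
    return np.array([
        -w1 * y1 - z1,
        w1 * x1 + a * y1,
        b + z1 * (x1 - c),
        -w2 * y2 - z2 + eps * (x1 - x2),  # unidirectional coupling term
        w2 * x2 + a * y2,
        b + z2 * (x2 - c),
    ])

for k in range(n):
    s = s + dt * drift(s)
    s[3] += np.sqrt(2.0 * D * dt) * rng.standard_normal()  # noise on x2 only
    dphi[k] = np.arctan2(s[1], s[0]) - np.arctan2(s[4], s[3])

# Plateaus are phase-locked epochs; 2*pi jumps mark phase slips.
dphi = np.unwrap(dphi)
```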
Procedia PDF Downloads 642
2313 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth studies of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. Vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building's damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, the roof structure, the envelope walls, and the envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models covering building typologies not yet adequately addressed by existing works, whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
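As a rough illustration of combining component-level fragilities into a building loss estimate, the sketch below assigns hypothetical lognormal fragility curves and damage ratios to the four components named above and integrates the expected loss against a toy hazard curve. All numerical values, the independence assumption, and the additive consequence model are illustrative assumptions, not the actual ERMESS method.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical fragilities: (median gust speed in m/s, log-std, damage ratio).
COMPONENTS = {
    "roof covering":     (35.0, 0.30, 0.15),
    "roof structure":    (55.0, 0.25, 0.40),
    "envelope walls":    (60.0, 0.30, 0.30),
    "envelope openings": (40.0, 0.35, 0.15),
}

def expected_loss_ratio(v):
    """Expected building loss ratio at gust speed v, assuming component
    damage is independent given intensity and component losses add up."""
    return sum(dr * lognorm.cdf(v, s=beta, scale=median)
               for median, beta, dr in COMPONENTS.values())

# Toy hazard curve: annual exceedance rate of gust speed v (assumed fit).
v = np.linspace(20.0, 90.0, 200)
lam = 2.0 * np.exp(-v / 12.0)

# Discretized risk integral: expected annual loss ratio is E[L | v]
# weighted by the annual rate of events falling in each speed bin.
el = np.array([expected_loss_ratio(x) for x in v])
eal = np.sum(0.5 * (el[:-1] + el[1:]) * -np.diff(lam))
print(f"expected annual loss ratio: {eal:.4f}")
```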
Procedia PDF Downloads 181