Search results for: Sequential Linear Programming (SLP)
198 Simulation Research of the Aerodynamic Drag of 3D Structures for Individual Transport Vehicle
Authors: Pawel Magryta, Mateusz Paszko
Abstract:
In today's world, individual mobility is a major problem, especially in large urban areas. Commonly used means of mass transport such as buses, trains or cars do not fulfil their tasks, i.e., they are not able to meet the increasing mobility needs of the growing urban population. In addition, the construction of civil infrastructure in cities is constrained. The most common idea nowadays is to transfer part of urban transport to the air; to do this, however, an individual flying transport vehicle needs to be developed. The biggest problem in this concept is the type of propulsion system from which the vehicle will obtain its lifting force. Standard propeller drives appear to be too noisy. One idea is to provide the required take-off and flight power with an innovative ejector system. Such a system is designed through a suitable choice of a three-dimensional geometric structure with a specially shaped nozzle in order to generate overpressure. The authors' idea is to build a device that accumulates the overpressure using a five-sided geometric structure bounded on one side by a blown air jet. To test this hypothesis, a computer simulation study of the aerodynamic drag of such 3D structures was carried out. Based on the results of these studies, tests on a real model were also performed. The final stage of the work was a comparative analysis of the simulation and real test results. The CFD simulations of the air flow were conducted using the Star CD - Star Pro 3.2 software, and the virtual model was designed using Catia v5. Apart from the objective of obtaining an advanced aviation propulsion system, all tests and modifications of the 3D structures also aimed at achieving high efficiency of the device while maintaining its ability to generate high overpressures, which is possible only with a large mass flow rate of air. All these aspects could be verified using CFD methods by observing the flow of the working medium in the tested model. During the simulation tests, the distribution and magnitude of the pressure and velocity vectors were analyzed. Simulations were run with different boundary conditions (supply air pressure) but with fixed external conditions (ambient temperature, ambient pressure, etc.). The maximum overpressure obtained was 2 kPa, which is too low to exploit this device for an individual transport vehicle. Both the simulation model and the real object show a linear dependence of the obtained overpressure values on the geometrical parameters of the three-dimensional structures. The application of computational software greatly simplifies and streamlines design and simulation. This work has been financed by the Polish Ministry of Science and Higher Education. Keywords: aviation propulsion, CFD, 3D structure, aerodynamic drag
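The reported linear dependence of overpressure on the geometric parameters can be checked with a simple least-squares fit. Below is a minimal sketch assuming hypothetical nozzle-gap widths and overpressure readings, since the abstract does not give the underlying data points:

```python
import numpy as np

# Hypothetical data: nozzle gap width (mm) vs. measured overpressure (kPa).
# Values are illustrative only; the abstract reports a maximum of ~2 kPa.
gap_mm = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
overpressure_kpa = np.array([0.4, 0.85, 1.2, 1.65, 2.0])

# Least-squares fit of a linear model: overpressure = a * gap + b
a, b = np.polyfit(gap_mm, overpressure_kpa, deg=1)
r = np.corrcoef(gap_mm, overpressure_kpa)[0, 1]

print(f"slope = {a:.3f} kPa/mm, intercept = {b:.3f} kPa, r = {r:.3f}")
```

A correlation coefficient close to 1 for both the simulated and the measured series would support the linearity claim.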
197 Ambient Factors in the Perception of Crowding in Public Transport
Authors: John Zacharias, Bin Wang
Abstract:
Travel comfort is increasingly seen as crucial to effecting the switch from private motorized modes to public transit. Surveys suggest that travel comfort is closely related to perceived crowding, which may involve a lack of available seating, difficulty entering and exiting, jostling and other physical contacts with strangers. As found in studies on environmental stress, other factors may moderate perceptions of crowding; in this case, we hypothesize that the ambient environment may play a significant role. Travel comfort was measured by applying a structured survey to randomly selected passengers (n=369) on 3 lines of the Beijing metro on workdays. Respondents were standing with all seats occupied and with car occupancy at 14 levels. A second research assistant filmed the metro car while passengers were interviewed, to obtain the total number of passengers. Metro lines 4, 6 and 10 were selected, which travel through the central city north-south, east-west and circumferentially. Respondents evaluated the following factors: crowding, noise, smell, air quality, temperature, illumination, vibration and perceived safety as they experienced them at the time of interview, and then ranked these 8 factors according to their importance for their travel comfort. Evaluations were semantic differentials on a 7-point scale from highly unsatisfactory (-3) to highly satisfactory (+3). The control variables included age, sex, annual income and trip purpose. Crowding was assessed most negatively, with 41% of the scores between -3 and -2. Noise and air quality were also assessed negatively, with two-thirds of the evaluations below 0. Illumination was assessed most positively, followed by perceived safety, vibration and temperature, all scoring at indifference (0) or slightly positive. Perception of crowding was linearly and positively related to the number of passengers in the car. Linear regression tested the impact of the ambient environmental factors on the perception of crowding. Noise accounted for more of the perception of crowding than the actual number of individuals in the car, with smell also contributing. The other variables do not interact with the crowding variable, although their evaluations are distinct. In all, only one-third of the perception of crowding (R²=.154) is explained by the number of people, with the other ambient environmental variables accounting for two-thirds of the explained variance (R²=.316). However, when ranking the factors by their importance to travel comfort, perceived crowding made up 69% of the first rank, followed by noise at 11%. At rank 2, smell dominates (25%), followed by noise and air quality (17%). Commuting to work induces significantly lower evaluations of travel comfort, with shopping trips the most positive. Clearly, travel comfort is particularly important to commuters. Moreover, their perception of crowding while travelling on the metro is highly conditioned by the ambient environment in the metro car. Focusing attention on the ambient environmental conditions of the metro is an effective way to address travellers' primary concerns with overcrowding. In general, the strongly held opinions on travel comfort require more attention in the effort to induce ridership in public transit. Keywords: ambient environment, mass rail transit, public transit, travel comfort
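The two-step regression described above (passenger count alone, then passenger count plus ambient evaluations) can be sketched as follows; the data frame and its column names are invented stand-ins for the survey variables, not the authors' dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey rows: crowding/noise/smell on the -3..+3 scale,
# plus the filmed passenger count for each interview.
df = pd.DataFrame({
    "crowding":   [-3, -2, -2, -1, 0, -1, -2, 0, 1, -1],
    "passengers": [180, 160, 150, 120, 80, 110, 140, 70, 50, 100],
    "noise":      [-2, -2, -1, -1, 0, -1, -2, 1, 1, 0],
    "smell":      [-1, -2, 0, -1, 1, 0, -1, 1, 2, 0],
})

# Step 1: perceived crowding regressed on passenger count alone.
m1 = smf.ols("crowding ~ passengers", data=df).fit()
# Step 2: add ambient evaluations; the abstract reports R² roughly doubling.
m2 = smf.ols("crowding ~ passengers + noise + smell", data=df).fit()

print(f"R² passengers only: {m1.rsquared:.3f}")
print(f"R² with ambient factors: {m2.rsquared:.3f}")
```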
196 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector
Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini
Abstract:
Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geomaterials, coupled with scanning electron microscope characterization. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied. Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products
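The quoted energy resolution (FWHM divided by peak centroid) is typically obtained from a Gaussian fit to the reconstructed peak. A minimal sketch on a synthetic alpha peak, with parameters chosen only to reproduce the reported 17.2% at 4647 keV:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

# Synthetic alpha peak near 4647 keV; illustrative only, not measured data.
rng = np.random.default_rng(0)
energy = np.linspace(4000, 5300, 200)
true_sigma = 340.0  # chosen so that FWHM/mu is ~17%, as reported
counts = gaussian(energy, 100.0, 4647.0, true_sigma) + rng.normal(0, 2, energy.size)

(amp, mu, sigma), _ = curve_fit(gaussian, energy, counts, p0=[80, 4600, 300])
fwhm = 2.355 * abs(sigma)  # FWHM of a Gaussian = 2*sqrt(2*ln 2)*sigma
print(f"resolution = {100 * fwhm / mu:.1f}% FWHM at {mu:.0f} keV")
```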
195 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene
Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger
Abstract:
In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and entire ecosystems, have received increased interest due to their mutagenic, carcinogenic and endocrine-disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L-1 (10 ppt) for BaP in water intended for human consumption. Generally, different chromatographic techniques are used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, and are relatively time consuming and difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed: for example, a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally, the sensor should operate independently over a longer period of time, e.g. several weeks or months; therefore, the use of molecularly imprinted polymers (MIPs) was considered. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature. They can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were obtained with the MIPs produced using electropolymerization. 4-vinylpyridine (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to those of a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated, and their properties such as sensitivity, selectivity and linear range were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established limit value, and although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptors due to their ability to be regenerated and their stability in river water for up to 7 days. Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water
194 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage
Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan
Abstract:
The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves and energy-intensive industrial nitrogen fixation are threats to their future availability. Recovering P and N from domestic sewage water offers a solution. Recovered P and N can be applied to agricultural land, replacing virgin P and N. Thus, recovery from sewage water offers a solution befitting a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial to optimize nutrient flows and minimize losses. The Material Circularity Indicator (MCI) is a useful method to quantify the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, the MCI has not been used in the context of nutrient recovery. Besides, the MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process wherein flows and stocks change with time. The rate of recycling of nutrients in nature can depend on numerous factors such as prevailing soil conditions, local hydrology, the presence of animals, etc. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed with the help of flow equations and transfer coefficients that incorporate the nutrient recovery step along with agricultural application, volatilization and leaching processes, plant uptake and subsequent animal and human uptake. The model is then used to calculate the proportions of linear and restorative flows (those coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater. In this case study, located in Italy, biochar or zeolite is to be used for the recovery of P and N from domestic sewage through adsorption and thereafter used as a slow-release fertilizer in agriculture. Using this model, information regarding the efficiency of nutrient recovery and application can be generated. This can help to optimize the recovery process and the application of the nutrients and, consequently, reduce the dependence of the food system on the virgin extraction of P and N. Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water
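A dynamic substance flow model of this kind reduces to a small system of ODEs with transfer coefficients. The sketch below is a deliberately minimal two-stock version for P; all coefficients and inputs are hypothetical placeholders, not values from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-compartment substance-flow sketch for P (kg/ha/yr): soil and sewage.
k_leach   = 0.05   # fraction of soil P lost to leaching per year (assumed)
k_uptake  = 0.30   # fraction of soil P taken up by crops per year (assumed)
k_recover = 0.60   # fraction of sewage P recovered and re-applied (assumed)
virgin_p  = 10.0   # virgin fertilizer input (kg/ha/yr, assumed)

def dstocks(t, y):
    soil, sewage = y
    uptake = k_uptake * soil            # crop uptake -> food -> sewage
    recovered = k_recover * sewage      # adsorbed on biochar/zeolite, re-applied
    d_soil = virgin_p + recovered - uptake - k_leach * soil
    d_sewage = uptake - sewage          # sewage stock turns over each year
    return [d_soil, d_sewage]

sol = solve_ivp(dstocks, (0, 50), y0=[100.0, 0.0])
soil_end, sewage_end = sol.y[:, -1]
# Restorative share of inputs: recovered flow vs. recovered + virgin flow.
restorative = k_recover * sewage_end / (virgin_p + k_recover * sewage_end)
print(f"restorative share of inputs after 50 yr: {restorative:.2f}")
```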
193 The Relationship between Wasting and Stunting in Young Children: A Systematic Review
Authors: Susan Thurstans, Natalie Sessions, Carmel Dolan, Kate Sadler, Bernardette Cichon, Shelia Isanaka, Dominique Roberfroid, Heather Stobagh, Patrick Webb, Tanya Khara
Abstract:
For many years, wasting and stunting have been viewed as separate conditions without clear evidence supporting this distinction. In 2014, the Emergency Nutrition Network (ENN) examined the relationship between wasting and stunting and published a report highlighting the evidence for linkages between the two forms of undernutrition. This systematic review aimed to update the evidence generated since the 2014 report to better understand the implications for improving child nutrition, health and survival. Following PRISMA guidelines, the review was conducted using search terms describing the relationship between wasting and stunting. Studies related to children under five from low- and middle-income countries that assessed both ponderal growth/wasting and linear growth/stunting, as well as the association between the two, were included. Risk of bias was assessed in all included studies using SIGN checklists. 45 studies met the inclusion criteria: 39 peer-reviewed studies, 1 manual chapter, 3 pre-print publications and 2 published reports. The review found that there is a strong association between the two conditions, whereby episodes of wasting contribute to stunting and, to a lesser extent, stunting leads to wasting. Possible interconnected physiological processes and common risk factors drive an accumulation of vulnerabilities. Peak incidence of both wasting and stunting was found to be between birth and three months. A significant proportion of children experience concurrent wasting and stunting: country-level data suggest that up to 8% of children under 5 may be both wasted and stunted at the same time, and global estimates translate to around 16 million children. Children with concurrent wasting and stunting have an elevated risk of mortality when compared to children with one deficit alone. These children should therefore be considered a high-risk group in the targeting of treatment. Wasting, stunting and concurrent wasting and stunting appear to be more prevalent in boys than girls, and concurrent wasting and stunting appears to peak between 12-30 months of age, with younger children being the most affected. Seasonal patterns in the prevalence of both wasting and stunting are seen in longitudinal and cross-sectional data; in particular, season of birth has been shown to have an impact on a child's subsequent experience of wasting and stunting. Evidence suggests that the use of mid-upper-arm circumference combined with weight-for-age Z-score might effectively identify children most at risk of near-term mortality, including those concurrently wasted and stunted. Wasting and stunting frequently occur in the same child, either simultaneously or at different moments through their life course. Evidence suggests a process of accumulation of nutritional deficits, and therefore of risk, over the life course of a child, which demonstrates the need for a more integrated approach to prevention and treatment strategies to interrupt this process. To achieve this, undernutrition policies, programmes, financing and research must become more unified. Keywords: concurrent wasting and stunting, review, risk factors, undernutrition
192 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction
Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini
Abstract:
Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. The recovery of the fECG from signals acquired non-invasively, using electrodes placed on the maternal abdomen, is a challenging task because the abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of preprocessed abdominal signals which maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from the abdominal signals, estimated by principal component analysis (PCA), and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings, with fetal QRS annotations, from dataset A provided by the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and the ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. The comparison on the NIdb was performed by defining an index of quality for the fetal RR series. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The index of quality was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performances with both databases. The results of this study support the application of the algorithm, in a fully unsupervised way, in wearable devices for self-monitoring of fetal health. Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable
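The ICA-based baseline (PCA estimation of the mECG, cancellation, then ICA on the residuals) can be sketched on synthetic multichannel signals as follows; the waveforms, sampling rate and mixing weights are invented for illustration, and real preprocessing and QRS detection are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Synthetic abdominal channels: strong maternal beats plus weak fetal beats.
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 60 * 500)                 # 1 min at an assumed 500 Hz
mecg = np.sin(2 * np.pi * 1.2 * t) ** 15         # spiky maternal beats, ~72 bpm
fecg = 0.1 * np.sin(2 * np.pi * 2.3 * t) ** 15   # weak fetal beats, ~138 bpm
channels = np.stack([a * mecg + b * fecg for a, b in
                     [(1.0, 0.3), (0.8, 0.5), (0.6, 0.8), (0.9, 0.2)]])
channels += 0.01 * rng.normal(size=channels.shape)

# Approximate the dominant maternal component with PCA and subtract it.
pca = PCA(n_components=1)
maternal = pca.inverse_transform(pca.fit_transform(channels.T)).T
residual = channels - maternal

# Unmix the residual with ICA; one source should resemble the fECG.
sources = FastICA(n_components=4, random_state=0).fit_transform(residual.T).T
best = np.argmax([abs(np.corrcoef(s, fecg)[0, 1]) for s in sources])
print(f"source {best} correlates best with the fetal signal")
```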
191 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods of soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use predictive soil mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an artificial neural network (ANN) scheme was used to predict macronutrient values at unsampled points. The ANN has become a popular prediction tool, as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern reconstruction structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected in the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, with 59 actual and 177 virtual values, and with 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from the smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy. Keywords: artificial neural network, kriging, macronutrient, pattern recognition, precision farming, soil mapping
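The core of the sample reduction method, training a feed-forward network on a subset of real samples and predicting virtual values at the remaining locations before kriging, can be sketched as follows; the coordinates and nitrogen values are synthetic, and the network size is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey: 236 sample locations with a smooth
# spatial nitrogen field plus noise (illustrative, not the study's data).
rng = np.random.default_rng(42)
xy = rng.uniform(0, 1000, size=(236, 2))              # coordinates (m)
nitrogen = (0.02 * xy[:, 0] + 0.01 * xy[:, 1]
            + 5 * np.sin(xy[:, 0] / 150) + rng.normal(0, 1, 236))

# Keep 118 "real" samples; the rest become unsampled prediction targets.
real_xy, virt_xy, real_n, _ = train_test_split(xy, nitrogen, train_size=118,
                                               random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(real_xy, real_n)
virtual_n = mlp.predict(virt_xy)

# The combined 118 real + 118 virtual values would then feed the kriging step.
combined = np.vstack([np.column_stack([real_xy, real_n]),
                      np.column_stack([virt_xy, virtual_n])])
print(combined.shape)  # (236, 3): x, y, nitrogen
```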
190 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India is vast in area and population, with great scope for international business, so the road and railway networks connecting the country are expected to grow substantially. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong possibility of repairing them or building new ones. The analysis and design of such bridges are carried out through conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability, because the members are too rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling used to evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and non-linear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member, and so on, under further loading. This kind of progressive collapse of a truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. The ultimate collapse is in any case governed by the buckling of the compression members only. For regular bridges, single-step pushover analysis gives results close to those of non-linear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge or a bridge with complicated dynamic behaviour, a non-linear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of the bridge and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mainly with life safety and collapse prevention, whereas bridges deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. This paper compiles the wide spectrum of modeling and analysis approaches for steel-concrete composite truss bridges in general. Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
189 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (software based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic properties depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban water stream Ostwaldergraben, during several months. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed. Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
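For reference, the van Genuchten-Mualem closed forms that the calibrated parameters (α, n, saturated conductivity Ks) enter are sketched below; the parameter values in the example are illustrative, not the values calibrated in this study:

```python
import numpy as np

def van_genuchten_mualem(h, theta_r, theta_s, alpha, n, k_s):
    """Water content and hydraulic conductivity for pressure head h (m, h < 0).

    Standard van Genuchten-Mualem closed forms; the parameter values passed
    in below are illustrative, not the calibrated values from the study.
    """
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)         # effective saturation
    theta = theta_r + (theta_s - theta_r) * se            # retention curve
    k = k_s * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2  # Mualem
    return theta, k

# Example: a sand-like filter layer with assumed parameters.
heads = np.array([-0.01, -0.1, -0.5, -1.0, -2.0])          # pressure head (m)
theta, k = van_genuchten_mualem(heads, theta_r=0.05, theta_s=0.40,
                                alpha=3.5, n=2.0, k_s=1e-4)
for h, t, kk in zip(heads, theta, k):
    print(f"h={h:6.2f} m  theta={t:.3f}  K={kk:.2e} m/s")
```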
188 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. In practice, the data are rarely linearly separable; SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while all computations are performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed to the SVM classifier. The radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search within a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers. Keywords: classification, airborne LiDAR, parameters selection, support vector machine
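The grid search with 5-fold cross-validation over (C, γ) described above maps directly onto a short scikit-learn sketch; the feature matrix here is a random stand-in for per-cluster LiDAR descriptors, and the grid values are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Random stand-ins for per-cluster descriptors (e.g. height, intensity, ...)
# and labels for the 4 classes (ground + 3 roof superstructure classes).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
y = rng.integers(0, 4, size=400)

# RBF-kernel SVM with standardized features; grid search over (C, gamma)
# scored by overall accuracy under 5-fold cross-validation.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1, 10]}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X, y)

print("best (C, gamma):", search.best_params_)
print(f"best CV accuracy: {search.best_score_:.3f}")
```

The best cell of the grid marks the restricted (C, γ) interval that can then be explored with a finer grid, as the abstract suggests.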
187 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as to the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as of recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as on placental vascular circulation, inflammatory lesions and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided the placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, according to the criteria of the Society for Pediatric Pathology, as well as inflammatory responses of maternal and fetal origin. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly more frequent in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism behind these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation and pregnancy outcomes, deserves further investigation. Keywords: obesity, pregnancy, complications, weight gain
186 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for reducing the mortality rate and neurological sequelae. In particular, the control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is most required for stabilizing the brain's physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, the manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff concerning medication and the settings of the therapeutic apparatus. Thus, integrating and automating these controls is very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. To realize such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance tests of a prototype control system on patients are not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, that model did not consider the condition that the total volume of the intracranial cavity is almost constant due to the hardness of the cranial bone. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed taking into account the constancy of the intracranial cavity's total volume. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of each compartment are calculable by solving a 40-dimensional system of non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. In the results, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to values suitable for the set condition within at most 60 minutes. Also, because these results did not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable CBF and ICP to be simulated more precisely, by calculating the effect of blood pressure changes on extracellular fluid migration and that of ICP changes on CBF. Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
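To make the compartment idea concrete, here is a deliberately toy two-compartment version of the fluid-migration dynamics (capillary filtration into tissue, bulk flow to the CSF space, CSF absorption); all coefficients and pressure-volume relations are illustrative placeholders, not the 14-compartment model's values:

```python
from scipy.integrate import solve_ivp

# Assumed coefficients, for illustration only.
L_p = 1e-3    # capillary filtration coefficient (mL/min/mmHg)
k_abs = 2e-3  # CSF absorption coefficient (mL/min/mmHg)
P_cap, sigma, pi_cap = 25.0, 0.9, 28.0   # capillary pressure terms (mmHg)

def rates(t, y):
    v_tissue, v_csf = y
    # Stiff intracranial cavity: pressure rises steeply with volume.
    p_tissue = 5.0 + 0.5 * (v_tissue - 100.0)
    p_csf = 10.0 + 0.5 * (v_csf - 150.0)
    filtration = L_p * (P_cap - p_tissue - sigma * pi_cap)  # Starling-type flux
    bulk_flow = 1e-2 * (p_tissue - p_csf)                   # tissue -> CSF space
    absorption = k_abs * max(p_csf - 7.0, 0.0)              # into venous sinuses
    return [filtration - bulk_flow, bulk_flow - absorption]

sol = solve_ivp(rates, (0, 60), [100.0, 150.0], max_step=0.5)
print(f"tissue volume after 60 min: {sol.y[0, -1]:.2f} mL, "
      f"CSF volume: {sol.y[1, -1]:.2f} mL")
```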
185 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices
Authors: Roberta Martino, Viviana Ventre
Abstract:
Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The discounted utility model is the mathematical reference model for calculating the utility of intertemporal prospects. The discount rate is the main element of the model, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior predicted by the model and the actual choices made by decision makers. In particular, the term temporal inconsistency denotes choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate, which, unlike the constant rate implied by the exponential discounting of the discounted utility model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease in impatience are two parameters that quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that distortions in the decision-making process and emotional influences have an inevitable impact on it. The degree to which impatience decreases is the focus of the first part of the study. By comparing preferences that are consistent and inconsistent over time, it was possible to verify that some anomalies of the discounted utility model result from the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences between outcomes rather than their ratio; and the sign effect is analyzed by integrating into the evaluation of prospects with negative outcomes the psychological aspects of loss aversion provided by prospect theory. An experiment confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented by constructing the hyperbolic factor through the administration of questionnaires built around each anomaly. This work formalizes the underlying causes of the discrepancy between the discounted utility model and the empirical evidence of preference reversal. Keywords: decreasing impatience, discount utility model, hyperbolic discount, hyperbolic factor, impatience
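For orientation, the standard discount functions at issue can be written side by side; this is a generic formulation of exponential versus hyperbolic discounting, not necessarily the authors' exact parametrization of the hyperbolic factor:

```latex
% Exponential discounting (discounted utility model) vs. a common
% generalized hyperbolic form (Loewenstein-Prelec type):
\[
  D_{\mathrm{exp}}(t) = e^{-\rho t},
  \qquad
  D_{\mathrm{hyp}}(t) = (1 + \alpha t)^{-\beta/\alpha}, \quad \alpha, \beta > 0 .
\]
% Impatience over [t, t+\tau] can be measured by the proportional fall of D:
\[
  I(t,\tau) = \frac{D(t) - D(t+\tau)}{D(t)} .
\]
% For D_exp, I(t,\tau) does not depend on t (constant impatience, hence time
% consistency); for D_hyp it decreases in t (decreasing impatience), which
% produces the preference reversals discussed above.
```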
184 Co₂Fe LDH on Aromatic Acid Functionalized N Doped Graphene: Hybrid Electrocatalyst for Oxygen Evolution Reaction
Authors: Biswaranjan D. Mohapatra, Ipsha Hota, Swarna P. Mantry, Nibedita Behera, Kumar S. K. Varadwaj
Abstract:
Designing highly active and low-cost electrocatalysts for oxygen evolution (2H₂O → 4H⁺ + 4e⁻ + O₂) is one of the most active areas of advanced energy research. Some precious-metal-based electrocatalysts, such as IrO₂ and RuO₂, have shown excellent performance for the oxygen evolution reaction (OER); however, they suffer from high cost and low abundance, which limits their applications. Recently, layered double hydroxides (LDHs), composed of layers of divalent and trivalent transition metal cations coordinated to hydroxide anions, have gathered attention as alternative OER catalysts. However, LDHs are insulators and are coupled with carbon materials for electrocatalytic applications. Graphene covalently doped with nitrogen has been demonstrated to be an excellent electrocatalyst for energy conversion technologies such as the oxygen reduction reaction (ORR), the oxygen evolution reaction (OER) and the hydrogen evolution reaction (HER). However, such catalysts operate at high overpotentials, significantly above the thermodynamic standard potentials. Recently, we reported remarkably enhanced catalytic activity of benzoate- or 1-pyrenebutyrate-functionalized N-doped graphene towards the ORR in alkaline medium. Molecular and heteroatom co-doping is expected to tune the electronic structure of graphene. Therefore, an innovative catalyst architecture in which LDHs are anchored on aromatic acid functionalized N-doped graphene may boost OER activity to a new benchmark. Herein, we report the fabrication of Co₂Fe LDH on aromatic acid (AA) functionalized N-doped reduced graphene oxide (NG) and study its OER activity in alkaline medium. In the first step, a novel polyol method is applied for the synthesis of AA-functionalized NG, which is well dispersed in aqueous medium. In the second step, Co₂Fe LDH is grown on the AA-functionalized NG by a co-precipitation method. The hybrid samples are abbreviated as Co₂Fe LDH/AA-NG, where AA is either benzoic acid, 1,3-benzenedicarboxylic acid (BDA) or 1,3,5-benzenetricarboxylic acid (BTA). The crystal structure and morphology of the samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). These studies confirmed the growth of layered, single-phase LDH. The electrocatalytic OER activity of these hybrid materials was investigated by the rotating disc electrode (RDE) technique on a glassy carbon electrode. Linear sweep voltammetry (LSV) on these catalyst samples was performed at 1600 rpm. We observed significant OER performance enhancement, in terms of onset potential and current density, on the Co₂Fe LDH/BTA-NG hybrid, indicating a synergistic effect. This exploration of the effect of molecular functionalization in doped graphene and LDH systems may provide an excellent platform for the innovative design of OER catalysts. Keywords: π-π functionalization, layered double hydroxide, oxygen evolution reaction, reduced graphene oxide
183 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor
Authors: Mitali Saha, Soma Das
Abstract:
The fabrication of nanoscale materials for use in chemical sensing, biosensing and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between a high serum cholesterol level and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during its immobilization, and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized and then characterized by SEM, p-XRD and Raman spectroscopy. The SEM micrographs showed the formation of tubular structures of CCNT having diameters below 100 nm. The XRD pattern showed two predominant peaks, at 25.2° and 43.8°, which correspond to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm⁻¹ (G band), related to the vibration of sp²-bonded carbon, and a band at 1350 cm⁻¹ (D band), arising from the vibrations of sp³-bonded carbon. A non-enzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and the electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), making it ideal for working with 50 µL volumes, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry, using 0.001 M H₂SO₄ as the electrolyte. The influence of experimental parameters such as pH, accumulation time, and scan rate on the peak currents of cholesterol was optimized. Under optimum conditions, the peak current was found to be linear with cholesterol concentration in the range from 1 µM to 50 µM, with a sensitivity of ~15.31 µA µM⁻¹ cm⁻², a lower detection limit of 0.017 µM and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial value after 30 days. Keywords: coconut oil, CCNT, cholesterol, biosensor
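The reported detection limit can be cross-checked from a calibration line using the common 3-sigma criterion. A sketch with synthetic currents, assuming an electrode area of 0.07 cm² and a blank noise level chosen purely for illustration:

```python
import numpy as np

# Synthetic calibration over the reported linear range (1-50 µM).
area_cm2 = 0.07                   # assumed electrode area
sens = 15.31                      # µA µM^-1 cm^-2, as reported
conc = np.array([1, 5, 10, 20, 30, 40, 50], dtype=float)          # µM
rng = np.random.default_rng(1)
current = sens * area_cm2 * conc + rng.normal(0, 0.2, conc.size)  # µA

slope, intercept = np.polyfit(conc, current, 1)
sd_blank = 0.006                  # assumed standard deviation of blank (µA)
lod = 3 * sd_blank / slope        # 3-sigma detection-limit estimate

print(f"fitted sensitivity: {slope / area_cm2:.2f} µA µM⁻¹ cm⁻²")
print(f"estimated LOD: {lod:.3f} µM")
```

With these assumed values the 3-sigma estimate comes out near the reported 0.017 µM, illustrating how the figure of merit follows from the calibration slope and blank noise.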
182 Supermarket Shoppers Perceptions to Genetically Modified Foods in Trinidad and Tobago: Focus on Health Risks and Benefits
Authors: Safia Hasan Varachhia, Neela Badrie, Marsha Singh
Abstract:
Genetic modification of food is an innovative technology that offers a host of benefits and advantages to consumers. Consumer attitudes towards GM foods and GM technologies can be identified as a major determinant in conditioning market forces and in encouraging policy makers and regulators to recognize the significance of consumer influence on the market. This study aimed to investigate and evaluate the extent of consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks and benefits in Trinidad and Tobago, West Indies. The specific objectives of this study were to determine consumer awareness of GM foods, ascertain their perspectives on the health and safety risks and ethical issues associated with GM foods, and determine whether the labeling of GM foods and ingredients would influence consumers' willingness to purchase GM foods. A survey comprising a questionnaire of 40 questions, both open-ended and close-ended, was administered to 240 shoppers in small, medium and large-scale supermarkets throughout Trinidad during April-May 2015, using convenience sampling. This survey investigated consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks/benefits. The data were analyzed using SPSS 19.0 and Minitab 16.0. One-way ANOVA investigated the effects of supermarket category and knowledge scores on shoppers' awareness, knowledge, perception and acceptance of GM foods. Linear regression tested whether demographic variables (category of supermarket, age of consumer, level of education) were useful predictors of consumers' knowledge of GM foods. More than half of the respondents (64.3%) were aware of GM foods and GM technologies, 28.3% of consumers indicated the presence of GM foods in local supermarkets, and 47.1% claimed to be knowledgeable about GM foods. Furthermore, significant associations (P < 0.05) were observed between demographic variables (age, income, and education) and consumer knowledge of GM foods. Also, significant differences (P < 0.05) were observed between demographic variables (education, gender, and income) and consumer knowledge of GM foods. In addition, age, education, gender and income (P < 0.05) were useful predictors of consumer knowledge of GM foods. There was a contradiction: whilst 35% of consumers considered GM foods safe for consumption, 70% of consumers were wary of the unknown health risks of GM foods. About two-thirds of the respondents (67.5%) considered the creation of GM foods morally wrong and unethical. Regarding GM food labeling preferences, 88% of consumers preferred mandatory labeling of GM foods, and 67% of consumers specified that any food product containing a trace of GM food ingredients requires mandatory GM labeling. Also, despite the declaration of GM food ingredients on food labels and the reassurance of their safety for consumption by food safety and regulatory institutions, the majority of consumers (76.1%) still preferred conventionally produced foods over GM foods. The study revealed the need to inform shoppers of the presence of GM foods and technologies, to present the scientific evidence on the benefits and risks, and to establish a labeling policy so that informed choices can be made. Keywords: genetically modified foods, income, labeling, consumer awareness, ingredients, morality and ethics, policy
181 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a structural building under lateral load (e.g., earthquake) arises from irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab is considered to contribute little to the stability of the structure, unless a special slab system such as a flat slab is taken into account. In this paper, the flat slab system at Sequis Tower, located in South Jakarta, is assessed for its performance under earthquake loading. The building has a six-floor basement where the flat slab system is applied. The flat slab system is the main focus of this paper and is compared with a conventional slab system in terms of performance under earthquake loading. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner of this building is 43.21%, which exceeds the 15% allowed by ASCE 7-05. Based on that, horizontal irregularity is a further concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is one in which the slabs are supported by drop panels with shear heads instead of beams. Major advantages of flat slab application are a reduced structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load. However, deflection at the middle strip and punching shear are problems that must be considered in detail. Torsion usually appears when the dimensions of a structural member under flexure, such as a beam or column, are improperly proportioned; considering a flat slab as an alternative slab system keeps collapse due to torsion down. The common seismic load resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake loading. The eccentricity of the shear wall locations in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses, such as response spectrum and time history analysis, is suitable given the irregularities, so that the performance of the structure can be observed in detail. Response spectrum data for South Jakarta, with a PGA of 0.389g, form the basis for the earthquake load idealization involved in the several load combinations stated in SNI 03-1726-2012. The analysis yields basic seismic parameters such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members are presented. The predicted period of the structure under earthquake load is 0.45 second, but as different slab systems are applied in the analysis, the period takes different values. The flat slab system will probably perform better in terms of displacement than the conventional slab system, owing to its higher contribution of stiffness to the whole building system. In line with displacement, the slab deflection is smaller for the flat slab than for the conventional slab. Nevertheless, the shear wall is more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
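For readers unfamiliar with how a response spectrum turns into an earthquake load, the sketch below evaluates a generic two-parameter design spectrum and an equivalent static base shear for two assumed periods. This is not the authors' model: the spectral parameters, response modification factor, and seismic weight are invented for illustration; only the ~0.45 s period and the 0.389 g PGA come from the abstract.

```python
def design_sa(T, s_ds=0.60, s_d1=0.30):
    """Generic SNI/ASCE-style design spectrum, Sa in g.
    s_ds and s_d1 are assumed values, not the South Jakarta spectrum."""
    t0, ts = 0.2 * s_d1 / s_ds, s_d1 / s_ds
    if T < t0:
        return s_ds * (0.4 + 0.6 * T / t0)   # rising branch
    if T <= ts:
        return s_ds                          # acceleration plateau
    return s_d1 / T                          # velocity-controlled branch

def base_shear_kn(T, seismic_weight_kn, r=7.0, importance=1.0):
    """Equivalent static base shear V = Cs * W, with Cs = Sa * I / R."""
    return design_sa(T) * importance / r * seismic_weight_kn

# The abstract predicts T ~ 0.45 s; a stiffer flat-slab + shear-wall layout
# would shorten the period and change the spectral demand accordingly.
for label, period in [("conventional slab", 0.45), ("flat slab (stiffer)", 0.40)]:
    print(f"{label}: V = {base_shear_kn(period, 250_000):,.0f} kN")
```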
Procedia PDF Downloads 191
180 Efficient Treatment of Azo Dye Wastewater with Simultaneous Energy Generation by Microbial Fuel Cell
Authors: Soumyadeep Bhaduri, Rahul Ghosh, Rahul Shukla, Manaswini Behera
Abstract:
The textile industry consumes a substantial amount of water throughout the processing and production of textile fabrics. This water eventually turns into wastewater, where it acts as a damaging pollutant because of its dye content. Wastewater streams contain between 2.0% and 50.0% of the total weight of dye used, depending on the dye class. The management of dye effluent in textile industries presents a formidable challenge to global sustainability. The current focus is on implementing wastewater treatment technologies that enable the recycling of wastewater, reduce energy usage and offset carbon emissions. A microbial fuel cell (MFC) is a device that utilizes microorganisms as a bio-catalyst to treat wastewater effectively while also producing electricity. The MFC harnesses the chemical energy present in wastewater by oxidizing organic compounds in the anodic chamber and reducing an electron acceptor in the cathodic chamber, thereby generating electricity. This research investigates the potential of MFCs to tackle the challenge of azo dye removal while simultaneously generating electricity. Although MFCs are well-established for wastewater treatment, their application to dye decolorization with concurrent electricity generation remains relatively unexplored. This study aims to address this gap by assessing the effectiveness of MFCs as a sustainable solution for treating wastewater containing azo dyes. By harnessing microorganisms as biocatalysts, MFCs offer a promising avenue for environmentally friendly dye effluent management. The performance of MFCs in treating azo dyes and generating electricity was evaluated by optimizing the chemical oxygen demand (COD) and hydraulic retention time (HRT) of the influent. COD and HRT values ranged from 1600 mg/L to 2400 mg/L and 5 to 9 days, respectively. Results showed that the maximum open circuit voltage (OCV) reached 648 mV at a COD of 2400 mg/L and an HRT of 5 days. Additionally, a maximum COD removal of 98% and a maximum color removal of 98.91% were achieved at a COD of 1600 mg/L and an HRT of 9 days. Furthermore, the study observed a maximum power density of 19.95 W/m3 at a COD of 2400 mg/L and an HRT of 5 days. Electrochemical analyses, including linear sweep voltammetry (LSV), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), were performed to determine the response current and internal resistance of the system. To optimize pH and dye concentration, pH values were varied from 4 to 10, and dye concentrations ranged from 25 mg/L to 175 mg/L. The highest voltage output of 704 mV was recorded at pH 7, while a dye concentration of 100 mg/L yielded the maximum output of 672 mV. This study demonstrates that MFCs offer an efficient and sustainable solution for treating azo dyes in textile industry wastewater while concurrently generating electricity. These findings suggest the potential of MFCs to contribute to environmental remediation and sustainable development efforts on a global scale.
Keywords: textile wastewater treatment, microbial fuel cell, renewable energy, sustainable wastewater treatment
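For orientation, a quick sketch of the two headline metrics (removal efficiency and volumetric power density) follows. The reactor volume and load resistance are assumptions chosen only to reproduce the reported order of magnitude; they are not values from the study.

```python
# Back-of-the-envelope sketch of the MFC performance metrics reported above.
def removal_efficiency(influent, effluent):
    """Percentage removal from influent/effluent concentrations (mg/L)."""
    return 100.0 * (influent - effluent) / influent

def volumetric_power_density(voltage_v, resistance_ohm, volume_m3):
    """P = V^2 / (R * V_reactor), in W/m^3."""
    return voltage_v ** 2 / (resistance_ohm * volume_m3)

cod_in, cod_out = 1600.0, 32.0   # mg/L, consistent with ~98% removal
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f} %")

# e.g. 0.67 V across an assumed 100-ohm load in an assumed 225 mL anode chamber
print(f"Power density: {volumetric_power_density(0.67, 100.0, 225e-6):.2f} W/m3")
```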
Procedia PDF Downloads 22
179 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to good fat segmentation, making this simple visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently accurate and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
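A minimal OpenCV sketch of the pipeline outlined above (grayscale conversion, noise reduction, Canny edge detection, and a fat-area percentage) is shown below. The thresholds and the fat/lean intensity split are assumed; the authors' multi-stage algorithm is richer than this outline.

```python
import cv2
import numpy as np

img = cv2.imread("ham_slice.png")                  # assumed scanner image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # noise reduction
edges = cv2.Canny(blurred, 50, 150)                # gradient + hysteresis thresholds
print("edge pixels (for boundary confirmation):", cv2.countNonZero(edges))

# Slice mask: everything brighter than the (dark) scanner background
_, slice_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Fat appears brighter than lean muscle; an assumed intensity cut separates them
fat_mask = cv2.bitwise_and((gray > 180).astype(np.uint8) * 255, slice_mask)

fat_pct = 100.0 * cv2.countNonZero(fat_mask) / cv2.countNonZero(slice_mask)
print(f"Total fat fraction: {fat_pct:.1f} % of slice area")
```

In the real methodology the Canny edge map would then serve to confirm the boundaries separating the intermuscular and intramuscular fractions.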
Procedia PDF Downloads 176
178 Lake Water Surface Variations and Its Influencing Factors in Tibetan Plateau in Recent 10 Years
Authors: Shanlong Lu, Jiming Jin, Xiaochun Wang
Abstract:
The Tibetan Plateau has the largest number of high-elevation inland lakes on the planet. These massive lakes are mostly in a natural state and are little affected by human activities; their shrinking or expansion can truly reflect regional climate and environmental changes, making them sensitive indicators of global climate change. However, because the plateau is sparsely populated and its natural conditions are harsh, it is difficult to obtain lake change data effectively, which has limited our understanding of the temporal and spatial processes of lake water changes and their influencing factors. Using MODIS (Moderate Resolution Imaging Spectroradiometer) MOD09Q1 surface reflectance images as basic data, this study produced an 8-day lake water surface data set of the Tibetan Plateau from 2000 to 2012 at 250 m spatial resolution, with a lake water surface extraction method that combines buffer analysis of lake water surface boundaries with lake-by-lake determination of segmentation thresholds. Based on this data set, lake water surface variations and their influencing factors were analyzed, using four typical natural geographical zones (Eastern Qinghai and Qilian, Southern Qinghai, Qiangtang, and Southern Tibet) and the watersheds of the top 10 lakes (Qinghai, Siling Co, Namco, Zhari NamCo, Tangra Yumco, Ngoring, UlanUla, Yamdrok Tso, Har and Gyaring) as the analysis units. The accuracy analysis indicates that, compared with the water surfaces of 134 sample lakes extracted from 30 m Landsat TM (Thematic Mapper) images, the average overall accuracy of the lake water surface data set is 91.81%, with average commission and omission errors of 3.26% and 5.38%, respectively; the results also show a strong linear correlation (R²=0.9991) with the global MODIS water mask dataset, with an overall accuracy of 86.30%; and the lake area difference between the Second National Lake Survey and this study is only 4.74%. This study therefore provides a reliable data set for lake change research on the plateau over the recent decade. The trend and influencing factor analyses indicate that the total water surface area of lakes on the plateau increased overall, but only lakes with areas larger than 10 km² had statistically significant increases. Furthermore, lakes with areas larger than 100 km² experienced an abrupt change in 2005. In addition, the annual average precipitation of Southern Tibet and Southern Qinghai experienced significant increasing and decreasing trends, with corresponding abrupt changes in 2004 and 2006, respectively. The annual average temperature of Southern Tibet and Qiangtang showed a significant increasing trend with an abrupt change in 2004. The major driver of lake water surface variation in Eastern Qinghai and Qilian, Southern Qinghai and Southern Tibet is change in precipitation, while in Qiangtang it is temperature variation.
Keywords: lake water surface variation, MODIS MOD09Q1, remote sensing, Tibetan Plateau
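A schematic sketch of the extraction idea (buffer each lake boundary, then pick a per-lake threshold on a MOD09Q1 band inside that buffer, since water is dark in the near-infrared) might look as follows. The file names, band choice, buffer width, and use of Otsu thresholding are assumptions, not the authors' exact procedure; the geometry is assumed to be in a projected (metre-based) CRS.

```python
import rasterio
from rasterio.mask import mask as rio_mask
from shapely.geometry import shape, mapping
from skimage.filters import threshold_otsu

def lake_water_mask(mod09q1_path, lake_boundary_geojson, buffer_m=2000.0):
    """Water mask for one lake: buffered boundary + per-lake threshold."""
    buffered = shape(lake_boundary_geojson).buffer(buffer_m)
    with rasterio.open(mod09q1_path) as src:
        clipped, _ = rio_mask(src, [mapping(buffered)], crop=True, nodata=-9999)
    nir = clipped[1].astype(float)       # MOD09Q1 band 2 = NIR; water is dark
    valid = nir != -9999
    thr = threshold_otsu(nir[valid])     # lake-by-lake segmentation threshold
    return (nir < thr) & valid           # True = water pixels

# Lake area = water pixel count * 250 m * 250 m, tracked across 8-day composites
```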
Procedia PDF Downloads 231
177 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach
Authors: Geraldine G. Granados Vazquez
Abstract:
Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can thus be understood as the hazard of dying that a group or individual faces, and this irreversible damage defines the condition of vulnerability. Risk is a dynamic concept, meaning that it depends on environmental, social, economic and political conditions; vulnerability can therefore only be evaluated in terms of relative parameters. This research focuses specifically on building a model that evaluates the risk or propensity of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and living conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model to assess the vulnerability of death in past urban groups. The model is intended to be useful for evaluating the risk of death in light of both the sociohistorical context and intrinsic biological features. It comprises four areas for assessing vulnerability: the first three use statistical methods or quantitative analysis, while the fourth, which corresponds to embodiment, is based on qualitative analysis. The four areas and their proposed techniques are: a) Demographic dynamics. From the distribution of age at the time of death, mortality is analyzed using life tables, from which four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and age at death. Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias: height estimates reflect the individual's nutrition and health history in specific groups, while enamel hypoplasias record the individual's first years of life. c) Inequality. Space reflects the various sectors of society, in ancient cities as well; in general terms, the spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone's story leaves some evidence on the body, even on the bones. This leads us to consider individuals' dynamic relations in terms of time and space; consequently, the micro-analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using Mesoamerican examples as case studies, this research demonstrates that not only the intrinsic characteristics related to the age and sex of individuals contribute to vulnerability, but also the social and historical context that determines their state of frailty before death. An attenuating factor for past groups is that some basic aspects, such as the role they played in everyday life, escape our comprehension and are still under discussion.
Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability
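As an illustration of the life-table step in area (a), the sketch below builds an abridged life table from age-at-death counts. The age classes and counts are invented for illustration; the columns follow the standard survivorship formulas.

```python
import numpy as np

ages   = np.array([0, 5, 15, 25, 35, 45])    # start of each age class (years)
widths = np.array([5, 10, 10, 10, 10, 15])   # class widths (years)
deaths = np.array([30, 12, 25, 40, 28, 15])  # skeletons per class (invented)

d = deaths / deaths.sum()                            # proportion dying in class
l = np.concatenate(([1.0], 1.0 - np.cumsum(d)[:-1])) # survivorship l(x)
q = d / l                                            # probability of death q(x)
L = widths * (l - d / 2.0)                           # person-years lived in class
T = np.cumsum(L[::-1])[::-1]                         # person-years remaining
e = T / l                                            # life expectancy e(x)

print("age  d(x)   l(x)   q(x)   e(x)")
for row in zip(ages, d.round(3), l.round(3), q.round(3), e.round(1)):
    print(*row)
```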
Procedia PDF Downloads 225
176 The Effect of Emotional Intelligence on Physiological Stress of Managers
Authors: Mikko Salminen, Simo Järvelä, Niklas Ravaja
Abstract:
One of the central models of emotional intelligence (EI) is that of Mayer and Salovey, which includes the ability to monitor one's own feelings and emotions and those of others, the ability to discriminate among different emotions, and the ability to use this information to guide thinking and actions. A vast amount of previous research has reported positive links between EI and, for example, leadership success, work outcomes, work wellbeing and organizational climate. EI also plays a role in the effectiveness of work teams, and its effects are especially prominent in jobs requiring emotional labor. Thus, the organizational context must also be taken into account when considering the effects of EI on work outcomes. Based on previous research, it is suggested that EI can also protect managers from the negative consequences of stress. Stress may have many detrimental effects on a manager's performance in essential work tasks. Previous studies have highlighted the effects of stress not only on health but also on cognitive tasks such as decision-making, which is important in managerial work. The motivation for the current study came from the notion that, unfortunately, many stressed individuals may not be aware of their condition; periods of stress-induced physiological arousal may be prolonged if there is not enough time for recovery. To tackle this problem, the physiological stress levels of managers were collected by recording heart rate variability (HRV). The goal was to use these data to provide the managers with feedback on their stress levels, accessible through a web-based learning environment. In addition to the feedback on stress level and other collected data, the learning environment also provided developmental tasks; for example, those with high stress levels were sent instructions for mindfulness exercises. The current study focuses on the relation between the measured physiological stress levels and the EI of the managers. In a pilot study, 33 managers from various fields wore the Firstbeat Bodyguard HRV measurement device for three consecutive days and nights. From the collected HRV data, periods (minutes) of stress and recovery were detected using dedicated software. The effects of EI on the HRV-derived stress indexes were studied using the Linear Mixed Models procedure in SPSS. There was a statistically significant effect of total EI, defined as the average score on Schutte's emotional intelligence test, on the percentage of stress minutes during the whole measurement period (p=.025): more stress minutes were detected in managers with lower emotional intelligence. It is suggested that high EI provides managers with better tools to cope with stress. Managing one's own emotions helps the manager control possible negative emotions evoked by, e.g., critical feedback or an increasing workload. High-EI managers may also be more competent in detecting the emotions of others, leading to smoother interactions and fewer conflicts. Given the recent trend toward quantified-self applications, it is suggested that the monitoring of bio-signals is a fruitful direction for developing new tools for managerial and leadership coaching.
Keywords: emotional intelligence, leadership, heart rate variability, personality, stress
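To show the shape of the reported analysis, the sketch below approximates the SPSS Linear Mixed Models procedure with statsmodels' MixedLM. The variable names, file name, and grouping structure (repeated days within manager) are assumptions about the data layout, not the study's actual files.

```python
# Hedged sketch: linear mixed model of % stress minutes with total EI as a
# fixed effect and a random intercept per manager.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hrv_stress.csv")   # assumed: one row per manager-day

# stress_pct: % of measured minutes classified as stress by the HRV software
mixed = smf.mixedlm("stress_pct ~ ei_total", data=df,
                    groups=df["manager_id"]).fit()
print(mixed.summary())               # expect a negative EI coefficient (p = .025)
```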
Procedia PDF Downloads 226
175 The Touch Sensation: Ageing and Gender Influences
Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani
Abstract:
A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensation, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the initial contact between the finger and the object; during this contact an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the finger's mechanical properties, together with its surface topography, play an important role in the resulting sensation. In order to understand the age and gender effects on the sense of touch, we developed different ideas and systems for each phase. To characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the finger pulp of 40 subjects (20 of each gender) in four age groups (26±3, 35±3, 45±2 and 58±6 years) were performed. To understand the first touch phase, a classical indentation system was adapted to measure the finger contact properties; the normal force load, indentation speed, contact time, penetration depth and indenter geometry were optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force, and the main assessed parameter is the adhesive force F_ad. For the second phase, an innovative approach is first proposed to characterize the dynamic mechanical properties of the finger: a contactless indentation test inspired by techniques used in ophthalmology. The test principle is to blow an air blast at the finger and measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the skin's free return without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints were analyzed by laser probe defocusing: a laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes, a technology that allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness was realized by applying the continuous wavelet transform. After decomposing the surface, the method quantifies the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are found with ageing and gender. The comparison between the men's and women's groups reveals that the adhesive force is higher for women. The mechanical results show a Young's modulus that is higher for women and that also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
Keywords: ageing, finger, gender, touch
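The multi-scale roughness step can be sketched as follows: a continuous wavelet transform of a surface profile, followed by the arithmetic mean of the coefficient magnitudes at each scale (the SMA parameter). The profile here is synthetic; real input would be a height profile from the 3D replica reconstruction, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

x = np.linspace(0, 10.0, 2000)   # position along the profile (mm), synthetic
profile = 0.02 * np.sin(2 * np.pi * x / 0.5) + 0.005 * np.random.randn(x.size)

scales = np.arange(1, 128)
coeffs, _ = pywt.cwt(profile, scales, "morl")   # continuous wavelet transform
sma = np.abs(coeffs).mean(axis=1)               # one SMA value per scale

for s in (1, 16, 64):
    print(f"scale {s}: SMA = {sma[s - 1]:.4f}")
```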
Procedia PDF Downloads 265
174 The Location of Park and Ride Facilities Using the Fuzzy Inference Model
Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas
Abstract:
Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic, and determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in general, descriptive terms. Research outsourced to specialists is expensive and time-consuming, and the focus is mostly on the examination of a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results: the resulting facilities are not used as expected. Location methods are also widely addressed as a research topic in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g., assuming that the city is linear, developed along one major communication corridor. This paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even less experienced users, e.g., urban planners and officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of P&R facility locations in cities planning to introduce the P&R system. The analysis of existing facilities is also presented and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model that was built and described in more detail in the authors' earlier paper. The analysis results are compared with P&R facility location studies commissioned by the cities and with the opinions of existing facility users expressed on social networking sites. The study of existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves effective, yet it does not require a large team of experts or large financial outlays for complicated research; it also provides an opportunity to propose alternative locations for P&R facilities. The studies performed confirm the method, which can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are no worse than the analyses of hired experts. The advantage of this method is its ease of use, which simplifies professional expert analysis, and the ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis of a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location
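A deliberately tiny, self-contained Mamdani-style sketch of how expert rules can score a candidate P&R site is shown below. The inputs, membership breakpoints and rules are invented stand-ins; the authors' model, described in their earlier paper, uses its own criteria and rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def score_site(transit_access, congestion_relief):
    # Degrees of membership for two illustrative criteria on a 0-10 scale
    access_good = tri(transit_access, 4, 8, 10)
    access_poor = tri(transit_access, 0, 2, 6)
    relief_high = tri(congestion_relief, 4, 8, 10)

    # Rule base (min for AND); each rule maps to an output level in [0, 1]
    rules = [
        (min(access_good, relief_high), 1.0),  # good access & high relief
        (access_good, 0.7),
        (access_poor, 0.2),
    ]
    # Weighted-average defuzzification
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(f"candidate site score: {score_site(7.5, 6.0):.2f}")  # higher = better
```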
Procedia PDF Downloads 325
173 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components: the first is to minimize the number of unassigned pairings and the second is to ensure fairness to crew members. There are two measures of fairness to crew members, the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected averages as possible. Deviations from the expected average are penalized in the objective function, and since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation: the problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered; the restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration; when no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem generates feasible rosters for each crew member: a separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method for dealing with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-threading techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed, whose objective is to assign to each crew member a personalized roster that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
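The non-additive difficulty can be made concrete with a toy model. Because the quadratic fairness penalty is only evaluated at the end of a path, plain cost-domination between labels is unsafe; one valid conservative condition, illustrated below, dominates only when the resources are equal, or when both labels have already passed the target value (beyond which the quadratic term is monotone). This is an example of such a condition, not necessarily the sharper one defined in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    node: int
    reduced_cost: float   # additive part accumulated so far
    fly_hours: float      # resource feeding the quadratic penalty
    overnights: int       # second fairness resource

def resource_ok(a_val, b_val, target):
    # Safe if equal, or if both are already at/past the target with a closer:
    # the quadratic deviation then stays smaller under any common extension.
    return a_val == b_val or (target <= a_val <= b_val)

def dominates(a, b, avg_hours, avg_overnights):
    """True only if no completion can make b's final cost beat a's."""
    return (a.node == b.node
            and a.reduced_cost <= b.reduced_cost
            and resource_ok(a.fly_hours, b.fly_hours, avg_hours)
            and resource_ok(a.overnights, b.overnights, avg_overnights))

def final_cost(lab, avg_hours, avg_overnights, w_h=1.0, w_o=1.0):
    """Reduced cost plus quadratic deviations from the expected averages."""
    return (lab.reduced_cost
            + w_h * (lab.fly_hours - avg_hours) ** 2
            + w_o * (lab.overnights - avg_overnights) ** 2)

a = Label(node=3, reduced_cost=-12.0, fly_hours=85.0, overnights=4)
b = Label(node=3, reduced_cost=-10.0, fly_hours=90.0, overnights=4)
print(dominates(a, b, avg_hours=80.0, avg_overnights=4))  # True: prune b
```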
Procedia PDF Downloads 146
172 Quantitative Wide-Field Swept-Source Optical Coherence Tomography Angiography and Visual Outcomes in Retinal Artery Occlusion
Authors: Yifan Lu, Ying Cui, Ying Zhu, Edward S. Lu, Rebecca Zeng, Rohan Bajaj, Raviv Katz, Rongrong Le, Jay C. Wang, John B. Miller
Abstract:
Purpose: Retinal artery occlusion (RAO) is an ophthalmic emergency that can lead to poor visual outcomes and is associated with an increased risk of cerebral stroke and cardiovascular events. Fluorescein angiography (FA) is the traditional diagnostic tool for RAO; however, wide-field swept-source optical coherence tomography angiography (WF SS-OCTA), a nascent imaging technology, can provide quick and non-invasive angiographic information with a wide field of view. In this study, we looked for associations between OCT-A vascular metrics and visual acuity in patients with a prior diagnosis of RAO. Methods: Patients with diagnoses of central retinal artery occlusion (CRAO) or branch retinal artery occlusion (BRAO) were included. A 6 mm x 6 mm Angio and a 15 mm x 15 mm AngioPlex Montage OCT-A image were obtained for both eyes of each patient using the Zeiss Plex Elite 9000 WF SS-OCTA device. Each 6 mm x 6 mm image was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) subfields, and the average measurement of the central foveal subfield, inner ring, and outer ring was calculated for each parameter. Non-perfusion area (NPA) was manually measured using the 15 mm x 15 mm Montage images. A linear regression model was utilized to identify correlations between the imaging metrics and visual acuity; a P-value of less than 0.05 was considered statistically significant. Results: Twenty-five subjects were included in the study. For RAO eyes, there was a statistically significant negative correlation between vision and retinal thickness as well as superficial capillary plexus vessel density (SCP VD). A negative correlation was found between vision and deep capillary plexus vessel density (DCP VD) without statistical significance. There was a positive correlation between vision and choroidal thickness as well as choroidal volume, without statistical significance. No statistically significant correlation was found between vision and the above metrics in contralateral eyes. For NPA measurements, no significant correlation was found between vision and NPA. Conclusions: This is, to the best of our knowledge, the first study to investigate the utility of WF SS-OCTA in RAO and to demonstrate correlations between various retinal vascular imaging metrics and visual outcomes. Further investigations should explore the associations between these imaging findings and cardiovascular risk, as RAO patients are at elevated risk for symptomatic stroke. The results of this study provide a basis for understanding the structural changes involved in visual outcomes in RAO. Furthermore, they may help guide the management of RAO and the prevention of cerebral stroke and cardiovascular events in patients with RAO.
Keywords: OCTA, swept-source OCT, retinal artery occlusion, Zeiss Plex Elite
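A minimal sketch of the association test follows: OLS of visual acuity on one OCTA metric at a time, flagging P < 0.05. The column names, file name, and the choice of logMAR as the acuity coding are assumptions (note that with logMAR, higher values mean worse vision, so a "negative correlation with vision" corresponds to a positive slope here).

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rao_octa.csv")   # assumed: one row per RAO eye
for metric in ["retinal_thickness", "scp_vd", "dcp_vd", "choroidal_thickness"]:
    fit = smf.ols(f"logmar ~ {metric}", data=df).fit()
    beta, p = fit.params[metric], fit.pvalues[metric]
    flag = "*" if p < 0.05 else ""
    print(f"{metric}: slope={beta:.4f}, p={p:.3f} {flag}")
```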
Procedia PDF Downloads 139
171 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider class separability in the original space, i.e., linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so as to achieve the smallest within-class separability and the largest between-class separability simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. The optimal bandwidths also reveal the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights, where the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of these reciprocals gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training set; all non-background samples were used to form the testing set. A support vector machine was applied to classify the testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies achieved by the proposed method, the F-score, and HSIC were 0.8795, 0.8795, and 0.87404, respectively; however, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve classification performance but also reduce the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
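The core ingredients (a generalized RBF kernel with one bandwidth per band, a within/between-class separability score, and band ranking by reciprocal bandwidth) can be sketched as below. The genetic algorithm of the paper is stood in for by SciPy's differential evolution, and the exact separability criterion here is illustrative rather than the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution

def gen_rbf(X1, X2, bandwidths):
    """k(x,z) = exp(-sum_d (x_d - z_d)^2 / (2 * sigma_d^2))."""
    diff = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * np.sum(diff ** 2 / bandwidths ** 2, axis=2))

def neg_separability(log_sigma, X, y):
    K = gen_rbf(X, X, np.exp(log_sigma))
    same = (y[:, None] == y[None, :])
    within = K[same].mean()      # want large: similar within a class
    between = K[~same].mean()    # want small: dissimilar between classes
    return -(within - between)   # minimized by the optimizer

X = np.random.rand(60, 8)        # toy stand-in for pixels x bands
y = np.repeat([0, 1, 2], 20)

res = differential_evolution(neg_separability, bounds=[(-3, 3)] * X.shape[1],
                             args=(X, y), maxiter=30, seed=0, polish=False)
weights = 1.0 / np.exp(res.x)    # small bandwidth -> important band
print("band ranking (most to least important):", np.argsort(weights)[::-1])
```

Nested feature subsets for the SVM are then read off the top of this ranking, mirroring the 10/20/50/110-feature experiments above.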
Procedia PDF Downloads 265
170 TRAC: A Software Based New Track Circuit for Traffic Regulation
Authors: Jérôme de Reffye, Marc Antoni
Abstract:
Following the development of the ERTMS system, we think it is worthwhile to develop another software-based track circuit system that would fit secondary railway lines, with easy implementation and low sensitivity to rail-wheel impedance variations. We call this track circuit 'Track Railway by Automatic Circuits.' To be implemented internationally, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent of the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent of the components used in Germany called counting axles (in French, 'compteur d'essieux'). This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies related to the track sections forms a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation can be precisely calculated on the basis of the signal-to-noise ratio (SNR). The SNR is a function of the level of traction current conducted by the rails, which is why we developed a very powerful algorithm to reject noise and jamming and obtain an SNR compatible with the precision required for the track circuit and the SIL 4 level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train localization in space is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS system, so the track circuit is naturally protected against radio-type jammers, and after the calibration operation the track circuit is autonomous; ii) a mathematical topology adapted to train localization, following the train through linear time filtering of the received signal; track sections are defined numerically and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit is already at Level 3 of the ERTMS system, and it will be much cheaper to implement and operate. Traffic regulation is based on variable-length track sections: as traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling
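A toy illustration of the section-separation idea follows: each track section emits a sinusoid at its own frequency, the set being orthogonal over the analysis window, and detection is a correlation (matched filter) whose margin over the noise floor governs the failure probability. The frequencies, window length, and noise level are invented for illustration; the real system's filtering is far more sophisticated.

```python
import numpy as np

fs, T = 8000.0, 0.5                           # sample rate (Hz), window (s)
t = np.arange(0, T, 1 / fs)
section_freqs = [120.0, 160.0, 200.0, 240.0]  # integer cycles over 0.5 s,
                                              # hence orthogonal on the window

def section_scores(signal):
    """Correlate the received signal against each section's reference tone."""
    return [abs(np.dot(signal, np.sin(2 * np.pi * f * t))) / len(t)
            for f in section_freqs]

# Train occupies section 2: its tone plus traction-current-like noise
rng = np.random.default_rng(1)
rx = np.sin(2 * np.pi * section_freqs[2] * t) + 0.8 * rng.standard_normal(t.size)

scores = section_scores(rx)
print("scores:", [round(s, 3) for s in scores])
print("detected section:", int(np.argmax(scores)))
```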
Procedia PDF Downloads 331
169 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane
Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu
Abstract:
In the last decade, the attention of many researchers has focused on the synthesis of innovative 'intelligent' copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal and other important specific properties of these materials. Functionalization of polymers during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. This work presents a comparative study of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), obtained by radical polymerization in dioxane using 2,2′-azobis(2-methylpropionitrile) as the free-radical initiator. The comonomers can generate special effects, for example network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film-forming ability, and temperature and pH sensitivity. Maleic anhydride (MA), together with its isostructural analog itaconic anhydride (ITA), is a polyfunctional monomer widely used in the synthesis of reactive macromolecules with linear, hyperbranched and self-assembled structures for preparing high-performance engineering, bioengineering and nanoengineering materials. The incorporation of spiroacetal groups into polymer structures improves solubility and adhesive properties, induces good oxidative and thermal stability, and yields good fibers or films with good flexibility and tensile strength. In addition, the spiroacetal rings induce interactions at the ether oxygen, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness. The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements, and dielectric spectroscopy, with the aim of elucidating the heating behavior, the solution viscosity as a function of shear rate and temperature, and the relaxation processes and motion of the functional groups present in the side chain around the main chain or around bonds of the side chain. Acknowledgments: This work was financially supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014, 'Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair' (MAGBIOTISS).
Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane); DSC; oscillatory and rotational rheological analysis; dielectric spectroscopy
Procedia PDF Downloads 227