54 Predicting Susceptibility to Coronary Artery Disease using Single Nucleotide Polymorphisms with a Large-Scale Data Extraction from PubMed and Validation in an Asian Population Subset
Authors: K. H. Reeta, Bhavana Prasher, Mitali Mukerji, Dhwani Dholakia, Sangeeta Khanna, Archana Vats, Shivam Pandey, Sandeep Seth, Subir Kumar Maulik
Abstract:
Introduction: Research has demonstrated a connection between coronary artery disease (CAD) and genetics. We performed deep literature mining, using both bioinformatics and manual curation, to identify polymorphisms conferring susceptibility to coronary artery disease. Further, the study sought to validate these findings in an Asian population. Methodology: In the first phase, we used an automated pipeline that organizes and presents structured information on SNPs, populations, and diseases. The information was obtained by applying Natural Language Processing (NLP) techniques to approximately 28 million PubMed abstracts. To accomplish this, we utilized Python scripts to extract and curate disease-related data, filter out false positives, and categorize the data into 24 hierarchical groups using Named Entity Recognition (NER) algorithms. From this extensive search, a total of 466 unique PubMed Identifiers (PMIDs) and 694 Single Nucleotide Polymorphisms (SNPs) related to CAD were identified. To refine the selection, a thorough manual examination of all the studies was carried out: SNPs that demonstrated susceptibility to CAD and exhibited a positive Odds Ratio (OR) were selected, yielding a final pool of 324 SNPs. The next phase involved validating the identified SNPs in DNA samples of 96 CAD patients and 37 healthy controls from an Indian population using a Global Screening Array. Results: Of the 324 SNPs, only 108 were expressed; of these, 4 SNPs showed a significant difference in minor allele frequency between cases and controls. These were rs187238 of the IL-18 gene, rs731236 of the VDR gene, rs11556218 of the IL16 gene, and rs5882 of the CETP gene. Prior studies have reported the association of these SNPs with various pathways, such as endothelial damage, susceptibility conferred by vitamin D receptor (VDR) polymorphisms, and reduction of HDL-cholesterol levels, ultimately leading to the development of CAD.
Among these, only rs731236 had previously been studied in the Indian population, and only in the context of diabetes and vitamin D deficiency. For the first time, these SNPs are reported to be associated with CAD in the Indian population. Conclusion: This pool of 324 SNPs is a unique resource that can help uncover risk associations in CAD. Here, we validated it in an Indian population. Further validation in different populations may offer valuable insights, contribute to the development of a screening tool, and help enable primary prevention strategies targeted at vulnerable populations.
Keywords: coronary artery disease, single nucleotide polymorphism, susceptible SNP, bioinformatics
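The rsID extraction step of the pipeline described above can be sketched with a regular expression over abstract text. This is an illustrative stand-in, not the authors' code (their pipeline applied NER to ~28 million abstracts); the example abstract text is hypothetical:

```python
import re

def extract_snps(abstract: str) -> list[str]:
    """Find dbSNP identifiers (rs numbers) mentioned in an abstract.

    rsIDs follow the pattern 'rs' + digits; duplicates are removed
    while preserving first-seen order.
    """
    seen = {}
    for match in re.findall(r"\brs\d+\b", abstract):
        seen.setdefault(match, None)
    return list(seen)

# Hypothetical abstract text for illustration
example = ("Carriers of rs187238 (IL-18) and rs731236 (VDR) showed higher "
           "CAD risk; rs187238 was replicated in a second cohort.")
snps = extract_snps(example)  # ['rs187238', 'rs731236']
```

A real pipeline would additionally resolve gene context and odds ratios before admitting an SNP to the curated pool.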
53 Anaerobic Digestion of Spent Wash through Biomass Development for Obtaining Biogas
Authors: Sachin B. Patil, Narendra M. Kanhe
Abstract:
A typical cane-molasses-based distillery generates 15 L of wastewater per litre of alcohol produced. Distillery waste, with a COD of over 100,000 mg/L and a BOD of over 30,000 mg/L, ranks high among industrial pollutants in both magnitude and strength. Treatment and safe disposal of this waste has long been a challenging task. The high strength of the wastewater renders aerobic treatment very expensive, and physico-chemical processes have met with little success. Thermophilic anaerobic treatment of distillery waste may provide a high degree of treatment and better recovery of biogas. It may prove more feasible in most parts of a tropical country like India, where the temperature is suitable for thermophilic micro-organisms. Researchers have revealed that, under thermophilic conditions, a higher digestion rate can be achieved due to the increased destruction rate of organic matter and pathogens. The literature review reveals that a variety of anaerobic reactors, including the anaerobic lagoon, conventional digester, anaerobic filter, two-stage fixed-film reactors, and sludge-bed and granular-bed reactors, have been studied, but few attempts have been made to evaluate the usefulness of thermophilic anaerobic treatment for distillery waste. The present study was carried out to assess the feasibility of thermophilic anaerobic digestion and to facilitate the design of a full-scale reactor. A pilot-scale anaerobic fixed-film fixed-bed reactor (AFFFB) of capacity 25 m³ was designed, fabricated, installed, and commissioned for thermophilic (55-65°C) anaerobic digestion at a constant pH of 6.5-7.5, because these temperature and pH ranges are considered optimum for biogas recovery from distillery wastewater. Under these conditions, the working of the reactor was studied for different hydraulic retention times (HRT) (0.25 days to 12 days) and variable organic loading rates (361.46 to 7.96 kg COD/m³·d).
Parameters such as flow rate and temperature, chemical parameters such as pH and chemical oxygen demand (COD), biogas quantity, and biogas composition were regularly monitored. It was observed that, with an increase in OLR, biogas production increased but the specific biogas yield decreased. Similarly, with an increase in HRT, biogas production decreased but the specific biogas yield increased. This may be due to the predominance of acid producers over methane producers at higher substrate loading rates. From the present investigation, it can be concluded that, under thermophilic conditions, the highest COD removal percentage was obtained at an HRT of 8 days, beyond which it tends to decrease from 8 to 12 days HRT. There is little difference between the COD removal efficiency at 8 days HRT (74.03%) and at 5 days HRT (78.06%); therefore, it would not be feasible to increase the reactor size by 1.5 times for a mere 4 percent gain in efficiency. Hence, 5 days HRT is considered optimum, at which the biogas yield was 98 m³/day and the specific biogas yield was 0.385 m³ CH₄/kg CODr.
Keywords: spent wash, anaerobic digestion, biomass, biogas
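The loading and yield quantities reported above follow from simple definitions relating COD, flow, reactor volume, and HRT. A minimal sketch (the 25 m³ volume matches the abstract; the feed COD and HRT below are illustrative examples, not the study's full dataset):

```python
def organic_loading_rate(cod_mg_l, flow_m3_d, volume_m3):
    """OLR in kg COD per m^3 of reactor per day (1 mg/L = 1 g/m^3)."""
    return (cod_mg_l / 1000.0) * flow_m3_d / volume_m3

def flow_from_hrt(volume_m3, hrt_d):
    """Feed flow implied by a hydraulic retention time: Q = V / HRT."""
    return volume_m3 / hrt_d

def specific_biogas_yield(biogas_m3_d, cod_removed_kg_d):
    """m^3 of biogas produced per kg of COD removed."""
    return biogas_m3_d / cod_removed_kg_d

# Illustrative: 25 m^3 reactor fed 100,000 mg/L COD waste at 5 days HRT
q = flow_from_hrt(25.0, 5.0)                   # 5 m^3/d
olr = organic_loading_rate(100000.0, q, 25.0)  # 20 kg COD/m^3.d
```

This also shows why OLR and HRT move in opposite directions at fixed feed strength: halving the HRT doubles the flow and hence doubles the OLR.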
52 Trophic Variations in Uptake and Assimilation of Cadmium, Manganese and Zinc: An Estuarine Food-Chain Radiotracer Experiment
Authors: K. O’Mara, T. Cresswell
Abstract:
Nearly half of the world's population lives near the coast, and as a result, estuaries and coastal bays in populated or industrialized areas often receive metal pollution. Heavy metals have a chemical affinity for sediment particles, can be stored in estuarine sediments, and can become biologically available under changing conditions. Organisms inhabiting estuaries can be exposed to metals from a variety of sources, including metals dissolved in water, bound to sediment, or within contaminated prey. Metal uptake and assimilation responses can vary even between species that are biologically similar, making pollution effects difficult to predict. A multi-trophic-level experiment representing a common eastern Australian estuarine food chain was used to study the sources of Cd, Mn and Zn uptake and assimilation in organisms occupying several trophic levels. Sand cockles (Katelysia scalarina), school prawns (Metapenaeus macleayi) and sand whiting (Sillago ciliata) were exposed to radiolabelled seawater, suspended sediment and food. Three pulse-chase trials on filter-feeding sand cockles were performed using radiolabelled phytoplankton (Tetraselmis sp.), benthic microalgae (Entomoneis sp.) and suspended sediment. Benthic microalgae had lower metal uptake than phytoplankton during labelling but higher cockle assimilation efficiencies (Cd = 51%, Mn = 42%, Zn = 63%) than both phytoplankton (Cd = 21%, Mn = 32%, Zn = 33%) and suspended sediment (except for Mn: Cd = 38%, Mn = 42%, Zn = 53%). Sand cockles were also sensitive to uptake of Cd, Mn and Zn dissolved in seawater. Uptake of these metals from the dissolved phase was negligible in prawns and fish, with prawns only accumulating metals during moulting, which were then lost with subsequent moulting in the depuration phase. Diet appears to be the main source of metal assimilation in school prawns, with 65%, 54% and 58% assimilation efficiencies for Cd, Mn and Zn, respectively.
Whiting fed contaminated prawns were able to exclude the majority of the metal activity through egestion, with only 10%, 23% and 11% assimilation efficiencies for Cd, Mn and Zn, respectively. The findings of this study support previous studies that found diet to be the dominant accumulation source for organisms at higher trophic levels. These results show that assimilation efficiencies can vary depending on the source of exposure: sand cockles assimilated more Cd, Mn and Zn from the benthic diatom than from phytoplankton, and assimilation was higher in sand whiting fed prawns than in those fed artificial pellets. The sensitivity of sand cockles to metal uptake and assimilation from a variety of sources poses concerns for metal availability to predators ingesting the clam tissue, including humans. The high tolerance of sand whiting to these metals is reflected in their widespread presence in eastern Australian estuaries, including contaminated estuaries such as Botany Bay and Port Jackson.
Keywords: cadmium, food chain, metal, manganese, trophic, zinc
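Assimilation efficiency in a pulse-chase radiotracer trial is simply the fraction of ingested activity still retained once gut clearance is complete. A minimal sketch (function name and the activity values are illustrative, not the study's measurements):

```python
def assimilation_efficiency(ingested_bq, retained_bq):
    """AE (%) = activity retained after gut clearance / activity ingested.

    Both activities are in becquerels (Bq); the retained value is taken
    at the end of the depuration phase, after egestion is complete.
    """
    if ingested_bq <= 0:
        raise ValueError("ingested activity must be positive")
    return 100.0 * retained_bq / ingested_bq

# e.g. a cockle that ingested 200 Bq of labelled diatoms and retained
# 102 Bq after depuration assimilated 51% of the tracer
ae_cd = assimilation_efficiency(200.0, 102.0)
```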
51 Citation Analysis of New Zealand Court Decisions
Authors: Tobias Milz, L. Macpherson, Varvara Vetrova
Abstract:
The law is a fundamental pillar of human societies, as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as "legal tech" or "law tech") over the last decade. A sub-discipline of particular appeal is assisted legal research. Supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, it provided an effective means of discovering precedent cases. Nowadays, computer-assisted network analysis tools allow new and more efficient ways to reveal the "hidden" information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text.
The data were then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can restrict the network algorithms and metrics to specific courts in order to filter the results to the area of law of interest.
Keywords: case citation network, citation analysis, network analysis, Neo4j
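The kind of query answered here by Neo4j's graph algorithms can be illustrated with a plain power-iteration PageRank over a toy citation edge list. This is a generic sketch, not the authors' implementation, and the case names are invented:

```python
from collections import defaultdict

def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank over (citing_case, cited_case) pairs.

    Dangling nodes (decisions that cite nothing) spread their rank
    evenly, so the ranks always sum to 1.
    """
    nodes, out = set(), defaultdict(list)
    for src, dst in edges:
        nodes.update((src, dst))
        out[src].append(dst)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            targets = out[v]
            if targets:
                share = damping * rank[v] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:  # dangling node
                share = damping * rank[v] / n
                for t in nodes:
                    nxt[t] += share
        rank = nxt
    return rank

# Toy network: cases A and B both cite C; C cites D
ranks = pagerank([("A", "C"), ("B", "C"), ("C", "D")])
```

In the real system the same computation runs inside Neo4j over the 300,000-decision graph; the heavily cited node (C here) surfaces as the influential decision.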
50 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies
Authors: Manel Hammami, Gabriele Grandi
Abstract:
In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems due to their known advantages over conventional H-bridge pulse-width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), and higher output voltages, among other benefits. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC) and cascaded H-bridge (CHB) converters. In both the NPC and FC configurations, the number of components increases drastically with the number of levels, which leads to a complex control strategy, high volume, and cost. In contrast, increasing the number of levels in the cascaded H-bridge configuration is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric multilevel inverter topologies have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration with interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded with a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six) and just one capacitor (as in the FCAH). The CAH is becoming popular due to its simple, modular and reliable structure, and it can be considered a retrofit that can be added in series to an existing H-bridge configuration in order to double the output voltage levels.
In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverter are considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the analysis of the current and voltage ripple for the DC source (i.e., the PV system). The peak-to-peak current and voltage ripple amplitudes are analytically calculated over the fundamental period as functions of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.
Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter
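The low-frequency part of the DC-link ripple analysis can be illustrated numerically: at unity power factor a single-phase inverter's output power pulsates at twice the fundamental, and the pulsating component charges and discharges the DC capacitor. The sketch below assumes a stiff DC voltage when computing the capacitor current; it is a generic textbook estimate, not the paper's analytical derivation:

```python
import math

def dc_link_ripple_pp(p_avg_w, v_dc, c_farad, f_hz=50.0, steps=20000):
    """Peak-to-peak low-frequency DC-link voltage ripple.

    Output power at unity PF: p(t) = P * (1 - cos(2*w*t)); the
    pulsating part is buffered by the capacitor, so the capacitor
    current is i_c(t) = -P*cos(2*w*t)/v_dc, and the voltage ripple
    is its integral divided by C (forward-Euler integration over
    one fundamental period).
    """
    w = 2.0 * math.pi * f_hz
    dt = (1.0 / f_hz) / steps
    v, vmin, vmax, t = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        i_c = -p_avg_w * math.cos(2.0 * w * t) / v_dc
        v += i_c * dt / c_farad
        vmin, vmax = min(vmin, v), max(vmax, v)
        t += dt
    return vmax - vmin

# 1 kW at 400 V DC with a 1 mF capacitor, 50 Hz grid
ripple = dc_link_ripple_pp(1000.0, 400.0, 1e-3)
```

The closed-form counterpart is delta_v_pp = P / (w * C * v_dc), which the numerical integration reproduces; the paper additionally resolves the switching-frequency component and the dependence on modulation index.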
49 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels
Authors: Shin Woo Kim, Eui Ju Lee
Abstract:
The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, especially, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase octane number and reduce soot emissions. However, wide application of biofuel is still limited by a lack of detailed combustion properties, such as the auto-ignition temperature, and of data on pollutant emissions, such as NOx and soot, which bear mainly on vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For the auto-ignition temperature and NOx emission, a numerical simulation of a well-stirred reactor (WSR) was performed to represent a homogeneous gasoline engine and to clarify the effect of ethanol addition to gasoline fuel. Also, the response surface method (RSM) was introduced as a design of experiments (DOE), which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables, i.e., ethanol mole fraction, equivalence ratio and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but the NOx yield decreases with increasing ethanol mole fraction. This implies that bioethanol-blended gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content, which results from incomplete combustion and hence calls for adjusting the combustion itself rather than relying on an after-treatment system. RSM with the three independent variables predicts the auto-ignition temperature accurately.
However, NOx emission showed a large difference between the calculated values and the values predicted using conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the steep variation of the dependent variable, the common logarithm of NOx emission was taken and the RSM analysis was repeated. NOx emission predicted through the logarithm transformation is in fairly good agreement with the experimental results. For a more tangible understanding of gasoline/ethanol fuel and its pollutant emissions, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as fire sources in laboratory-scale experiments. Three measurement methods were introduced to clarify the pollutant emissions, i.e., various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). Soot yield measured by gravimetric sampling decreased dramatically as ethanol was added, but NOx emission was almost comparable regardless of ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was observed clearly in the soot from gasoline with higher ethanol content, whereas the soot appeared more mature for the undiluted gasoline fuel.
Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)
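The benefit of the logarithm transformation can be reproduced with synthetic data: a second-order polynomial fitted directly to a steeply (roughly exponentially) rising response misses badly, while fitting log10 of the response and back-transforming recovers it. A sketch with numpy; the response values are purely illustrative, not the paper's NOx data:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)       # e.g. a normalized design variable
nox = 10.0 ** (1.0 + 3.0 * x)       # steep synthetic response, 3 decades

def rel_rms(y, pred):
    """Root-mean-square relative error of a fit."""
    return float(np.sqrt(np.mean(((pred - y) / y) ** 2)))

c_direct = np.polyfit(x, nox, 2)            # conventional 2nd-order RSM
c_log = np.polyfit(x, np.log10(nox), 2)     # RSM fitted to log10(NOx)

err_direct = rel_rms(nox, np.polyval(c_direct, x))
err_log = rel_rms(nox, 10.0 ** np.polyval(c_log, x))  # back-transformed
```

Because the synthetic response is exactly exponential, the log-domain quadratic fits it essentially perfectly, while the direct quadratic leaves large relative errors at the low end of the range, mirroring the behavior reported above.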
48 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of and controversy over the definition of music genres across nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficients (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use alongside them in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, whereas the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, while NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
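The factorization at the core of the feature extraction can be sketched with the classic multiplicative updates for the Frobenius loss. This is a generic stand-in, not the authors' implementation: in their pipeline the basis matrix W is trained per genre, and the weighting values (columns of H) become the 10-dimensional feature vectors fed to the SVM.

```python
import numpy as np

def nmf(V, k, iters=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (m x n) as W @ H, with W (m x k)
    and H (k x n) non-negative, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update weights
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

# Synthetic exactly rank-2 non-negative data for a quick sanity check
rng = np.random.default_rng(1)
V = rng.random((6, 2)) @ rng.random((2, 8))
W, H = nmf(V, 2)
```

The multiplicative form guarantees that W and H stay non-negative at every iteration, which is what makes the resulting weights interpretable as additive (parts-based) contributions.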
47 Application of Typha domingensis Pers. in Artificial Floating for Sewage Treatment
Authors: Tatiane Benvenuti, Fernando Hamerski, Alexandre Giacobbo, Andrea M. Bernardes, Marco A. S. Rodrigues
Abstract:
Population growth in urban areas has caused damage to the environment, a consequence of the uncontrolled dumping of domestic and industrial wastewater. The capacity of some plants to purify domestic and agricultural wastewater has been demonstrated by several studies. Since natural wetlands have the ability to transform, retain and remove nutrients, constructed wetlands have been used for wastewater treatment. They are widely recognized as an economical, efficient and environmentally acceptable means of treating many different types of wastewater. The species T. domingensis Pers. has shown good performance and low deployment cost in extracting, detoxifying and sequestering pollutants. Constructed floating wetlands (CFWs) consist of emergent vegetation established upon a buoyant structure floating on surface waters. The upper parts of the vegetation grow and remain primarily above the water level, while the roots extend down into the water column, developing an extensive root system below the water level. Thus, the vegetation grows hydroponically, taking up nutrients directly from the water column. Biofilm attaches to the roots and rhizomes, and as physical and biochemical processes take place, the system functions as a natural filter. The aim of this study is to assess the application of macrophytes in artificial floating systems for the treatment of domestic sewage in southern Brazil. The T. domingensis Pers. plants were placed in a full-scale flotation system (a polymer structure) in a sewage treatment plant. The sewage feed rate was 67.4 ± 8.0 m³.d⁻¹, and the hydraulic retention time was 11.5 ± 1.3 d. This CFW treats the sewage generated by 600 inhabitants, which corresponds to 12% of the population served by this municipal treatment plant.
During 12 months, samples were collected every two weeks in order to evaluate parameters such as chemical oxygen demand (COD), biochemical oxygen demand in 5 days (BOD5), total Kjeldahl nitrogen (TKN), total phosphorus, total solids, and metals. The average removal of organic matter was around 55% for both COD and BOD5. Among the nutrients, TKN was reduced by 45.9%, similar to the total phosphorus removal, while total solids were reduced by 33%. Among the metals, aluminum, copper, and cadmium, although present at low concentrations, showed the highest percentage reductions: 82.7%, 74.4%, and 68.8%, respectively. Chromium, iron, and manganese removals reached values around 40-55%. The use of T. domingensis Pers. in artificial floating systems for sewage treatment is an effective and innovative alternative for Brazilian sewage treatment systems. The evaluation of additional parameters in the treatment system may provide useful information for improving removal efficiency and increasing the quality of the receiving water bodies.
Keywords: constructed wetland, floating system, sewage treatment, Typha domingensis Pers.
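The removal percentages quoted above follow from the standard influent/effluent definition, and the reported feed rate and HRT fix the effective wetland volume. A minimal sketch (the concentrations below are illustrative, not measured values; the flow and HRT match the abstract):

```python
def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percent removal of a parameter (COD, BOD5, TKN, ...) across the stage."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

def wetland_volume(flow_m3_d, hrt_d):
    """Effective volume implied by the feed rate and hydraulic retention time."""
    return flow_m3_d * hrt_d

# e.g. COD falling from 200 to 90 mg/L corresponds to 55% removal;
# 67.4 m^3/d at 11.5 d HRT implies roughly 775 m^3 of effective volume
cod_removal = removal_efficiency(200.0, 90.0)
volume = wetland_volume(67.4, 11.5)
```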
46 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency in reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention must also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied.
To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the online Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water's edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
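The idea of augmenting a code with derivative computations can be shown in miniature with forward-mode dual numbers. Tapenade works differently (by source transformation on the real flow model), and the toy head model, data, and step size below are all illustrative; the sketch only demonstrates how a least-squares objective and its exact gradient are obtained in one pass and used for identification:

```python
class Dual:
    """Minimal forward-mode automatic differentiation: carry a value
    and its derivative together through every arithmetic operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def least_squares_gradient(model, theta, observations):
    """J(theta) = sum of squared residuals, and dJ/dtheta, in one pass."""
    g = Dual(theta, 1.0)            # seed the derivative
    J = Dual(0.0)
    for t, obs in observations:
        r = model(g, t) - obs
        J = J + r * r
    return J.val, J.der

# Toy identification: recover theta in head(theta, t) = theta * t from
# synthetic observations generated with theta = 2 (illustrative model).
head = lambda p, t: p * t
data = [(t, 2.0 * t) for t in (0.5, 1.0, 1.5)]
theta = 0.0
for _ in range(200):
    _, dJ = least_squares_gradient(head, theta, data)
    theta -= 0.05 * dJ              # simple gradient descent
```

The same structure scales to the real problem: the "model" becomes the MHFEM Richards solver, theta the van Genuchten-Mualem parameters, and AD supplies dJ/dtheta without hand-coded adjoints.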
45 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs
Authors: Regina A. Tayong, Reza Barati
Abstract:
A stable, fast and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration on the yield stress of the filter cake and on broken-gel viscosity, and the influence of varying polymer concentration/yield stress along the fracture face, fracture conductivity, fracture length, capillary pressure changes and formation damage on fracturing fluid cleanup in tight gas reservoirs. This model has been validated against field data reported in the literature for the same reservoir. A 2D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the clean-up model by distributing 200 bbl of water around the fracture. A 2D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed are reported in sufficient detail to permit easy replication of the results. Increasing capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, through its effect on the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant.
Several correlations have been developed relating pressure distribution and polymer concentration to distance along the fracture face, and relating average polymer concentration to injection time. The gradient of the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration. The rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations, and (ii) simulate the effect of increasing breaker activity on yield stress and broken-gel viscosity, together with the effect of (i) and (ii) on cumulative gas production, within reasonable computational time.
Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation
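Two of the building blocks named above reduce to one-line relations: the yield-power-law (Herschel-Bulkley) stress model and the polymer material balance in its simplest form, where leak-off removes carrier fluid while the polymer stays in the fracture and concentrates. A sketch with illustrative parameter values, not the paper's correlations:

```python
def yield_power_law_stress(tau0_pa, k_pas_n, n, shear_rate_s):
    """Herschel-Bulkley (yield-power-law) shear stress:
    tau = tau0 + K * gamma_dot**n."""
    return tau0_pa + k_pas_n * shear_rate_s ** n

def avg_polymer_concentration(c_injected, v_injected, v_fracture):
    """Simplified material balance: all injected polymer is retained in
    the fracture, so the average concentration rises by the ratio of
    injected volume to the volume remaining in the fracture."""
    return c_injected * v_injected / v_fracture

# Illustrative: tau0 = 5 Pa, K = 2 Pa.s^n, n = 0.5, at 4 1/s shear
tau = yield_power_law_stress(5.0, 2.0, 0.5, 4.0)        # 9.0 Pa

# A 200 lb/Mgal guar fluid, 1000 bbl injected, 200 bbl left in fracture
c_avg = avg_polymer_concentration(200.0, 1000.0, 200.0)  # 1000 lb/Mgal
```

The simulator described above refines this by distributing the concentration (and hence the yield stress) along the fracture face rather than using a single average.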
Procedia PDF Downloads 130
44 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, it is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the Physionet database, made available in 2016 for researchers to develop the best methods for detecting normal versus abnormal signals. The data come from both genders, recording times vary from several seconds to several minutes, and every record is labeled normal or abnormal. Because of the limited temporal and positional accuracy of the ECG signal, and because in some diseases the signal resembles a normal one, the heart rate variability (HRV) signal was used instead. Measuring and analyzing heart rate variability over time to evaluate cardiac activity and to differentiate types of heart failure from one another is of interest to experts. In the preprocessing stage, after noise cancellation with an adaptive Kalman filter and R-wave extraction with the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to the statistical characteristics of the signal, a return map was constructed and nonlinear features of the HRV signal were extracted, reflecting the nonlinear nature of the signal. Finally, artificial neural networks, widely used in ECG signal processing, were applied to the distinctive features to classify normal signals versus abnormal ones. 
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. Simulation results in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear features improved performance in classifying normal versus patient signals. Today, research aims to quantitatively analyze the linear and non-linear, or deterministic and stochastic, nature of the heart rate variability signal, because it has been shown that the extent of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, given its limited temporal accuracy and the fact that some of its information is hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can serve as a complementary system in treatment centers. Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
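The feature-extraction and evaluation steps above can be sketched on synthetic data. This is a minimal Python illustration: the rank-based AUC stands in for the authors' MATLAB MLP/SVM evaluation, and the feature set and data are hypothetical:

```python
import numpy as np

# Hedged sketch: linear (SDNN, RMSSD) and nonlinear (Poincare SD1) HRV
# features computed from R-R intervals, scored with the area under the
# ROC curve. All data and parameters are illustrative stand-ins, not
# the authors' exact configuration.

def hrv_features(rr):
    diff = np.diff(rr)
    sdnn = np.std(rr)                    # linear: overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))  # linear: short-term variability
    sd1 = np.std(diff) / np.sqrt(2)      # nonlinear: Poincare plot width
    return sdnn, rmssd, sd1

def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: probability that a positive outscores a negative."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)

rng = np.random.default_rng(0)
# assumption for the demo: abnormal records show reduced beat-to-beat variability
normal = [hrv_features(0.8 + 0.05 * rng.standard_normal(100))[2] for _ in range(50)]
abnormal = [hrv_features(0.8 + 0.01 * rng.standard_normal(100))[2] for _ in range(50)]
auc = roc_auc(normal, abnormal)
print(round(auc, 3))
```

With well-separated SD1 distributions the AUC approaches 1; on real data the reported 0.893-0.947 range reflects the overlap between classes.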
Procedia PDF Downloads 262
43 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT
Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi
Abstract:
Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to determine whether there is a dosimetric difference between the two energies, and to see how plan efficiency can be increased and plan complexity reduced. A further aim is to introduce a planning method in our department for treating prostate cancer with high-energy photons, without increasing patient toxicity and while fulfilling all dosimetric constraints for organs at risk (OAR). We then evaluate the 95% target coverage (PTV95), V5%, V2%, V1%, the low-dose volumes for OAR (V1Gy, V2Gy, V5Gy), the monitor units (beam-on time), and estimate the homogeneity index (HI), conformity index (CI) and gradient index (GI) for each treatment plan. Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer using the CT planning images acquired for radiotherapy purposes. Each plan contains two or three complete arcs with two or three different collimator angle sets. The maximum dose rate available is 1400 MU/min for 6FFF and 2400 MU/min for 10FFF. Therefore, to avoid changing the gantry speed during rotation, we tend to use a third arc in the 6FFF plan to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) involves a margin of 5 mm; a 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is to deliver 35 Gy in five fractions to the PTV and to apply OAR constraints derived from those reported in the references. 
Results: In terms of CI = 0.99, HI = 0.7, and GI = 4.1, the two energies 6FFF and 10FFF were identical, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF) and the total delivery time was 124 s for 6FFF vs. 61 s for 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage and mean PTV doses; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected and were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID), in Gy·L, were recorded for all OAR and were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam has a lower treatment time and fewer delivered MUs; 10FFF also showed lower integral and mean doses to organs at risk. In this study, we suggest using a 10FFF beam for SBRT prostate treatment, which has the advantage of lowering the treatment time and thus reducing plan complexity with respect to 6FFF beams. Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer
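The plan-quality indices evaluated above can be sketched as follows. This is a minimal Python illustration using one common set of definitions (the Paddick conformity index, the gradient index, and a D5/D95-style homogeneity index); index definitions vary across the literature, and the voxel dose grid below is synthetic, not the study's data:

```python
import numpy as np

# Hedged sketch of plan-quality indices on a synthetic dose grid.
# Definitions assumed here: Paddick CI, GI = V50%Rx / VRx, HI = D5/D95.

def plan_indices(dose, target_mask, rx):
    piv = dose >= rx              # prescription isodose volume
    half_piv = dose >= 0.5 * rx   # half-prescription isodose volume
    tv = target_mask.sum()
    tv_piv = (piv & target_mask).sum()
    ci = tv_piv ** 2 / (tv * piv.sum())   # Paddick conformity index
    gi = half_piv.sum() / piv.sum()       # gradient index
    d_sorted = np.sort(dose[target_mask])
    d5 = d_sorted[int(0.95 * len(d_sorted))]   # near-maximum target dose
    d95 = d_sorted[int(0.05 * len(d_sorted))]  # near-minimum target dose
    hi = d5 / d95                          # homogeneity index
    return ci, gi, hi

# synthetic 1D "grid": a perfectly covered target plus a dose fall-off tail
dose = np.concatenate([np.full(100, 36.0), np.linspace(34.0, 5.0, 900)])
target = np.zeros(1000, dtype=bool)
target[:100] = True
ci, gi, hi = plan_indices(dose, target, rx=35.0)
print(round(ci, 2), round(gi, 2), round(hi, 2))  # → 1.0 6.12 1.0
```

A lower GI indicates a steeper dose fall-off outside the target, which is the property the low-dose-bath comparison (V1Gy, V2Gy, V5Gy) probes.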
Procedia PDF Downloads 84
42 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical considerations. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic sectors, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information generated by the sensors is transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during reversal of direction, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as is the case at a change of direction of motion. 
A damped response of the virtual platform can be seen in other images, where error analysis shows that at maximum amplitude an underestimation is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These characteristics, achieved by efficiently using the state of the art of accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginalized populations where technologies and specialists are unavailable but basic communication networks are accessible. Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
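The estimation-error figure of merit used above can be sketched as follows. This is a minimal Python illustration; the signals are synthetic stand-ins for the platform's estimate and the robotic-arm reference, with the error artificially concentrated near motion reversals as the abstract reports:

```python
import numpy as np

# Hedged sketch: RMSE between an estimated joint angle and a reference,
# on synthetic signals. The error model (spikes at direction reversals)
# is an illustrative assumption, not the platform's measured behavior.

def rmse(estimate_deg, reference_deg):
    err = np.asarray(estimate_deg, dtype=float) - np.asarray(reference_deg, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

print(rmse([46.0, 44.0], [45.0, 45.0]))  # → 1.0

t = np.linspace(0.0, 2.0 * np.pi, 500)
reference = 45.0 * np.sin(t)               # reference joint motion
# error peaks near sense inversions (where acceleration is largest)
estimate = reference + 2.0 * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** 8
overall = rmse(estimate, reference)
```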
Procedia PDF Downloads 259
41 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes
Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert
Abstract:
In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). Our first goal being to identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Thus, based on these results, illite particles were prepared and used in core flooding in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters, such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of aggregates, were measured. 
Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids in sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray tomography, showing that the particle deposition is nonuniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris basin. These retention mechanisms are explained in the general framework of the DLVO theory. Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments
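The DLVO framework invoked above can be sketched as follows. This is a minimal Python illustration of why increasing ionic strength destabilizes the suspension: the Hamaker constant, surface potential and particle radius are illustrative values, not fitted to the study's illite:

```python
import numpy as np

# Hedged sketch: total DLVO interaction energy between two equal spheres
# (Derjaguin approximation) as van der Waals attraction plus screened
# electrostatic double-layer repulsion. Material constants are assumed.

EPS = 78.5 * 8.854e-12   # permittivity of water, F/m
KT = 1.381e-23 * 298.0   # thermal energy at 25 C, J

def debye_kappa(ionic_strength_mol_per_l):
    """Inverse Debye length (1/m) for a 1:1 electrolyte at 25 C."""
    return np.sqrt(ionic_strength_mol_per_l) / 0.304e-9

def dlvo_energy_kt(h, a=50e-9, psi=-30e-3, hamaker=1e-20, kappa=1e8):
    v_vdw = -hamaker * a / (12.0 * h)                              # attraction
    v_edl = 2.0 * np.pi * EPS * a * psi ** 2 * np.exp(-kappa * h)  # repulsion
    return (v_vdw + v_edl) / KT

h = np.linspace(0.2e-9, 30e-9, 2000)     # surface-to-surface separation
barrier_low = dlvo_energy_kt(h, kappa=debye_kappa(0.001)).max()
barrier_high = dlvo_energy_kt(h, kappa=debye_kappa(0.1)).max()
# higher ionic strength screens the repulsion, lowering the energy
# barrier and promoting aggregation and deposition:
print(barrier_low > barrier_high)  # → True
```

This is the qualitative mechanism behind the observed factor-of-2 permeability decline at elevated ionic strength: a lower barrier lets particles aggregate and deposit in pore throats.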
Procedia PDF Downloads 176
40 Development of Three-Dimensional Bio-Reactor Using Magnetic Field Stimulation to Enhance PC12 Cell Axonal Extension
Authors: Eiji Nakamachi, Ryota Sakiyama, Koji Yamamoto, Yusuke Morita, Hidetoshi Sakamoto
Abstract:
The regeneration of an injured central nerve network caused by cerebrovascular accidents is difficult because of the poor regeneration capability of the central nervous system, composed of the brain and the spinal cord. Recently, new regeneration methods, such as transplantation of nerve cells and supply of nerve nutritional factor, were proposed and examined. However, many problems remain, such as canceration of engrafted cells, and an efficacious treatment method for the central nervous system is still strongly required. Blackman proposed an electromagnetic stimulation method to enhance axonal nerve extension. In this study, we design and fabricate a new three-dimensional (3D) bio-reactor that can apply a uniform AC magnetic field stimulation to PC12 cells in the extracellular environment to enhance axonal extension and 3D nerve network generation. Simultaneously, we measure the morphology of PC12 cell bodies, axons, and dendrites with a multiphoton excitation fluorescence microscope (MPM) and evaluate the effectiveness of the uniform AC magnetic stimulation in enhancing axonal extension. First, we designed and fabricated the uniform AC magnetic field stimulation bio-reactor. For the AC magnetic stimulation system, we used laminated silicon steel sheets, which have high magnetic permeability, for the yoke structure of the 3D chamber. Next, we adopted a pole-piece structure and installed coils of similar specification on both sides of the yoke. We searched for an optimum pole-piece structure using magnetic field finite element (FE) analyses and the response surface methodology, and confirmed by FE analysis that the optimum 3D chamber structure showed a uniform magnetic flux density in the PC12 cell culture area. We then fabricated the uniform AC magnetic field stimulation bio-reactor according to the analytically determined specifications, such as the chamber size and electromagnetic conditions. 
We confirmed that the measured magnetic field in the chamber showed good agreement with the FE results. Secondly, we fabricated a dish that sits inside the uniform AC magnetic field stimulation bio-reactor. PC12 cells were disseminated in collagen gel and could be 3D cultured in the dish. The collagen gel, which had a disk shape of 6 mm diameter and 3 mm height, was poured into the dish and set on a membrane filter located 4 mm above the bottom of the dish, and the dish was filled with culture medium. Finally, we evaluated the effectiveness of the uniform AC magnetic field stimulation in enhancing nerve axonal extension. We confirmed a 6.8% increase in the average axonal extension length of PC12 cells under the uniform AC magnetic field stimulation after 7 days of culture in our bio-reactor, and a 24.7% increase in the maximum axonal extension length. Further, we confirmed a 60% increase in the number of dendrites of PC12 cells under the uniform AC magnetic field stimulation. These results confirm the suitability of our uniform AC magnetic stimulation bio-reactor for nerve axonal extension and nerve network generation. Keywords: nerve regeneration, axonal extension, PC12 cell, magnetic field, three-dimensional bio-reactor
Procedia PDF Downloads 168
39 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?
Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire
Abstract:
The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom of potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of banking intermediaries and governments to provide long term financing, raise the question of which actors are able to provide long term financing, their methods of financing and the most appropriate forms of intermediation. The issue of long term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are indeed crucial. Fair value is well adapted to the trading book in a short term view, but this method hardly suits a medium or long term portfolio. Banks' ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures, with prudential filters, to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing. 
The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be "appropriately calibrated" and "progressively implemented" so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to ask to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions focusing on the questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses: first a qualitative coding to identify the respondents' arguments, and subsequently a quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the position that a large panel of European stakeholders holds on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. The analysis allows us to identify some short term bias in banking regulation. Keywords: Basel III, fair value, securitization, long term investment, banks, insurers
Procedia PDF Downloads 291
38 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals: it decomposes a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled "Empirical Wavelet Transform" (EWT), which consists in building a bank of filters from a segmentation of the original signal's Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in segmenting the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. 
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques: the spectral content of the IMFs is used to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of the results obtained respectively by EMD, EWT and EAWD on time series of ozone total columns recorded at Reunion island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences. Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
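The EWT boundary-detection step described above can be sketched as follows. This is a minimal Python illustration that places boundaries midway between the strongest spectral maxima; that midpoint rule is one simplification among the several detection rules Gilles' method supports, and EAWD would instead derive the segmentation from the IMFs' spectral content:

```python
import numpy as np

# Hedged sketch: segment the Fourier magnitude spectrum by detecting
# its strongest local maxima and placing filter-bank boundaries between
# consecutive maxima. Synthetic three-tone test signal.

def ewt_boundaries(signal, n_modes=3):
    spec = np.abs(np.fft.rfft(signal))
    # indices of local maxima of the magnitude spectrum
    peaks = np.where((spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]))[0] + 1
    # keep the n_modes largest maxima, in ascending frequency order
    top = np.sort(peaks[np.argsort(spec[peaks])[-n_modes:]])
    # boundaries at midpoints between the retained maxima (in bin index)
    return (top[:-1] + top[1:]) / 2

t = np.arange(2048) / 2048.0
x = np.sin(2*np.pi*30*t) + np.sin(2*np.pi*120*t) + np.sin(2*np.pi*300*t)
bounds = ewt_boundaries(x, n_modes=3)
print(bounds)  # boundaries near bins 75 and 210
```

Each resulting segment then supports one empirical wavelet filter, which is how EWT achieves the sharper frequency resolution noted above.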
Procedia PDF Downloads 137
37 Influence of Packing Density of Layers Placed in Specific Order in Composite Nonwoven Structure for Improved Filtration Performance
Authors: Saiyed M Ishtiaque, Priyal Dixit
Abstract:
Objectives: An approach is suggested for designing filter media that maximize filtration efficiency with the minimum possible pressure drop in a composite nonwoven, by incorporating layers of different packing densities, induced by fibres of different deniers and by punching parameters, using the concept of a sequential punching technique in a specific order in the layered composite nonwoven structure. X-ray computed tomography is used to measure the packing density along the thickness of the layered nonwoven structure. Methodology: This work involves the preparation of needle-punched layered structures from batts of 100 g/m2 basis weight, with fibre denier, punch density and needle penetration depth as variables, to produce a composite nonwoven of 300 g/m2 basis weight. X-ray computed tomography is used to measure the packing density along the thickness of layered structures composed by placing layers of differently oriented fibres, influenced by the considered variables, in various combinations, so as to minimize the pressure drop at the maximum possible filtration efficiency. For developing the layered nonwoven fabrics, batts made of fibres of different deniers, each of 100 g/m2 basis weight, were placed in various combinations. For the second set of experiments, the composite nonwoven fabrics were prepared from 3 denier circular cross-section polyester fibre of 64 mm length on a needle-punched nonwoven machine using the sequential punching technique. In this technique, three semi-punched fabrics of 100 g/m2 each, having either different punch densities or different needle penetration depths, were prepared in the first phase of fabric preparation. 
These fabrics were later punched together to obtain an overall basis weight of 300 g/m2. The total punch density of the composite nonwoven fabric was kept at 200 punches/cm2 with a needle penetration depth of 10 mm. The layered structures so formed were subcategorized into two groups: homogeneous layered structures, in which all three batts comprising the nonwoven fabric were made from the same fibre denier, punch density and needle penetration depth and were placed in different positions in the respective fabric; and heterogeneous layered structures, in which the batts were made from fibres of different deniers, punch densities and needle penetration depths and were placed in different positions. Contributions: The results showed that the reduction in pressure drop is not governed by the overall packing density of the layered nonwoven fabric; rather, the sequencing of layers of specific packing density in the layered structure decides the pressure drop. Accordingly, creating an inverse gradient of packing density in the layered structure provided the maximum filtration efficiency with the least pressure drop. This study paves the way for customising composite nonwoven fabrics, through the incorporation of differently oriented fibres in the constituent layers induced by the considered variables, for desired filtration properties. Keywords: filtration efficiency, layered nonwoven structure, packing density, pressure drop
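The efficiency versus pressure-drop trade-off above can be sketched with the standard filter quality factor. This is a minimal Python illustration with hypothetical layer values; note that in this idealized clean-filter series model the totals are order-independent, so the sequencing effect reported above arises from loading and clogging dynamics that the model omits:

```python
import math

# Hedged sketch: combine single-layer efficiencies and pressure drops
# of a layered filter in series, and compute the quality factor
# QF = -ln(penetration) / pressure drop. Layer values are illustrative.

def stack_performance(layers):
    """layers: list of (single-layer efficiency, pressure drop in Pa)."""
    penetration = 1.0
    dp_total = 0.0
    for eff, dp in layers:
        penetration *= 1.0 - eff   # penetrations multiply in series
        dp_total += dp             # pressure drops add in series
    efficiency = 1.0 - penetration
    qf = -math.log(penetration) / dp_total   # quality factor, 1/Pa
    return efficiency, dp_total, qf

layers = [(0.60, 40.0), (0.40, 25.0), (0.20, 12.0)]  # dense to open
eff, dp, qf = stack_performance(layers)
print(round(eff, 3), dp, round(qf, 5))  # → 0.808 77.0 0.02143
```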
Procedia PDF Downloads 76
36 Microplastic Concentrations in Cultured Oyster in Two Bays of Baja California, Mexico
Authors: Eduardo Antonio Lozano Hernandez, Nancy Ramirez Alvarez, Lorena Margarita Rios Mendoza, Jose Vinicio Macias Zamora, Felix Augusto Hernandez Guzman, Jose Luis Sanchez Osorio
Abstract:
Microplastics (MPs) are among the most numerous wastes reported in the marine ecosystem and represent one of the greatest risks for the organisms that inhabit that environment because of their bioavailability. Such is the case of bivalve mollusks, which filter large volumes of water and thereby face an increased risk of microplastic contamination through continuous exposure. This study aims to determine, quantify and characterize the microplastics found in the cultured oyster Crassostrea gigas. We also analyzed whether there are spatio-temporal differences in the microplastic concentration of organisms grown in two bays with quite different human populations, and sought to gauge the possible impact on humans via consumption of these organisms. Commercial-size organisms (>6 cm length; n = 15) were collected in triplicate from eight oyster farming sites in Baja California, Mexico, during winter and summer. Two sites are located in Todos Santos Bay (TSB), while the other six are located in San Quintin Bay (SQB). Site selection was based on the commercial concessions for oyster farming in each bay. The organisms were chemically digested with 30% KOH (w/v) and 30% H₂O₂ (v/v) to remove the organic matter and subsequently filtered on a GF/D filter. All particles considered possible MPs were quantified according to their physical characteristics under a stereoscopic microscope. The type of synthetic polymer was determined using an FTIR-ATR microscope with a user library as well as a commercial reference library (Nicolet iN10, Thermo Scientific, Inc.) of IR spectra of plastic polymers (with a certainty ≥70% for pure polymers; ≥50% for composite polymers). Plastic microfibers were found in all the samples analyzed; however, a low incidence of MP fragments was observed (approximately 9%). The synthetic polymers identified were mainly polyester and polyacrylonitrile. 
Polyethylene, polypropylene, polystyrene, nylon, and thermoplastic elastomer were also identified. On average, the microplastic content of the organisms was higher in TSB (0.05 ± 0.01 plastic particles (pp)/g of wet weight) than in SQB (0.02 ± 0.004 pp/g of wet weight) in the winter period. The highest concentration of MPs in TSB coincides with the rainy season in the region, which increases runoff from streams and wastewater discharges to the bay, as well as with the larger population pressure (>500,000 inhabitants). In contrast, SQB is a mainly rural location, where surface runoff from streams is minimal and there is no wastewater discharge into the bay. During the summer, no significant differences (Mann-Whitney U test; P = 0.484) were observed in the MP concentrations of the cultured oysters of TSB and SQB (average: 0.01 ± 0.003 pp/g and 0.01 ± 0.002 pp/g, respectively). Finally, we concluded that the consumption of these oysters does not represent a risk for humans, given the low concentrations of MPs found. The concentration of MPs is influenced by variables such as season, the circulation dynamics of the bay and the existing demographic pressure. Keywords: FTIR-ATR, human risk, microplastic, oyster
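The seasonal comparison above can be sketched as follows. This is a minimal Python illustration of a Mann-Whitney U test on synthetic per-oyster concentrations drawn to mimic the reported summer means (~0.01 pp/g in both bays); the numbers are not the study's measurements:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hedged sketch: nonparametric comparison of microplastic concentrations
# (pp/g wet weight) between the two bays, on synthetic data.

rng = np.random.default_rng(42)
tsb_summer = rng.normal(0.010, 0.003, 45).clip(min=0)  # synthetic TSB values
sqb_summer = rng.normal(0.010, 0.002, 45).clip(min=0)  # synthetic SQB values

stat, p = mannwhitneyu(tsb_summer, sqb_summer, alternative="two-sided")
print(round(float(p), 3))
```

A Mann-Whitney test is the natural choice here because small concentration samples need not be normally distributed; a P above 0.05, as the study reports for summer (P = 0.484), indicates no significant difference.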
Procedia PDF Downloads 174
35 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil
Authors: Thiago R. Pereira
Abstract:
Philosophical hermeneutics is able to change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for humans invested in the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can assist the judge in making impartial decisions based on the federal constitution. Normative legal positivism imagined a neutral judge, able to decide without any preconceived ideas and without allowing his or her background to be an influence. When a judge decides based on clear legal rules, the problem is smaller; but when there are no clear legal rules and the judge must decide based on principles, there is a risk that the decision rests on what the judge personally believes. Solipsistically, this issue gains a huge dimension. Today the Brazilian judiciary is independent, but there must be greater knowledge of philosophy and of the philosophy of law, partly because the bigger problem is the unpredictability of the decisions made by the judiciary. At present, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least minimal legal certainty and predictability in judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has believed in the possibility of multiple answers. From the Greeks in the sixth century before Christ, through the Germans in the eighteenth century, and even today, the constitution has been established as the supreme law, the Grundnorm, and thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretive north, where all interpretation must pass through the hermeneutic constitutional filter. For a current philosophy of law, inside a legal system with a Federal Constitution there is a single correct answer to a specific case. The challenge is how to find this right answer. 
The only answer to this question is that we should use the constitutional principles. But in many cases a collision between principles will take place, and to resolve this issue the judge or hermeneut may choose a solipsistic path, using what they personally believe to be the right one. For obvious reasons, that conduct is not safe. Thus, a theory of decision is necessary in the search for justice, and hermeneutic philosophy and the linguistic turn will be necessary for one to find the right answer. To help in this difficult mission, it will be necessary to use philosophical hermeneutics in order to find the right answer, which is the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer that individuals agree with, but we must put aside our preferences and defend the answer that the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individual judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used in this paper is qualitative, academic and theoretical: philosophical hermeneutics with the mission of conducting research that proposes a new way of thinking about the science of law. The research sought to demonstrate the difficulty of the Brazilian courts in departing from the secular influence of legal positivism. Moreover, the research sought to demonstrate the need to think about the science of law within a contemporary perspective, where the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.
Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary
Procedia PDF Downloads 350
34 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India
Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn
Abstract:
Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women using biomass fuels and of women using LPG fuels, and to relate it to indoor emissions measured using a structured questionnaire, a spirometer and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and who had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function based on forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/Forced Vital Capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation and educational status were considered. Results: Preliminary results showed that women using biomass fuels (FEV1/FVC = 85% ± 5.13) had comparatively reduced lung function relative to the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens. 
The black carbon amount was found to be higher among the biomass users (black carbon = 46.71 ± 46.59 µg/m³) than among the LPG users (black carbon = 11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG still had a minimal amount of particulate matter 2.5, which might be due to background pollution and cross-ventilation from houses using biomass fuels. Conclusions: There is an urgent need to adopt various strategies to improve indoor air quality. Knowledge of the current state of climate-active pollutant emissions from different stove designs is lacking, and the major deficiencies that need to be tackled must be identified. Moreover, the advancement of research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and address growing climate change and public health concerns.
Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter
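The fixed-ratio criterion used above (post-bronchodilator FEV1/FVC < 70%) can be expressed directly; a minimal sketch with hypothetical spirometry values:

```python
def copd_obstruction(fev1_l: float, fvc_l: float, threshold: float = 0.70) -> bool:
    """Return True if the post-bronchodilator FEV1/FVC ratio indicates
    airway obstruction under the GOLD fixed-ratio criterion (< 0.70)."""
    if fvc_l <= 0:
        raise ValueError("FVC must be positive")
    return (fev1_l / fvc_l) < threshold

# Hypothetical spirometry values (litres), not study data:
print(copd_obstruction(2.1, 3.5))  # ratio 0.60 → True (obstructed)
print(copd_obstruction(3.0, 3.5))  # ratio ≈ 0.86 → False
```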
Procedia PDF Downloads 174
33 Spectral Responses of the Laser Generated Coal Aerosol
Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki
Abstract:
Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources. According to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for this, starting from the difficulties associated with controlling the burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analyses and the difficulty of interpreting the measured optical data, and ending with the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements of filter-accumulated aerosol, or deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling. 
In the former, the accuracy, and in the latter, the sensitivity limits the applicability of these approaches. Although the scientific community agrees that aerosol-phase PhotoAcoustic Spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced here. The optical absorption and scattering coefficients, as well as their wavelength dependency, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation
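The Ångström exponents quoted above (AAE, SAE) are conventionally obtained from a power-law fit, coefficient ∝ λ⁻ᴬᴱ, across the instrument wavelengths; a minimal sketch, in which the wavelengths and synthetic data are assumptions rather than the instrument's actual values:

```python
import numpy as np

def angstrom_exponent(wavelengths_nm, coefficients):
    """Fit coefficient ∝ λ^(−AE) in log–log space and return the
    Ångström exponent (AAE for absorption, SAE for scattering)."""
    slope, _ = np.polyfit(np.log(wavelengths_nm), np.log(coefficients), 1)
    return -slope

# Synthetic 4-wavelength absorption data following λ⁻¹ exactly;
# the wavelength set is assumed, not the 4λ-PAS specification.
wl = np.array([355.0, 405.0, 532.0, 1064.0])
absorption = 1000.0 / wl
print(round(angstrom_exponent(wl, absorption), 3))  # → 1.0
```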
Procedia PDF Downloads 361
32 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review is an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified to be of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) and the Codex Alimentarius Commission was referenced in each of the IRAs, for the import of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as a source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs, in which decision trees or severity tables were set up using parameter estimates based on expert opinion. Calculations were done using @Risk, R or Excel. 
Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible at a national level (e.g., herd information) or open only to a small group of people (flight passenger import data at a national airport customs office). In the IRAs, an uncertainty analysis was mentioned in some cases, but calculations were actually performed in only a few. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in the general methodology and data used. Often, information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
Keywords: import risk assessment, review, tools, food import
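As an illustration of the qualitative end of the methods spectrum described above, combining two pathway steps via a severity-table convention might look as follows; the level scale and the conservative weakest-link rule are assumptions, as the reviewed IRAs differ in the exact matrices used:

```python
LEVELS = ["negligible", "very low", "low", "medium", "high"]

def combine(release: str, exposure: str) -> str:
    """Overall likelihood of entry cannot exceed the weaker step of the
    pathway (a conservative weakest-link convention; actual severity
    tables in published IRAs differ)."""
    return LEVELS[min(LEVELS.index(release), LEVELS.index(exposure))]

print(combine("high", "low"))         # → low
print(combine("negligible", "high"))  # → negligible
```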
Procedia PDF Downloads 302
31 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is used as the training vector of a global SVM classifier. 
Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, with a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
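The FFT-based sub-band decomposition described above can be sketched as follows; the equal-width band edges and the use of raw rFFT coefficients are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def fft_subbands(eeg, fs, n_bands=33, fmax=40.0):
    """Frequency-domain sub-band decomposition of multichannel EEG.

    eeg: array of shape (n_channels, n_samples). Returns one array of
    FFT coefficients per sub-band, ready to feed per-band CSP + LDA
    stages in parallel."""
    coeffs = np.fft.rfft(eeg, axis=1)                  # complex spectrum per channel
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)  # bin frequencies (Hz)
    edges = np.linspace(0.0, fmax, n_bands + 1)        # 0–40 Hz split into 33 bands
    return [coeffs[:, (freqs >= lo) & (freqs < hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]

# Hypothetical trial: 22 channels, 2 s at 250 Hz
trial = np.random.randn(22, 500)
bands = fft_subbands(trial, fs=250.0)
print(len(bands))  # → 33
```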
Procedia PDF Downloads 128
30 Hydro-Mechanical Characterization of PolyChlorinated Biphenyls Polluted Sediments in Interaction with Geomaterials for Landfilling
Authors: Hadi Chahal, Irini Djeran-Maigre
Abstract:
This paper focuses on the hydro-mechanical behavior of polychlorinated biphenyl (PCB) polluted sediments when stored in landfills, and on the interaction between PCBs and geosynthetic clay liners (GCLs) with respect to the hydraulic performance of the liner and the overall performance and stability of landfills. A European decree, adopted into French regulation, forbids reintroducing contaminated dredged sediments containing more than 0.64 mg/kg Σ7 PCBs into rivers. At these concentrations, sediments are considered hazardous, and a remediation process must be adopted to prevent the release of PCBs into the environment. Dredging and landfilling polluted sediments is considered an eco-environmental remediation solution. French regulations authorize the storage of PCB-contaminated materials containing less than 50 mg/kg in municipal solid waste facilities. Contaminant migration via leachate may nevertheless be possible. The interactions between PCB-contaminated sediments and the GCL barrier present at the bottom of a landfill for secure confinement are not known. Moreover, the hydro-mechanical behavior of the stored sediments may affect the performance and stability of the landfill. In this article, a hydro-mechanical characterization of the polluted sediment is presented. This characterization makes it possible to predict the behavior of the sediment at the storage site. Chemical testing showed that the concentration of PCBs in the sediment samples is between 1.7 and 2.0 mg/kg. Physical characterization showed that the sediment is an organic silty sand soil (65% silt, 27% sand, 8% organic matter) characterized by a high plasticity index (Ip = 37%). Permeability tests using a permeameter and a filter press showed that the sediment permeability is on the order of 10⁻⁹ m/s. Compressibility tests showed that the sediment is a very compressible soil, with Cc = 0.53 and Cα = 0.0086. In addition, the effects of PCBs on the swelling behavior of bentonite were studied, and the hydraulic performance of the GCL in interaction with PCBs was examined. 
Swelling tests showed that PCBs do not affect the swelling behavior of bentonite. Permeability tests were conducted on a 1.0 m pilot-scale experiment simulating a storage facility. PCB-contaminated sediments were placed directly over a passive barrier containing a GCL to study the influence of direct contact between the polluted sediment leachate and the GCL. An automatic water system was designed to simulate precipitation. Effluent quantity and quality were examined, and the sediment settlements and the water level in the sediment were monitored. The results showed that desiccation affected the behavior of the sediment in the pilot test and that laboratory tests alone are not sufficient to predict the behavior of the sediment in a landfill facility. Furthermore, the concentration of PCBs in the sediment leachate was very low (< 0.013 µg/l), and the permeability of the GCL was affected by other components present in the sediment leachate. Desiccation and cracks were the main parameters that affected the hydro-mechanical behavior of the sediment in the pilot test. In order to reduce these effects, the polluted sediment should be stored at a water content below its shrinkage limit (w = 39%). We also propose to conduct other pilot tests with the maximum concentration of PCBs allowed in municipal solid waste facilities, 50 mg/kg.
Keywords: geosynthetic clay liners, landfill, polychlorinated biphenyl, polluted dredged materials
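The compressibility parameters reported above (Cc = 0.53) feed directly into the classical one-dimensional settlement estimate; a minimal sketch, where the layer thickness, initial void ratio and stress levels are illustrative assumptions rather than measured values:

```python
import math

def primary_settlement(cc, h0_m, e0, sigma0_kpa, sigmaf_kpa):
    """Primary consolidation settlement (m) of a normally consolidated
    layer: s = Cc * H0 / (1 + e0) * log10(sigma'_f / sigma'_0)."""
    return cc * h0_m / (1.0 + e0) * math.log10(sigmaf_kpa / sigma0_kpa)

# Cc from the abstract; H0, e0 and the stress increase are assumed.
s = primary_settlement(cc=0.53, h0_m=1.0, e0=1.5, sigma0_kpa=10.0, sigmaf_kpa=40.0)
print(round(s, 3))  # → 0.128 (metres)
```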
Procedia PDF Downloads 123
29 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, the elimination of module mismatch losses, the minimization of partial shading effects, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with solar PV micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single phase PV micro inverter. The paper investigates two different topologies for PV micro inverters: on the one hand, a single-stage flyback/forward configuration, and on the other hand, a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer the double-line (100 Hz) ripple energy, and eliminates the use of electrolytic capacitors. The double-line oscillation reflected back to the PV module would otherwise degrade the Maximum Power Point Tracking (MPPT) performance and distort the grid current. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving MPPT performance, and to the grid side, improving current quality. 
Here, the power hardware topology accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, together with the external peripheral interfaces, is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in the C language and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, prototype hardware for the single phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis between the above-mentioned configurations, with experimental results, will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
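The double-line (100 Hz) ripple-energy buffering requirement described above translates into a well-known capacitor sizing relation, C = P / (2·π·f_line · V_dc · ΔV); a sketch in which the bus voltage and allowed ripple are assumed figures, not the prototype's actual design values:

```python
import math

def decoupling_capacitance(p_w, f_line_hz, v_dc, dv_ripple):
    """Minimum DC-link capacitance (F) to buffer the double-line-frequency
    ripple energy of a single-phase inverter:
    C = P / (2 * pi * f_line * V_dc * dV)."""
    return p_w / (2 * math.pi * f_line_hz * v_dc * dv_ripple)

# 300 W rated power from the abstract; 400 V bus and 20 V peak-to-peak
# ripple are illustrative assumptions.
c = decoupling_capacitance(p_w=300.0, f_line_hz=50.0, v_dc=400.0, dv_ripple=20.0)
print(f"{c * 1e6:.0f} µF")  # → 119 µF
```

A value this large motivates the active power decoupling techniques the paper discusses, which shrink the required capacitance enough to use long-life film capacitors instead of electrolytics.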
Procedia PDF Downloads 133
28 Phycoremediation of Heavy Metals by Marine Macroalgae Collected from Olaikuda, Rameswaram, Southeast Coast of India
Authors: Suparna Roy, Anatharaman Perumal
Abstract:
Industrial effluents with high amounts of heavy metals are known to have adverse effects on the environment. For the removal of heavy metals from the aqueous environment, different conventional treatment technologies have been applied, which are not economically beneficial and also produce huge quantities of toxic chemical sludge. Bio-sorption of heavy metals by marine plants is therefore an eco-friendly, innovative alternative technology for the removal of these pollutants from the aqueous environment. The aim of this study is to evaluate the capacity of some selected marine macroalgae (seaweeds) to accumulate and remove heavy metals from the marine environment. Methods: The seaweeds Acanthophora spicifera (Vahl.) Boergesen, Codium tomentosum Stackhouse, Halimeda gracilis Harvey ex. J. Agardh, Gracilaria opuntia Durairatnam. nom. inval., Valoniopsis pachynema (Martens) Boergesen, Caulerpa racemosa var. macrophysa (Sonder ex Kutzing) W. R. Taylor and Hydroclathrus clathratus (C. Agardh) Howe were collected from Olaikuda (09°17.526'N-079°19.662'E), Rameshwaram, south east coast of India, during the post-monsoon period (April 2016). The seaweeds were washed repeatedly with sterilized and filtered in-situ seawater to remove all epiphytes and debris, and the clean seaweeds were shade-dried for one week. 
The dried seaweeds were ground to a powder, and one gram of powdered seaweed was placed in a 250 ml conical flask; 8 ml of 10% HNO₃ (70% pure) was added to each sample, which was kept at room temperature (28 °C) for 24 hours. The samples were then heated on a hotplate at 120 °C and evaporated to dryness, after which 20 ml of nitric acid:perchloric acid (4:1) was added and the samples were again heated on the hotplate at 90 °C and evaporated to dryness. The samples were then allowed to cool at room temperature for a few minutes, 10 ml of 10% HNO₃ was added, and they were kept for 24 hours in a cool, dark place and filtered through Whatman (589/2) filter paper. The filtrates were collected in clean 250 ml conical flasks and accurately diluted to a 25 ml volume with double-deionised water, and triplicates of each sample were analysed by inductively coupled plasma optical emission spectrometry (ICP-OES) for a total of eleven heavy metals (Ag, Cd, B, Cu, Mn, Co, Ni, Cr, Pb, Zn and Al); the data were statistically evaluated for standard deviation. Results: Acanthophora spicifera contained the highest amount of Ag (0.1 ± 0.2 mg/mg), followed by Cu (0.16 ± 0.01 mg/mg), Mn (1.86 ± 0.02 mg/mg) and B (3.59 ± 0.2 mg/mg); Halimeda gracilis showed the highest accumulation of Al (384.75 ± 0.12 mg/mg); Valoniopsis pachynema accumulated the maximum amounts of Co (0.12 ± 0.01 mg/mg) and Zn (0.64 ± 0.02 mg/mg); Caulerpa racemosa var. macrophysa contained Zn (0.63 ± 0.01), Cr (0.26 ± 0.01 mg/mg), Ni (0.21 ± 0.05), Pb (0.16 ± 0.03) and Cd (0.02 ± 0.00). Hydroclathrus clathratus, Codium tomentosum and Gracilaria opuntia also contained appreciable amounts of heavy metals. Conclusions: The mentioned seaweed species play an important role in decreasing heavy metal pollution in the marine environment through bioaccumulation. These species can therefore be utilised to remove excess amounts of heavy metals from polluted areas.
Keywords: heavy metals pollution, seaweeds, bioaccumulation, eco-friendly, phyco-remediation
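Given the digestion described above (1 g of dry powder made up to a final volume of 25 ml), an ICP-OES solution reading converts back to dry-weight metal content as follows; the 0.4 mg/L reading is a made-up illustration, not a measured value:

```python
def metal_content_mg_per_kg(icp_mg_per_l, final_volume_ml=25.0, sample_mass_g=1.0):
    """Convert an ICP-OES solution reading (mg/L) back to the metal
    content of the dry seaweed (mg/kg dry weight)."""
    mg_in_solution = icp_mg_per_l * final_volume_ml / 1000.0  # total mg in the digest
    return mg_in_solution / (sample_mass_g / 1000.0)          # per kg of dry sample

print(round(metal_content_mg_per_kg(0.4), 2))  # → 10.0 (mg/kg dry weight)
```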
Procedia PDF Downloads 235
27 Collagen/Hydroxyapatite Compositions Doped with Transitional Metals for Bone Tissue Engineering Applications
Authors: D. Ficai, A. Ficai, D. Gudovan, I. A. Gudovan, I. Ardelean, R. Trusca, E. Andronescu, V. Mitran, A. Cimpean
Abstract:
In recent years, scientists have worked hard to mimic bone structures in order to develop implants and biostructures that present higher biocompatibility and a reduced rejection rate. One way to achieve this goal is to use materials similar to those of bone, namely collagen/hydroxyapatite composite materials. However, it is very important to tailor both the composition and the microstructure of the material, which together ensure optimal osteointegration and the mechanical properties required by the application. In this study, new collagen/hydroxyapatite composite materials doped with Cu, Li, Mn and Zn were successfully prepared. The synthesis method is as follows: weigh the Ca(OH)₂ (7.3067 g) together with ZnCl₂ (0.134 g), CuSO₄ (0.159 g), Li₂CO₃ (0.133 g) and MnCl₂·4H₂O (0.1971 g), and suspend in 100 ml distilled water under magnetic stirring. To the suspension thus obtained, a solution of NaH₂PO₄·H₂O (8.247 g dissolved in 50 ml distilled water) is added dropwise at 1 ml/min, followed by adjusting the pH to 9.5 with HCl; finally, the product is filtered and washed until neutral pH. The as-obtained slurry was dried in an oven at 80°C and then calcined at 600°C in order to purify the final product of organic phases, also ensuring proper sterilization of the mixture before insertion into the collagen matrix. The collagen/hydroxyapatite composite materials are tailored from a morphological point of view to balance their biocompatibility and bio-integration against mechanical properties, whereas the addition of the dopants is aimed at improving the biological activity of the samples. The addition of transition metals can improve the biocompatibility and especially the osteoblast adhesion (Mn²⁺), or induce slightly better osteoblast differentiation, Zn²⁺ being a cofactor for many enzymes, including those responsible for cell differentiation. If the amount is too high, the final material can become toxic and lose all of its biocompatibility. 
In order to achieve good biocompatibility and avoid cytotoxic effects, the amount of transition metals added has to be kept at low levels (0.5% molar). The amount of transition metals entering the elemental cell of HA will be verified using an inductively coupled plasma mass spectrometry (ICP-MS) system. This highly sensitive technique is necessary because, at such low levels of transition metals, the line between biocompatible and cytotoxic is very thin, thus requiring proper and thorough investigation with a precise technique. In order to determine the structure and morphology of the obtained composite materials, IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM) and Energy Dispersive X-Ray Spectrometry (EDS) were used. Acknowledgment: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project “Biomimetic porous structures obtained by 3D printing developed for bone tissue engineering (BIOGRAFTPRINT), No. 127PED/2017” is also highly acknowledged.
Keywords: collagen, composite materials, hydroxyapatite, bone tissue engineering
Procedia PDF Downloads 206
26 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter any noise in the signature and to create a robust pre-determined signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated in real time with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. 
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method that captures geological, geophysical and drilling data and recognizes the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
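A heavily simplified stand-in for the signature-correlation step described above, using a plain Pearson correlation in place of the CFS machinery; the signature, live-data window and the 0.76 threshold are illustrative values, not field data:

```python
import numpy as np

def stuck_pipe_probability(live_window, signature):
    """Pearson correlation of a live surface-data window against the
    pre-determined stuck-pipe signature, clipped to [0, 1]."""
    r = np.corrcoef(live_window, signature)[0, 1]
    return float(max(0.0, r))  # negative correlation treated as zero risk

def alert(prob, threshold=0.76):
    """Raise an alert when the probability passes the user-defined
    fixed threshold (0.76 here is an illustrative value)."""
    return bool(prob >= threshold)

signature = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # hypothetical signature
live = np.array([1.1, 2.2, 3.9, 8.3, 15.5])       # closely matching live window
print(alert(stuck_pipe_probability(live, signature)))  # → True
```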
Procedia PDF Downloads 227
25 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics
Authors: Su Mon Saw, Joe Rothnagel
Abstract:
Short open reading frames (sORFs) located in the 5'UTRs of mRNAs are known as uORFs. Characterization of uORF-encoded peptides (uPEPs), a subset of sORF-encoded peptides (sPEPs), and of their translational regulation leads to an understanding of the causes of genetic disease and of proteome complexity, and to the development of treatments. The existence of uPEPs within the cellular proteome can be detected by LC-MS/MS. Establishing that uORFs are translated into uPEPs, and identifying those uPEPs, will allow characterization of their structures, functions, subcellular localization, evolutionary maintenance (conservation in humans and other species), and abundance in cells. It is hypothesized that a subset of sORFs are translatable and that their encoded sPEPs are functional and endogenously expressed, contributing to the complexity of the eukaryotic cellular proteome. This project aimed to investigate whether sORFs encode functional peptides. Liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were thus employed. Because sPEPs are probably of low abundance and small size, efficient peptide enrichment strategies that enrich small proteins and deplete the sub-proteome of large, abundant proteins are crucial for identifying them. Low-molecular-weight proteins were extracted using SDS-PAGE from Human Embryonic Kidney (HEK293) cells and using Strong Cation Exchange Chromatography (SCX) from the HEK293 secretome. Extracted proteins were digested with trypsin into peptides, which were detected by LC-MS/MS. The MS/MS data obtained were searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against the human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against the human RefSeq, Vanderperre, and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were analyzed by bioinformatics.
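The small-protein enrichment rationale above implies a simple in silico size screen over candidate ORF translations. A minimal Python sketch, assuming cutoffs of <100 residues and <15 kDa and approximating peptide mass with an average residue mass of ~110 Da (the function names and the mass approximation are illustrative, not part of the study's pipeline):

```python
# Hypothetical size screen for sPEP-sized candidates; not the authors' code.
WATER_MASS = 18.02        # Da, added once per peptide chain
AVG_RESIDUE_MASS = 110.0  # Da, rough average over the 20 amino acids

def approx_mass(seq: str) -> float:
    """Approximate average molecular mass of a peptide in Daltons."""
    return len(seq) * AVG_RESIDUE_MASS + WATER_MASS

def is_spep_candidate(seq: str) -> bool:
    """Candidate sPEP: shorter than 100 residues and lighter than 15 kDa."""
    return len(seq) < 100 and approx_mass(seq) < 15_000.0
```

For accurate masses, a residue-resolved table (e.g. as implemented in standard proteomics toolkits) would replace the flat 110 Da average.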
Since SDS-PAGE could not resolve proteins <20 kDa, this approach failed to identify sPEPs. All MASCOT-identified peptide fragments mapped to main open reading frames (mORFs) according to ORF Finder and blastp searches; no sPEP was detected, and the existence of sPEPs could not be confirmed in this study. Thirteen sORFs previously shown by mass spectrometry to be translated in HEK293 cells were instead characterized by bioinformatics. The sPEPs identified in those previous studies were <100 amino acids and <15 kDa. The bioinformatics results showed that sORFs are translated into sPEPs and contribute to proteome complexity. The uPEP translated from the uORF of SLC35A4 is strongly conserved between human and mouse, while the uPEP translated from the uORF of MKKS is strongly conserved between human and Rhesus monkey. Cross-species conservation of uORFs, together with evidence of their translation, strongly suggests evolutionary maintenance of the coding sequence and indicates probable functional expression of the peptides encoded within these uORFs. Overall, translation of sORFs was confirmed by mass spectrometry data from prior studies, and the corresponding sPEPs were characterized with bioinformatics.
Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, Strong Cation Exchange Chromatography, SDS-PAGE, sPEPs
Procedia PDF Downloads 188