Search results for: symmetric distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 867

87 Hydration of Three-Piece K Peptide Fragments Studied by Means of Fourier Transform Infrared Spectroscopy

Authors: Marcin Stasiulewicz, Sebastian Filipkowski, Aneta Panuszko

Abstract:

Background: The hallmark of neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, is the aggregation of abnormal forms of peptides and proteins. Water is essential to the functioning of biomolecules and is one of the key factors influencing protein folding and misfolding. However, hydration studies of proteins are complicated by the complexity of protein systems. The use of model compounds can facilitate the interpretation of results involving larger systems. Objectives: The goal of the research was to characterize the properties of the hydration water surrounding two three-residue K peptide fragments, INS (Isoleucine - Asparagine - Serine) and NSR (Asparagine - Serine - Arginine). Methods: Fourier-transform infrared spectra of aqueous solutions of the tripeptides were recorded on a Nicolet 8700 spectrometer (Thermo Electron Co.). Measurements were carried out at 25°C for varying solute molalities. To remove oscillation couplings from the water spectra and thus obtain narrow O-D bands of semi-heavy water (HDO), the isotopic dilution method of HDO in H₂O was used. The difference spectra method allowed us to isolate the tripeptide-affected HDO spectrum. Results: The structural and energetic properties of water affected by the tripeptides were compared with those of pure water. The shift of the band gravity center (related to the mean energy of water hydrogen bonds) towards lower values with respect to pure water suggests that the energy of hydrogen bonds between water molecules surrounding the tripeptides is higher than in pure water. A comparison of the mean oxygen-oxygen distances in tripeptide-affected water and pure water indicates that water-water hydrogen bonds are shorter in the presence of these tripeptides. The analysis of differences in the oxygen-oxygen distance distributions between tripeptide-affected water and pure water indicates that around the tripeptides the contribution of water molecules with the mean energy of hydrogen bonds decreases, while the contribution of strong hydrogen bonds increases. Conclusions: It was found that hydrogen bonds between water molecules in the hydration sphere of the tripeptides are shorter and stronger than in pure water. This means that in the presence of the tested tripeptides, the structure of water is strengthened compared with pure water. Moreover, it was shown that water forms stronger and shorter hydrogen bonds in the vicinity of the Asparagine - Serine - Arginine (NSR) fragment. Acknowledgments: This work was funded by the National Science Centre, Poland (grant 2017/26/D/NZ1/00497).
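The band "gravity center" referred to above is the intensity-weighted first moment of the O-D stretching band. The short Python sketch below uses synthetic Gaussian bands with illustrative (assumed) band positions, not the measured spectra, and shows how a downward shift of this moment is computed and read as a sign of stronger hydrogen bonds.

```python
import numpy as np

def band_gravity_center(wavenumbers, absorbance):
    """First moment of an absorption band: sum(nu * A) / sum(A)."""
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)
    return np.sum(wavenumbers * absorbance) / np.sum(absorbance)

# Illustrative (synthetic) O-D stretch bands of HDO, wavenumbers in cm^-1
nu = np.linspace(2300, 2700, 400)
pure_water = np.exp(-((nu - 2509) / 80) ** 2)   # bulk HDO band (assumed position)
affected = np.exp(-((nu - 2490) / 75) ** 2)     # solute-affected band, shifted to lower wavenumbers

print(band_gravity_center(nu, pure_water))  # ~2509 cm^-1
print(band_gravity_center(nu, affected))    # ~2490 cm^-1: lower value -> stronger H-bonds
```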

Keywords: amyloids, K-peptide, hydration, FTIR spectroscopy

Procedia PDF Downloads 157
86 Investigation of the Possible Correlation of Earthquakes with a Red Tide Occurrence in the Persian Gulf and Oman Sea

Authors: Hadis Hosseinzadehnaseri

Abstract:

A red tide is a type of algal bloom that causes problems of various scales for human life and the environment, and it has become one of the serious global concerns in oceanography over recent decades. This phenomenon has affected Iranian waters, especially the Persian Gulf, over the last few years. Collecting data associated with this phenomenon and comparing them across different parts of the world is a practical way to study and control it. The factors favoring its occurrence either increase the nutrients required by the algae or provide an environment suitable for blooming. In this study, we examined the possible relation between earthquakes and harmful algal blooms in the waters of the Persian Gulf by comparing earthquake data with recorded red tides. On the one hand, earthquakes can cause changes in seawater temperature, which helps create a suitable environment; on the other hand, they increase the release and transport of nutrients from the seabed, so they can play a principal role in the development of red tide occurrences. Comparing the spatio-temporal distribution maps of earthquakes and deadly red tides in the Persian Gulf and Oman Sea supports the hypothesis that there is a meaningful relation between these two distributions. Comparing the number of earthquakes around the world with the number of red tides in many parts of the world also indicates a correlation between the two. Given the numerous earthquakes of recent years, particularly in the southern part of the country, this should be considered a warning of the possible re-occurrence of a critical, large-scale red tide: in 2008, the number of recorded earthquakes was higher than in the neighboring years, and in that year the red tide covered about 140,000 square kilometers of the Persian Gulf and the entire Oman Sea, persisting in the area for 10 months, which is considered a record among algal blooms that have occurred worldwide. In this paper, we obtain a logical and reasonable relation between earthquake frequency and the occurrence of this phenomenon by compiling statistics on earthquakes in southern Iran from 2000 to the end of the first half of 2013, collecting statistics on red tide occurrences in the region, and examining similar data from different parts of the world. As shown in Figure 1, a survey of the earthquake data shows that the largest number of earthquakes in southern Iran occurs in the fourth month of the Gregorian calendar, April (coinciding with Ordibehesht and Khordad in the Persian calendar), followed by the tenth month, October (coinciding with Aban and Azar in the Persian calendar).

Keywords: red tide, earthquake, Persian Gulf, harmful algal bloom

Procedia PDF Downloads 474
85 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that have transformed into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together; in this way they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and it can also be a sign that the body is overreacting to allergens. In addition, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually appear in childhood. In immunology and allergy clinics, a method that is fast, reliable, and, especially in childhood hypogammaglobulinemia, more convenient and less complicated for sampling from children would be more useful than the classical methods for the diagnosis and follow-up of these diseases. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak responses obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be performed with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 69
84 Subjective Temporal Resources: On the Relationship of Time Perspective and Chronic Time Pressure to Burnout

Authors: Diamant Irene, Dar Tamar

Abstract:

Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat on resources of time or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute in coping with life. As such, time pressure is associated in the literature with general stress experience and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure – feeling of time shortage and of being rushed, with another central aspect in subjective temporal experience - time perspective. Time perspective is a stable personal disposition, capturing the extent to which people subjectively remember the past, live the present and\or anticipate the future. Based on Hobfoll’s Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat on their time resources resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions – Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future, would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalist’ time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ - with little concern with the future consequences of actions, would experience less chronic time pressure and less burnout. Another temporal experience angle examined in this study is the difference between the actual distribution of time (as in a typical day) versus desired distribution of time (such as would have been distributed optimally during a day). It was hypothesized that there would be a positive correlation between the gap between these time distributions and chronic time pressure and burnout. Data was collected through an online self-reporting survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.

Keywords: conservation of resources, burnout, time pressure, time perspective

Procedia PDF Downloads 148
83 The Influences of Facies and Fine Kaolinite Formation Migration on Sandstones’ Reservoir Quality, Sarir Formation, Sirt Basin Libya

Authors: Faraj M. Elkhatri, Hana Ali Alafi

Abstract:

This study addresses the spatial and temporal distribution of diagenetic alterations and their impact on the reservoir quality of the Sarir Formation, Sirt Basin, Libya (present-day burial depth of about 9,000 feet). Depositional facies and diagenetic alterations are the main controls on reservoir quality; these depend on lithology and grain size as well as on the types of authigenic clay minerals and their distributions. The petrological investigation covered five sandstone wells in the study area and concentrated on the main rock components and the parameters that may affect the reservoirs. The main authigenic clay minerals are kaolinite and dickite, as confirmed by X-ray diffraction (XRD) analysis of the clay fraction; kaolinite and dickite are extensively present in large amounts in all of the wells. In addition, traces of detrital smectite and smaller amounts of illitized mud-matrix were found in SEM images. Thin clay layers present as clay-grain coatings at local depths are interpreted as remains of dissolved clay matrix partly transformed into kaolinite adjacent to and towards the pore throats. This may affect most of the pore throats of this sandstone, which are open and relatively clean, with some fine material formed in occluded pores. This material was identified by EDS analysis as collections of not only kaolinite booklets but also small kaolinite platelets derived from the disaggregation of larger kaolinite booklets. These patches of kaolinite not only fill pores but also coat some of the surrounding framework grains. Quartz grains are often enlarged by authigenic quartz overgrowths, which partially occlude and reduce porosity. Scanning Electron Microscopy (SEM) with Energy Dispersive Spectroscopy (EDS) was conducted on the post-test samples to examine any mud filtrate particles that might be present in the pore throats; semi-qualitative elemental data on selected minerals observed during the SEM study were obtained with the EDS unit. The samples showed mostly clean, open pore throats with limited occlusion by kaolinite. Very fine-grained elemental combinations (Si/Al/Na/Cl, Si/Al/Ca/Cl/Ti, and Qtz/Ti) were identified and confirmed by EDS analysis, and the fine-grained disaggregated material was identified as mainly kaolinite throughout the study area.

Keywords: fine migration, formation damage, kaolinite, solids plugging

Procedia PDF Downloads 48
82 Optimal Delivery of Two Similar Products to N Ordered Customers

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering products located at a central depot to customers who are scattered over a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from the depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have a limited carrying capacity for the goods that must be delivered. In the present work, we present a specific capacitated stochastic vehicle routing problem with realistic applications to the distribution of materials to shops, healthcare facilities, or military units. A vehicle starts its route from a depot loaded with items of two similar but not identical products, which we call product 1 and product 2. The vehicle must deliver the products to N customers according to a predefined sequence: first customer 1 must be serviced, then customer 2, then customer 3, and so on. The vehicle has a finite capacity, and after servicing all customers it returns to the depot. It is assumed that each customer prefers either product 1 or product 2 with known probabilities; the actual preference of each customer becomes known only when the vehicle visits the customer. It is also assumed that the quantity each customer demands is a random variable with a known distribution, and the actual demand is revealed upon the vehicle's arrival at the customer's site. The demand of each customer cannot exceed the vehicle capacity, and the vehicle is allowed, during its route, to return to the depot to restock with quantities of both products. The travel costs between consecutive customers and between the customers and the depot are known. If there is a shortage of the desired product, it is permitted to deliver the other product at a reduced price. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the expected total cost among all possible strategies. The optimal routing strategy can be found using a suitable stochastic dynamic programming algorithm. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure, i.e., it is characterized by critical numbers. This structural result enables us to construct an efficient special-purpose dynamic programming algorithm that operates only over those routing strategies having this structure. The findings of the present study lead us to the conclusion that the dynamic programming method can be a very useful tool for the solution of specific vehicle routing problems. A problem for future research could be the study of a similar stochastic vehicle routing problem in which the vehicle, instead of delivering products, collects them from the ordered customers.
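The Python sketch below is a deliberately simplified stochastic dynamic program for a route of this kind. All parameters (travel costs, demand distribution, preference probabilities, substitution and lost-sale penalties) are invented for illustration, and the sketch neither reproduces the authors' exact cost model nor exploits the threshold structure of the optimal policy; it only shows how the expected cost-to-go can be computed backward over the state (customer index, remaining stock of each product).

```python
from functools import lru_cache

# Illustrative parameters (assumed, not taken from the paper)
N = 4                                   # customers, served in the fixed order 1..N
CAP = 6                                 # vehicle capacity (total units of both products)
DEMAND = {1: 0.5, 2: 0.3, 3: 0.2}       # demand distribution, assumed identical for all customers
PREF1 = [0.6, 0.4, 0.7, 0.5]            # P(customer j prefers product 1)
C_DIRECT = 2.0                          # travel cost to the next customer directly
C_VIA_DEPOT = 5.0                       # travel cost to the next customer via the depot (restocking)
C_RETURN = 3.0                          # cost of the final return to the depot
SUBSTITUTION = 1.5                      # penalty per unit served with the non-preferred product
LOST = 10.0                             # penalty per unit of unmet demand

def serve(q_pref, q_other, demand):
    """Serve one customer: preferred stock first, then substitution, then lost sales."""
    from_pref = min(q_pref, demand)
    from_other = min(q_other, demand - from_pref)
    lost = demand - from_pref - from_other
    cost = SUBSTITUTION * from_other + LOST * lost
    return q_pref - from_pref, q_other - from_other, cost

@lru_cache(maxsize=None)
def value(j, q1, q2):
    """Minimum expected cost from customer j (0-based) to the end of the route."""
    if j == N:
        return C_RETURN
    best = float("inf")
    for restock in (False, True):                       # go directly, or restock at the depot first
        s1, s2 = (CAP // 2, CAP - CAP // 2) if restock else (q1, q2)
        travel = C_VIA_DEPOT if restock else C_DIRECT
        exp_cost = 0.0
        for d, pd in DEMAND.items():
            r1, r2, c = serve(s1, s2, d)                # customer prefers product 1
            exp_cost += pd * PREF1[j] * (c + value(j + 1, r1, r2))
            r2b, r1b, c = serve(s2, s1, d)              # customer prefers product 2
            exp_cost += pd * (1 - PREF1[j]) * (c + value(j + 1, r1b, r2b))
        best = min(best, travel + exp_cost)
    return best

print(round(value(0, CAP // 2, CAP - CAP // 2), 3))     # expected cost of the optimal strategy
```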

Keywords: collection of similar products, dynamic programming, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 240
81 The Influences of Facies and Fine Kaolinite Formation Migration on Sandstone's Reservoir Quality, Sarir Formation, Sirt Basin Libya

Authors: Faraj M. Elkhatri

Abstract:

This study addresses the spatial and temporal distribution of diagenetic alterations and their impact on the reservoir quality of the Sarir Formation, Sirt Basin, Libya (present-day burial depth of about 9000 feet). Depositional facies and diagenetic alterations are the main controls on reservoir quality; these depend on lithology and grain size as well as on the types of authigenic clay minerals and their distributions. The petrological investigation covered five sandstone wells in the study area and concentrated on the main rock components and the parameters that may affect the reservoirs. The main authigenic clay minerals are kaolinite and dickite, as confirmed by X-ray diffraction (XRD) analysis of the clay fraction; kaolinite and dickite are extensively present in large amounts in all of the wells. In addition, traces of detrital smectite and smaller amounts of illitized mud-matrix were found in SEM images. Thin clay layers present as clay-grain coatings at local depths are interpreted as remains of dissolved clay matrix partly transformed into kaolinite adjacent to and towards the pore throats. This may affect most of the pore throats of this sandstone, which are open and relatively clean, with some fine material formed in occluded pores. This material was identified by EDS analysis as collections of not only kaolinite booklets but also small kaolinite platelets derived from the disaggregation of larger kaolinite booklets. These patches of kaolinite not only fill pores but also coat some of the surrounding framework grains. Quartz grains are often enlarged by authigenic quartz overgrowths, which partially occlude and reduce porosity. Scanning Electron Microscopy (SEM) with Energy Dispersive Spectroscopy (EDS) was conducted on the post-test samples to examine any mud filtrate particles that might be present in the pore throats; semi-qualitative elemental data on selected minerals observed during the SEM study were obtained with the EDS unit. The samples showed mostly clean, open pore throats with limited occlusion by kaolinite. Very fine-grained elemental combinations (Si/Al/Na/Cl, Si/Al/Ca/Cl/Ti, and Qtz/Ti) were identified and confirmed by EDS analysis, and the fine-grained disaggregated material was identified as mainly kaolinite throughout the study area.

Keywords: pore throat, fine migration, formation damage, solids plugging, porosity loss

Procedia PDF Downloads 126
80 The Effects of a Total Resistance Exercises (TRX) Suspension Exercise Program on Physical Performance in Healthy Individuals

Authors: P. Cavlan, B. Kırmızıgil

Abstract:

Introduction: Suspension exercises make use of gravity and body weight and are thought to develop the balance, flexibility, and body stability necessary for daily life activities and sports, in addition to building correct functional strength. Suspension exercises based on body weight treat the human body as an integrated system. Total Resistance Exercises (TRX) is a suspension training system that physiotherapists, athletic health clinics, hospital exercise centers, and chiropractic clinics now use for rehabilitation purposes. The purpose of this study was to investigate and compare the effects of TRX suspension exercises on physical performance in healthy individuals. Method: Healthy subjects were divided into two groups, a study group and a control group, each with 40 individuals between the ages of 20 and 45 and with similar gender distributions. The study group had two sessions of suspension exercises per week for 8 weeks, and the control group performed no exercises during this period. All participants were given explosive strength, flexibility, strength, and endurance tests before and after the 8-week period. The tests used for evaluation were, respectively, the standing long jump test and single-leg (left and right) long jump tests, the sit-and-reach test, and the sit-up and back extension tests. Results: In the study group, a statistically significant difference was found between the pre- and post-tests in all evaluations, including explosive strength, flexibility, core strength, and endurance. These values were higher than those of the control group, and the post-test results differed statistically between the study and control groups. The study group showed improvement in all values. Conclusions: In this study, which was conducted to investigate and compare the effects of TRX suspension exercises on physical performance, the pre-test results of both groups were similar, and there was no significant difference between the pre- and post-test values in the control group. In the study group, improvements in explosive strength, flexibility, strength, and endurance were achieved after 8 weeks. According to these results, the TRX suspension exercise program improved explosive strength, flexibility, and especially core strength and endurance, and therefore physical performance. Based on the results of our study, physical performance, an indispensable requirement of daily life, was developed by the TRX suspension system. We conclude that TRX suspension exercises can be used to improve explosive strength and flexibility in healthy individuals, as well as to develop muscle strength and endurance of the core region. Further specific investigations could be conducted in this area so that programs emphasizing the physical performance features of TRX can be created.

Keywords: core strength, endurance, explosive strength, flexibility, physical performance, suspension exercises

Procedia PDF Downloads 143
79 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior from the applied reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated against a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this yields average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE -its melt flow behavior- is determined as a function of the previously determined polymeric microstructure, using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry, and a multi-angle light scattering detector is applied; it serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally offers the opportunity to predict and design LDPE processing behavior simply on the basis of process conditions such as feed streams and inlet temperatures and pressures.
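As a rough illustration of the Monte Carlo idea behind the microstructure step, the hedged sketch below grows chains monomer by monomer with assumed propagation and long-chain-branching probabilities (not the paper's fitted kinetics) and reports number- and weight-average chain lengths together with a branching frequency.

```python
import random

# Assumed, illustrative kinetic parameters (not the paper's fitted values)
P_PROP = 0.999     # probability of propagation vs. termination (sets the mean chain length)
P_LCB = 2e-4       # probability that a monomer unit carries a long-chain branch
N_CHAINS = 5_000

def grow_chain():
    """Grow a single chain monomer by monomer and count long-chain branches."""
    length, branches = 1, 0
    while random.random() < P_PROP:
        length += 1
        if random.random() < P_LCB:
            branches += 1
    return length, branches

chains = [grow_chain() for _ in range(N_CHAINS)]
lengths = [l for l, _ in chains]
mn = sum(lengths) / len(lengths)                           # number-average chain length
mw = sum(l * l for l in lengths) / sum(lengths)            # weight-average chain length
lcb = 1000 * sum(b for _, b in chains) / sum(lengths)      # branches per 1000 monomer units
print(f"Mn ~ {mn:.0f}, Mw ~ {mw:.0f}, PDI ~ {mw / mn:.2f}, LCB/1000C ~ {lcb:.2f}")
```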

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 106
78 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System

Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li

Abstract:

The solid oxide fuel cell (SOFC) is a promising green technology that can achieve high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner to convert their chemical energy into thermal energy by combustion. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during operation of the SOFC power generation system. For an afterburner of an SOFC system, temperature control with good thermal uniformity is important, and a burner with a well-designed geometry can usually achieve satisfactory performance. Computational fluid dynamics (CFD) simulation is well suited to designing an afterburner for an SOFC system. In this paper, the hydrogen combustion characteristics in an afterburner with simple geometry are studied using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels so as to improve the synergy between the heat and flow fields in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studies the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion; the equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed so that they contribute significantly to the heat transfer, i.e., the field synergy angle should be as small as possible. In the study cases, the averaged synergy angles of the burners are about 85°, 84°, and 81°, respectively.
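The field synergy angle referred to in the conclusions is the local angle between the velocity vector and the temperature gradient. The small sketch below uses illustrative vectors rather than CFD output and shows how the angle is evaluated at a point; averaging such local angles over the domain gives the quoted values.

```python
import numpy as np

def synergy_angle(velocity, temp_gradient):
    """Local field synergy angle (degrees) between the velocity vector and the temperature gradient."""
    u = np.asarray(velocity, dtype=float)
    g = np.asarray(temp_gradient, dtype=float)
    cos_beta = np.dot(u, g) / (np.linalg.norm(u) * np.linalg.norm(g))
    return np.degrees(np.arccos(np.clip(cos_beta, -1.0, 1.0)))

# Example: nearly perpendicular fields give a large angle, i.e. poor synergy
print(synergy_angle([1.0, 0.09, 0.0], [0.0, 1.0, 0.0]))   # ~85 degrees
```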

Keywords: afterburner, combustion, field synergy, solid oxide fuel cell

Procedia PDF Downloads 113
77 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous medium sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, and oil extraction. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions and to determine porosity. The algorithm then identifies the layer of void voxels next to the solid boundaries; an iterative process removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and the use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and was used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination of 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
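A minimal sketch of the burn idea is given below, assuming a 6-connected erosion as the "burn front"; the authors' optimized, subdomain-based implementation and the medial-axis collision step are not reproduced here, and the spherical test pore is an invented example.

```python
import numpy as np
from scipy import ndimage

def burn_numbers(void):
    """Assign each void voxel the iteration ('burn layer') at which it is peeled away.

    void: 3D boolean array, True = pore space, False = solid.
    Returns an int array: 0 for solid, k >= 1 for the burn layer of each void voxel.
    """
    layer = np.zeros(void.shape, dtype=int)
    remaining = void.copy()
    k = 0
    while remaining.any():
        k += 1
        # voxels that survive one 6-connected erosion are interior; the rest burn now
        interior = ndimage.binary_erosion(remaining)
        layer[remaining & ~interior] = k
        remaining = interior
    return layer

# Tiny synthetic domain: a spherical pore inside a solid cube
z, y, x = np.ogrid[:21, :21, :21]
pore = (x - 10) ** 2 + (y - 10) ** 2 + (z - 10) ** 2 <= 8 ** 2
burn = burn_numbers(pore)
print("maximum burn number (approximate pore radius in voxels):", burn.max())
```

The medial axis itself would be extracted from the voxels where burn fronts arriving from opposite solid boundaries meet, which is the collision step described in the abstract.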

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 94
76 Carbon Sequestration in Spatio-Temporal Vegetation Dynamics

Authors: Nothando Gwazani, K. R. Marembo

Abstract:

An increase in the atmospheric concentration of carbon dioxide (CO₂) from fossil fuels and land use change necessitates the identification of strategies for mitigating the threats associated with global warming. The oceans alone are insufficient to offset the accelerating rate of carbon emissions; however, the challenges of relying on the oceans to reduce the carbon footprint can be effectively overcome by storing carbon in terrestrial carbon sinks. The gases with special optical properties that are responsible for climate warming include carbon dioxide (CO₂), water vapor, methane (CH₄), nitrous oxide (N₂O), nitrogen oxides (NOₓ), stratospheric ozone (O₃), carbon monoxide (CO), and chlorofluorocarbons (CFCs). Amongst these, CO₂ plays a crucial role, as it contributes 50% of the total greenhouse effect and has been linked to climate change. Because plants act as carbon sinks, interest in terrestrial carbon sequestration has increased in an effort to explore opportunities for climate change mitigation. Removal of carbon from the atmosphere is a topical issue that addresses one important aspect of an overall strategy for carbon management, namely helping to mitigate the increasing emissions of CO₂. Thus, terrestrial ecosystems have gained importance for their potential to sequester carbon and relieve the carbon sink burden on the oceans, which has a substantial impact on ocean species. Field data and electromagnetic spectrum bands were analyzed using ArcGIS 10.2, QGIS 2.8, and ERDAS IMAGINE 2015 to examine the vegetation distribution. Satellite remote sensing data coupled with the Normalized Difference Vegetation Index (NDVI) were employed to assess potential future changes in vegetation distributions in the Eastern Cape Province of South Africa. The analysis, performed at 5-year intervals, examines the amount of carbon absorbed using the vegetation distribution. In 2015, the numerical results showed low vegetation distribution, which increased the acidity of the oceans and gravely affected fish species and corals. The outcomes suggest that the study area could be effectively utilized for carbon sequestration so as to mitigate ocean acidification. The vegetation changes measured in this investigation suggest an environmental shift and a reduced vegetation carbon sink, which threatens biodiversity and ecosystems. In order to sustain the amount of carbon in terrestrial ecosystems, the identified ecological factors should be enhanced through the application of good land and forest management practices. This will increase the carbon stock of terrestrial ecosystems, thereby reducing direct loss to the atmosphere.
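NDVI itself is a simple band ratio, (NIR − RED) / (NIR + RED). The sketch below uses made-up reflectance values purely to illustrate the index underlying the vegetation mapping; it does not reproduce the ArcGIS/QGIS/ERDAS workflow of the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # small epsilon guards against division by zero

# Illustrative reflectance values for three pixels: dense vegetation, sparse cover, bare soil
nir = np.array([0.55, 0.35, 0.25])
red = np.array([0.08, 0.20, 0.22])
print(ndvi(nir, red))   # roughly [0.75, 0.27, 0.06]
```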

Keywords: remote sensing, vegetation dynamics, carbon sequestration, terrestrial carbon sink

Procedia PDF Downloads 126
75 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 45
74 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where a person starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
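To make the encoding concrete, the sketch below shows the "continuous opinion" construction (answer sign times certainty) together with a toy update step in which the pull toward the partner's stated side is smaller than the random fluctuation, as observed in the study. The numeric influence and noise values are illustrative assumptions, not the fitted model parameters.

```python
import random

def continuous_opinion(agrees: bool, certainty: int) -> int:
    """Combine a natural-language answer and a 1-10 certainty into a value in [-10, +10]."""
    return certainty if agrees else -certainty

def update(own, partner_agrees, influence=0.6, noise_sd=1.5):
    """One interaction step: a small pull toward the partner's stated side plus
    random fluctuations that are larger than the pull itself (illustrative values)."""
    pull = influence if partner_agrees else -influence
    new = own + pull + random.gauss(0.0, noise_sd)
    return max(-10.0, min(10.0, new))

# Example: an 'agree, certainty 7' participant hears a partner who disagrees
me = continuous_opinion(True, 7)
print(me, "->", round(update(me, partner_agrees=False), 2))
```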

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 86
73 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material – that is to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a standard deviation technique by classes, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the characteristics of the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity of segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance considers significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, thus suggesting that the model provides a high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater efforts for data handling. All in all, the method developed is a time optimizer with a high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
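A hedged sketch of the measurement step for non-overlapping crystals is given below, assuming a binary segmentation mask is already available; scikit-image's labeling and region properties stand in for the authors' object-delimitation algorithm, and the pixel size is hypothetical. The resulting per-crystal areas and perimeters are the quantities from which the frequency distributions described above can be plotted.

```python
import numpy as np
from skimage import measure

def crystal_statistics(mask, pixel_size_nm=1.0):
    """Label non-overlapping crystals in a binary segmentation mask and
    return per-crystal area and perimeter in physical units."""
    labels = measure.label(mask, connectivity=1)
    stats = []
    for region in measure.regionprops(labels):
        stats.append({
            "label": region.label,
            "area_nm2": region.area * pixel_size_nm ** 2,
            "perimeter_nm": region.perimeter * pixel_size_nm,
        })
    return stats

# Toy mask with two separate "crystals" (illustrative only, not a real SEM segmentation)
mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:10] = True     # crystal 1
mask[12:18, 5:9] = True    # crystal 2
for s in crystal_statistics(mask, pixel_size_nm=5.0):
    print(s)
```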

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
72 Development a Home-Hotel-Hospital-School Community-Based Palliative Care Model for Patients with Cancer in Suratthani, Thailand

Authors: Patcharaporn Sakulpong, Wiriya Phokhwang

Abstract:

Background: Banpunrug (Love Sharing House), established in 2013, provides community-based palliative care for patients with cancer from 7 provinces in southern Thailand. These patients come to receive outpatient chemotherapy and radiotherapy at Suratthani Cancer Hospital; they are poor and uneducated, and they need accommodation during their 30-45 day course of therapy. Methods: Community participatory action research (PAR) was employed to establish a model of palliative care for patients with cancer. The participants included health care providers, the community, and patients and their families. The PAR process included problem identification and needs assessment, community and team establishment, field survey, organization founding, planning of the model of care, action and inquiry (PDCA), outcome evaluation, and model distribution. Results: The model of care at Banpunrug follows the HHHS concept: Banpunrug is a Home for patients; patients live in a house as comfortable as a Hotel; patients are given care and living facilities similar to those in a Hospital; and the house is a School where patients learn how to take care of themselves, how to live well with cancer, and, most importantly, how to prepare themselves for a good death. The house is also a school of humanized care for health care providers. Banpunrug's philosophy of care is based on friendship therapy, social and spiritual support, community partnership, patient-family centeredness, a Live & Love sharing house, and holistic and humanized care. With this philosophy, the house is managed as a home of the patients and everyone involved; everything is free of charge for all eligible patients and their family members; all facilities and living expenses are donated by benevolent people, friends, and the community. Everyone, including patients and family, has a sense of belonging to the house, and there is no authority gap between health care providers and the patients in the house. The house is situated in a temple and a community and is supported by many local nonprofit organizations and healthcare facilities, such as a sub-district health promotion hospital and Suratthani Cancer Hospital. Village health volunteers and multi-professional health care volunteers have contributed not only appropriate care but also knowledge and experience to develop a distinctive HHHS community-based palliative care model for patients with cancer. Since its opening, the house has been a home for more than 400 patients and 300 family members. It is also a model for many national and international healthcare organizations and providers, who come to visit and learn about palliative care in and by the community. Conclusions: The success of this palliative care model comes from community involvement, multi-professional volunteers and contributions, and the concepts of the HHHS model. Banpunrug promotes consistent care across the cancer trajectory, independent of prognosis, in order to strengthen a full integration of palliative

Keywords: community-based palliative care, model, participatory action research, patients with cancer

Procedia PDF Downloads 247
71 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model a data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact the modified maximum likelihood estimates are equivalent to maximum likelihood estimates, asymptotically. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random error having non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
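The efficiency argument can be illustrated without the authors' modified maximum likelihood machinery. The sketch below simulates a regression with skewed, fat-tailed (centred log-normal) errors and compares ordinary least squares with a simple robust stand-in estimator (Theil-Sen, the median of pairwise slopes); it demonstrates the general point that least squares loses efficiency under such errors, and is not an implementation of the paper's skew-t MML estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    xc = x - x.mean()
    return np.sum(xc * (y - y.mean())) / np.sum(xc * xc)

def theil_sen_slope(x, y):
    """Median of pairwise slopes: a simple robust alternative to least squares."""
    i, j = np.triu_indices(len(x), k=1)
    return np.median((y[j] - y[i]) / (x[j] - x[i]))

true_slope, n, reps = 2.0, 50, 2000
ols, robust = [], []
for _ in range(reps):
    x = rng.uniform(0, 10, n)
    e = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # skewed, fat-tailed errors
    e = e - np.exp(0.5)                              # subtract the log-normal mean to centre them
    y = true_slope * x + e
    ols.append(ols_slope(x, y))
    robust.append(theil_sen_slope(x, y))

print("OLS slope:  mean %.3f, sd %.3f" % (np.mean(ols), np.std(ols)))
print("Theil-Sen:  mean %.3f, sd %.3f" % (np.mean(robust), np.std(robust)))
```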

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 383
70 Evidence-Triggers for Care of Patients with Cleft Lip and Palate in Srinagarind Hospital: The Tawanchai Center and Out-Patients Surgical Room

Authors: Suteera Pradubwong, Pattama Surit, Sumalee Pongpagatip, Tharinee Pethchara, Bowornsilp Chowchuen

Abstract:

Background: Cleft lip and palate (CLP) is a congenital anomaly of the lip and palate that is caused by several factors. It was found in approximately one per 500 to 550 live births depending on nationality and socioeconomic status. The Tawanchai Center and out-patients surgical room of Srinagarind Hospital are responsible for providing care to patients with CLP (starting from birth to adolescent) and their caregivers. From the observations and interviews with nurses working in these units, they reported that both patients and their caregivers confronted many problems which affected their physical and mental health. Based on the Soukup’s model (2000), the researchers used evidence triggers from clinical practice (practice triggers) and related literature (knowledge triggers) to investigate the problems. Objective: The purpose of this study was to investigate the problems of care for patients with CLP in the Tawanchai Center and out-patient surgical room of Srinagarind Hospital. Material and Method: The descriptive method was used in this study. For practice triggers, the researchers obtained the data from medical records of ten patients with CLP and from interviewing two patients with CLP, eight caregivers, two nurses, and two assistant workers. Instruments for the interview consisted of a demographic data form and a semi-structured questionnaire. For knowledge triggers, the researchers used a literature search. The data from both practice and knowledge triggers were collected between February and May 2016. The quantitative data were analyzed through frequency and percentage distributions, and the qualitative data were analyzed through a content analysis. Results: The problems of care gained from practice and knowledge triggers were consistent and were identified as holistic issues, including 1) insufficient feeding, 2) risks of respiratory tract infections and physical disorders, 3) psychological problems, such as anxiety, stress, and distress, 4) socioeconomic problems, such as stigmatization, isolation, and loss of income, 5)spiritual problems, such as low self-esteem and low quality of life, 6) school absence and learning limitation, 7) lack of knowledge about CLP and its treatments, 8) misunderstanding towards roles among the multidisciplinary team, 9) no available services, and 10) shortage of healthcare professionals, especially speech-language pathologists (SLPs). Conclusion: From evidence-triggers, the problems of care affect the patients and their caregivers holistically. Integrated long-term care by the multidisciplinary team is needed for children with CLP starting from birth to adolescent. Nurses should provide effective care to these patients and their caregivers by using a holistic approach and working collaboratively with other healthcare providers in the multidisciplinary team.

Keywords: evidence-triggers, cleft lip, cleft palate, problems of care

Procedia PDF Downloads 196
69 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, the study demonstrates how computerized native and learner corpora can be used to enhance modal verb treatment in EFL textbooks. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the “secondary school” section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version, 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNC2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of the use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbook on learners’ use of modal verbs. Moreover, the learner corpus was analyzed for the misuse (syntactic errors, e.g., she can sings*.) of the nine modal verbs to uncover potential difficulties that confront learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of distributions of frequencies, semantic functions, and co-occurring structures. Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except could, will and must, partially confirming the correlation between the frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that the frequency effects can be intercepted by L1 interference. Besides, error analysis revealed that could, would, should and must are the most difficult for Chinese learners due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with the adjustment of modal verb treatment based on authentic use, it is important for textbook writers to take into consideration the L1 interference as well as learners’ difficulties in their use of modal verbs. The present study is a methodological showcase of the combination both native and learner corpora in the enhancement of EFL textbook language authenticity and appropriateness for learners.
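The distributional comparison rests on frequency counts of the nine modals in each corpus. The toy sketch below (plain-text input, regex tokenization) illustrates only the counting and per-million-word normalization step; it is not the CQPweb or WordSmith Tools workflow actually used in the study.

```python
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_frequencies(text):
    """Raw counts and per-million-word frequencies of the nine core modal verbs."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    per_million = {m: counts[m] * 1_000_000 / max(len(tokens), 1) for m in MODALS}
    return counts, per_million

sample = "You can go now, but you should call first. We might be late; he said he would wait."
raw, pmw = modal_frequencies(sample)
print(raw)
```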

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 99
68 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India

Authors: Bharti Singh, Shri K. Singh

Abstract:

Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality. It is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolved with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including State-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation. Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
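
The following minimal Python sketch illustrates, on synthetic data rather than NFHS microdata, two of the techniques mentioned above: a kernel density estimate of a nutrition indicator and a household-level random-intercept model; all variable names and effect sizes are illustrative assumptions.

```python
# Minimal sketch with synthetic data (not NFHS microdata): kernel density of a
# height-for-age z-score and a random-intercept model grouped by household.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "household": rng.integers(0, 400, n),    # household identifier (grouping level)
    "empowerment": rng.normal(0, 1, n),      # composite empowerment index (assumed)
    "male": rng.integers(0, 2, n),           # child sex indicator
})
hh_effect = rng.normal(0, 0.5, 400)[df["household"]]   # household-level variation
df["haz"] = -1.2 + 0.10 * df["empowerment"] - 0.05 * df["male"] + hh_effect + rng.normal(0, 1, n)

kde = gaussian_kde(df["haz"])                # density of the nutrition indicator
print("density at HAZ = -2:", kde(-2.0)[0])

model = smf.mixedlm("haz ~ empowerment + male", df, groups=df["household"]).fit()
print(model.summary())
```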

Keywords: child nutrition, India, NFHS, women’s empowerment

Procedia PDF Downloads 13
67 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on the control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentrations and average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li₂O·SiO₂. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in NaNO₃ salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in the glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find profiles of diffusant concentrations and average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of the glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters, which provided a reliable coincidence of the simulation and experimental data, were found. This ensured the adequate modeling of the process of the glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of crystalline grains.
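
The decrystallization model itself is not given in this abstract, so the sketch below only illustrates the fitting idea described here: scan candidate (β, γ) values and keep the pair whose simulated grain-size profile best matches a measured one. The function simulate_grain_profile() is a hypothetical placeholder, not the authors' model, and all numbers are illustrative.

```python
# Illustrative parameter-fitting sketch; simulate_grain_profile() is a placeholder
# standing in for the real ion-exchange/dissolution model.
import numpy as np

def simulate_grain_profile(beta, gamma, depth_um):
    # Hypothetical stand-in: some monotone profile shaped by beta and gamma.
    return np.exp(-depth_um / (50.0 * gamma)) * (1.0 - beta) + beta

depth = np.linspace(0, 300, 61)                      # depth below surface, micrometres
measured = simulate_grain_profile(0.3, 1.5, depth)   # synthetic "measurement"
measured += np.random.default_rng(1).normal(0, 0.01, depth.size)

best = None
for beta in np.linspace(0.1, 0.9, 17):
    for gamma in np.linspace(0.5, 3.0, 26):
        err = np.mean((simulate_grain_profile(beta, gamma, depth) - measured) ** 2)
        if best is None or err < best[0]:
            best = (err, beta, gamma)

print("best fit: beta = %.2f, gamma = %.2f (MSE = %.2e)" % (best[1], best[2], best[0]))
```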

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 248
66 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw-milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. It is known that frauds considerably impact customers and certificating institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemaking producers. Robust, validated, and univocal analytical methods are therefore needed to allow Food Control and Security bodies to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the described work. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced, by putting each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C and for up to 3 min, with continuous agitation, and quench-cooling to 25°C in a water and ice solution. Raw and thermally treated samples were analysed in terms of ¹H T₂ transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T₂ relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T₂ of the predominant ¹H population was detected in heat-treated milk as compared to raw milk. The decrease of the T₂ parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey proteins and casein) arrangement promoted by the heat treatment. Furthermore, experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility to extend the TD-NMR technique directly to cheese to develop a method for assessing a fraud related to the use of a milk thermal treatment in PDO raw-milk cheese. Results suggest that TD-NMR assays might pave the way to the detailed characterisation of heat treatments of milk.
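
A minimal sketch of the kind of T₂ comparison described above, using synthetic monoexponential CPMG decays rather than the measured data or the CONTIN inversion; the relaxation times, echo spacing interpretation, and noise level are assumptions.

```python
# Minimal sketch: fit an effective T2 to synthetic raw-milk-like and heat-treated-like
# CPMG decays and compare the fitted constants.
import numpy as np
from scipy.optimize import curve_fit

def cpmg_decay(t, amplitude, t2):
    return amplitude * np.exp(-t / t2)

# Echo times for 8000 points at 0.05 ms interpulse spacing (seconds); assumed geometry.
t = np.arange(1, 8001) * 0.05e-3 * 2
rng = np.random.default_rng(2)
raw = cpmg_decay(t, 1.0, 0.40) + rng.normal(0, 0.005, t.size)     # assumed T2 ~ 400 ms
heated = cpmg_decay(t, 1.0, 0.33) + rng.normal(0, 0.005, t.size)  # assumed shorter T2

(_, t2_raw), _ = curve_fit(cpmg_decay, t, raw, p0=(1.0, 0.3))
(_, t2_heat), _ = curve_fit(cpmg_decay, t, heated, p0=(1.0, 0.3))
print(f"fitted T2: raw {t2_raw*1e3:.0f} ms, heat-treated {t2_heat*1e3:.0f} ms")
```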

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 213
65 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage

Authors: M. Vezir Kahraman, Emre Basturk

Abstract:

Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential to achieve energy savings, which in turn decrease the environmental impact related to energy usage. For this purpose, phase change materials (PCMs) that work as 'latent heat storage units' and can store or release large amounts of energy are preferred. PCMs are utilized to absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found different application areas such as solar energy storage and transfer, HVAC (heating, ventilating and air conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distributions, industrial waste heat recovery, under-floor heating systems and modified fabrics in textiles. Ultraviolet (UV)-curing technology has many advantages, which have made it applicable in many different fields: low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and environmental friendliness. One of the advantages of UV-cured PCMs is that they prevent the interior PCMs from leaking. A shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, the leakage problem is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system; leakage is minimized because the photo-cross-linked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM. Photo-crosslinked thiol-ene based polymers containing fatty alcohols were prepared and characterized for the purpose of phase change materials (PCMs). Different types of fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR techniques. The phase transition behaviors and thermal stability of the prepared photo-crosslinked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Project.

Keywords: differential scanning calorimetry (DSC), polymeric phase change material, thermal energy storage, UV-curing

Procedia PDF Downloads 204
64 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas in the world, particularly in the Middle-East region where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict the flow pattern and assess water resource security, as well as the groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred at Al-Najaf City, which is located in the mid-west of Iraq and adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with the Digital Elevation Model (DEM) data obtained from the "Global Land Cover Facility" (GLCF) and "United States Geological Survey" (USGS) for the study area. The computer model also includes the distribution of 69 wells in the area, with steady pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of the field data in the study area for the period of 1980-2014. The hydraulic conductivity from the measurements at the locations of wells is interpolated for model use. The model is calibrated with the measured hydraulic heads at the locations of 50 of the 69 wells in the domain, and the results show a good agreement. The standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971 respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. Hydraulic conductivity is found to be another parameter which can affect the results significantly; therefore it requires high-quality field data. The results show that there is a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11759 m³/day to the groundwater, instead of gaining approximately 11178 m³/day from the groundwater as it would if there were no pumping from the wells. It is expected that the results obtained from the study can provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
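
The calibration statistics quoted above (SEE, RMSE, normalized RMSE, correlation coefficient) can be reproduced from observed and simulated heads with a few lines of Python; the head values below are made-up placeholders, and the SEE formula is one common definition.

```python
# Minimal sketch of groundwater-model calibration statistics for observed vs.
# simulated heads at calibration wells (placeholder values, in metres).
import numpy as np

observed = np.array([18.2, 17.9, 17.5, 16.8, 16.1, 15.7, 15.2, 14.9])
simulated = np.array([18.0, 18.1, 17.3, 16.9, 16.4, 15.5, 15.3, 14.6])

residuals = simulated - observed
rmse = np.sqrt(np.mean(residuals ** 2))
nrmse = rmse / (observed.max() - observed.min()) * 100      # percent of observed range
see = residuals.std(ddof=1) / np.sqrt(len(residuals))       # one common SEE definition
r = np.corrcoef(observed, simulated)[0, 1]

print(f"RMSE {rmse:.3f} m, NRMSE {nrmse:.2f} %, SEE {see:.3f} m, r {r:.3f}")
```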

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 178
63 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful to support the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are different correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. Especially subgroup analysis methods are developed, extended and used to analyze and find out the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymous patient data are transformed into a special machine language format. The imported data are used as input for algorithms of conditional probability methods to calculate the parameter distributions concerning a special given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be performed to produce operation- and patient-related correlations. So, new knowledge was generated by finding causal relations between the operational equipment, the medical instances and the patient-specific history by a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets account for about 80 parameters as special characteristic features per patient. For different extended patient groups (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis. So the newly generated hypotheses could be interpreted regarding the dependency or independency of patient number. Conclusions: The aim and advantage of such a semi-automated self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and also conjunctively associated conditions can be found to conclude the interested goal parameter. So the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance for communication with the clinical experts.
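
A minimal sketch of the rule-ranking idea described above, on a synthetic table rather than the hospital data: support and confidence of single-premise rules concluding a goal parameter are computed and ranked; the feature names are hypothetical.

```python
# Minimal sketch: rank single-condition association rules by confidence
# P(goal | premise) on a small synthetic binary patient table.
import pandas as pd

df = pd.DataFrame({
    "instrument_A": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],   # hypothetical equipment flag
    "prior_surgery": [0, 1, 0, 1, 1, 1, 0, 0, 0, 1],  # hypothetical history flag
    "goal":          [0, 1, 0, 1, 1, 1, 0, 0, 1, 1],  # goal parameter of interest
})

rules = []
for feature in ["instrument_A", "prior_surgery"]:
    premise = df[feature] == 1
    support = premise.mean()                          # P(premise)
    confidence = df.loc[premise, "goal"].mean()       # P(goal | premise)
    rules.append((feature, support, confidence))

for feature, support, confidence in sorted(rules, key=lambda r: -r[2]):
    print(f"IF {feature}=1 THEN goal=1  support={support:.2f}  confidence={confidence:.2f}")
```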

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 234
62 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
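
One simulation replicate of the kind described above might look like the following sketch (Python/statsmodels), which simulates a two-arm trial with a binary outcome and one covariate and estimates the adjusted relative risk with a modified-Poisson GLM and robust standard errors; the sample size, baseline risk, and effect sizes are illustrative assumptions.

```python
# Minimal sketch of a single replicate: simulate a binary-outcome trial and fit a
# modified-Poisson GLM (Poisson family on binary data) with robust (HC0) errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
treat = rng.integers(0, 2, n)
covar = rng.normal(0, 1, n)
# Assumed true log relative risks: 0.3 (treatment), 0.2 (covariate); baseline risk 0.2.
p = np.clip(0.2 * np.exp(0.3 * treat + 0.2 * covar), 0, 1)
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, covar]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("adjusted RR (treatment):", np.exp(fit.params[1]))
print("robust 95% CI:", np.exp(fit.conf_int()[1]))
```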

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 84
61 Fast Detection of Local Fiber Shifts by X-Ray Scattering

Authors: Peter Modregger, Özgül Öztürk

Abstract:

Glass fabric reinforced thermoplastics (GFRT) are composite materials which combine low weight and resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of these defects is local fiber shifts, which can be difficult to detect. Recently, we have experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI constitutes a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, in order to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The incident X-ray beam is split into smaller beamlets by the sample mask. These beamlets are distorted by the interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we have shown that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. The quantification of defect detection performance was done by using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples. This was further improved for the scattering contrast to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrast provided by EI. However, a potential application in, for example, production monitoring requires fast data acquisition times. For the results above, the scanning of the sample mask was performed over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility of real-time acquisition. This contributes a vital step towards the translation of EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
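
A minimal sketch of the group comparison described above: the per-sample standard deviation of the scattering signal over a field of view is compared between intact and defective groups via a contrast-to-noise ratio and a two-sample test; the scattering maps are synthetic stand-ins, and the CNR formula is one common definition, not necessarily the one used in the publication.

```python
# Minimal sketch: per-sample std of scattering maps, then CNR and p-value between
# intact and defect groups (synthetic data).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
intact = [rng.normal(1.0, 0.05, (64, 64)) for _ in range(6)]   # scattering maps, intact
defect = [rng.normal(1.0, 0.09, (64, 64)) for _ in range(6)]   # broader spread, defective

intact_std = np.array([img.std() for img in intact])   # per-sample feature over the FOV
defect_std = np.array([img.std() for img in defect])

cnr = abs(defect_std.mean() - intact_std.mean()) / np.sqrt(
    0.5 * (defect_std.var(ddof=1) + intact_std.var(ddof=1)))
t_stat, p_value = ttest_ind(defect_std, intact_std)
print(f"CNR = {cnr:.1f}, p = {p_value:.4f}")
```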

Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge illumination

Procedia PDF Downloads 37
60 Gendered Mobility: Deep Distributions in Urban Transport Systems in Delhi

Authors: Nidhi Prabha

Abstract:

Transportation as a sector is one of the most significant infrastructural elements of the ‘urban’. The distinctness of urban life in a city is marked by the dynamic movements that it enables within the city-space. Therefore, it is important to study the public-transport systems that enable and foster the mobility which characterizes the urban. It is also crucial to underscore the way one examines urban transport systems - either as an infrastructural unit in a strict physical-structural sense, or as a structural unit which acts as a prism refracting multiple experiences depending on the location of the ‘commuter’. In the proposed paper, the attempt is to uncover and investigate the assumption of the neuter-commuter by looking at urban transportation in the second sense, i.e. as a structural unit which is experienced differently by different kinds of commuters, thus making transportation deeply distributed across various social structures and locations, like class or gender, which map onto the transport systems. To this end, the public-transit systems operating in urban Delhi, i.e. the Delhi Metro and the Delhi Transport Corporation run public buses, are looked at as case studies. The study is premised on knowledge and data gained from both primary and secondary sources. Primary sources include data and knowledge collected from fieldwork, the methodology for which ranged from a ‘mixed-methods’ (qualitative-then-quantitative) approach to borrowed ethnographic techniques. Apart from fieldwork, other primary sources include Annual Reports and policy documents of the Delhi Metro Rail Corporation (DMRC) and the Delhi Transport Corporation (DTC), Union and Delhi budgets, the Economic Survey of Delhi, press releases, etc. Secondary sources include the vast array of literature available on the critical nodes that inform the research, like gender, transport geographies, and urban space. The study indicates a deeply distributed urban transport system wherein various social-structural locations map onto the way different kinds of commuters experience mobility or movement within the city space. Mobility or movement, therefore, becomes gendered or has class-based ramifications. The neuter-commuter assumption is thus challenged. Such an understanding enables us to challenge the anonymity which the ‘urban’ otherwise claims to provide over the rural. The rural is opposed to the urban, wherein the urban ushers in a modern way of life, breaking ties of traditional social identities. A careful study of the transport systems through the traveling patterns and choices of commuters, however, indicates that this does not hold true, as even the same ‘public space’ of the transport systems allocates different places to different kinds of commuters. The central argument made through the research is therefore that infrastructure like urban transport systems has to be studied and examined beyond just its physical structure. The various experiences of daily mobility of different kinds of commuters have to be taken into account in order to design and plan more inclusive transport systems.

Keywords: gender, infrastructure, mobility, urban-transport-systems

Procedia PDF Downloads 191
59 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points in natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work in the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends 9,409 km, while the system has 16 national and 3 international natural gas processing plants, including more than 143 delivery points to final consumers. Thus, the potential of installing PRTs in Brazil is 66 MW of power, which could annually avoid the emission of 235,800 tons of CO2 and generate 333 GWh/year of electricity. On the other hand, an economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach applied to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. The result is a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the potential financial feasibility of PRT projects in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
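
A minimal sketch of the Monte Carlo step (not the authors' financial model): uncertain inputs such as generated energy and energy price are drawn repeatedly, a simple net present value is computed for each draw, and the empirical distribution is summarised; all numbers are illustrative placeholders.

```python
# Minimal Monte Carlo sketch for a PRT project's NPV distribution (placeholder inputs).
import numpy as np

rng = np.random.default_rng(5)
n_sims, years, discount = 10_000, 15, 0.10
capex = 2.0e6                                        # upfront cost, monetary units (assumed)

energy = rng.normal(8.0, 1.5, n_sims)                # GWh/year generated by the PRT (assumed)
price = rng.lognormal(np.log(60.0), 0.2, n_sims)     # energy price per MWh (assumed)
annual_cashflow = energy * 1_000 * price - 150_000   # revenue minus fixed O&M (assumed)

discount_sum = (1.0 / (1.0 + discount) ** np.arange(1, years + 1)).sum()
npv = -capex + annual_cashflow * discount_sum        # constant annual cash flow assumed

print("mean NPV:", npv.mean())
print("P(NPV < 0):", (npv < 0).mean())
print("5th/95th percentiles:", np.percentile(npv, [5, 95]))
```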

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods

Procedia PDF Downloads 85
58 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites

Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana

Abstract:

With the growth of environmental awareness, extensive research is underway to develop the next generation of materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) has attracted great interest for ecological and medical applications. Also, cellulose is one of the most abundant biodegradable, renewable polymers found in nature. It has several advantages such as low cost, high mechanical strength, biodegradability and so on. Recently, an immense deal of attention has been paid to the scientific and technological development of α-cellulose based composite materials. PLLA could be used for grafting of cellulose to improve the compatibility prior to the composite preparation. Here it is quite difficult to form a bond between weakly hydrophilic molecules like PLLA and α-cellulose. Dimers and oligomers can easily be grafted onto the surface of the cellulose by ring-opening or polycondensation methods due to their low molecular weight. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. Here ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acid at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by the solution mixing and film casting method. Confirmation of grafting was carried out through FTIR spectroscopy and SEM analysis. A strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose confirms the grafting of ODLA onto α-cellulose; this peak is absent in α-cellulose. It is also observed from SEM photographs that there are some white areas (spots) on ODLA-grafted α-cellulose as compared to α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. Analysis of the composites is carried out by FTIR, SEM, WAXD and thermogravimetric analysis. Most of the FTIR characteristic absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and grafted cellulose have better compatibility in composites via intermolecular hydrogen bonding; this supports previously published results. Grafted α-cellulose distributions in the composites are uniform, as observed by SEM analysis. WAXD studies show that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentages of ODLA-grafted α-cellulose. As a consequence, the resultant composites have improved resistance toward thermal degradation. The effects of the length of the grafted chain and the biodegradability of the composites will be studied in further research.

Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)

Procedia PDF Downloads 97