Search results for: light weight algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10744

424 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a by-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since the test set includes significant variation in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
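As an illustration of the measurement stage described above, the sketch below labels crystals in an already-segmented binary mask and extracts the area and perimeter of each one. It is a minimal stand-in for the paper's object delimitation algorithm, assuming the U-net output has been thresholded to a 0/1 mask; the function names and the toy mask are illustrative, not from the original work.

```python
from collections import deque

def label_crystals(mask):
    """4-connected component labeling of a binary mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1  # start a new crystal label
                q = deque([(y, x)])
                labels[y][x] = current
                while q:  # breadth-first flood fill
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

def measure(labels, n):
    """Area (pixel count) and perimeter (exposed pixel edges) per crystal."""
    h, w = len(labels), len(labels[0])
    area, perim = [0] * (n + 1), [0] * (n + 1)
    for y in range(h):
        for x in range(w):
            lab = labels[y][x]
            if lab:
                area[lab] += 1
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny][nx] != lab:
                        perim[lab] += 1
    return area[1:], perim[1:]

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 1],
]
labels, n = label_crystals(mask)
areas, perims = measure(labels, n)
print(n, areas, perims)  # -> 2 [4, 2] [8, 6]
```

The per-crystal areas and perimeters collected this way are exactly the kind of records that feed the frequency-distribution graphs the abstract mentions.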

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
423 Rumen Epithelium Development of Bovine Fetuses and Newborn Calves

Authors: Juliana Shimara Pires Ferrão, Letícia Palmeira Pinto, Francisco Palma Rennó, Francisco Javier Hernandez Blazquez

Abstract:

The ruminant stomach is a complex and multi-chambered organ. Although the true stomach (abomasum) is fully differentiated and functional at birth, the same does not occur with the rumen chamber. At this moment, rumen papillae are small or nonexistent. The papillae only fully develop after weaning and during calf growth. Papillae development and ruminal epithelium specialization during fetal growth and at birth must be two interdependent processes that prepare the rumen to adapt to adult ruminant feeding. The microscopic study of the rumen epithelium at these early phases of life is important to understand how this structure prepares the rumen to deal with the subsequent weaning process and its functional activation. Samples of ruminal mucosa of bovine fetuses (110- and 150-day-old) and newborn calves were collected (dorsal and ventral portions) and processed for light and electron microscopy and immunohistochemistry. The basal cell layer of the stratified squamous epithelium present in the different ruminal portions of the fetuses was thicker than in the same portions of newborn calves. The superficial and intermediate epithelial layers of 150-day-old fetuses were thicker than those found at the other two ages studied. At this age (150 days), dermal papillae begin to invade the intermediate epithelial layer, which gradually disappears in newborn calves. At birth, the ruminal papillae project from the epithelial surface, probably by regression of the epithelial cells (transitory cells) surrounding the dermal papillae. The PCNA cell proliferation index (%) was calculated for all epithelial samples. 150-day-old fetuses showed increased cell proliferation in the basal cell layer (Dorsal Portion: 84.2%; Ventral Portion: 89.8%) compared to the other ages studied. Newborn calves showed an intermediate index (Dorsal Portion: 65.1%; Ventral Portion: 48.9%), whereas 110-day-old fetuses had the lowest proliferation index (Dorsal Portion: 57.2%; Ventral Portion: 20.6%).
Regarding the transitory epithelium, 110-day-old fetuses showed the lowest proliferation index (Dorsal Portion: 44.6%; Ventral Portion: 20.1%), 150-day-old fetuses showed an intermediate proliferation index (Dorsal Portion: 57.5%; Ventral Portion: 71.1%), and newborn calves presented a higher proliferation index (Dorsal Portion: 75.1%; Ventral Portion: 19.6%). Under TEM, the 110- and 150-day-old fetuses presented a thicker and poorly organized basal cell layer, with large nuclei and dense cytoplasm. In newborn calves, the basal cell layer was more organized and had fewer layers, but was typically similar in both regions of the rumen. For the transitory epithelium, fetuses displayed larger cells than those found in newborn calves, with less electron-dense cytoplasm than that found in the basal cells. The ruminal dorsal portion has an overall higher cell proliferation rate than the ventral portion. Thus, we can infer that the dorsal portion may have higher cell activity than the ventral portion during ruminal development. Moreover, the basal cell layer is thicker in the 110- and 150-day-old fetuses than in the newborn calves. The transitory epithelium, which is much reduced at birth, may have a structural support function for the developing dermal papillae. When it regresses or is sheared off, the papillae are “carved out” from the surrounding epithelial layer.

Keywords: bovine, calf, epithelium, fetus, hematoxylin-eosin, immunohistochemistry, TEM, rumen

Procedia PDF Downloads 388
422 ExactData Smart Tool for Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

Exact Data is a smart tool which helps with meaningful marketing content creation. It helps marketers achieve this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection, and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language that specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction, and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier to the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms.
By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation, and text correction. To ensure the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of methods, including transformers and convolutional neural networks (CNNs), to determine which are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool that can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
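Neither transformers nor CNNs can be reproduced in a few lines, but the sentiment analysis task the tool performs on advertisement text can be sketched with a bag-of-words baseline. The lexicon and example ads below are invented for illustration and are not drawn from the ExactData corpora; a real deployment would replace this scorer with the trained models the abstract compares.

```python
# Hypothetical sentiment lexicon; a stand-in for a trained classifier.
POSITIVE = {"great", "love", "best", "amazing"}
NEGATIVE = {"bad", "worst", "boring", "hate"}

def sentiment(text):
    """Score an ad text by counting lexicon hits and map the sign to a label."""
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

ads = ["The best coffee you will ever love", "Boring ad with bad design"]
print([sentiment(a) for a in ads])  # -> ['positive', 'negative']
```

Evaluating transformer and CNN classifiers against a baseline of this kind is a common way to quantify how much the heavier models actually add for a given use case.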

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 87
421 Development and Evaluation of Economical Self-cleaning Cement

Authors: Anil Saini, Jatinder Kumar Ratan

Abstract:

Nowadays, the key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In urban cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment, which may be utilized for the control of air pollution if it is built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nano-toxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to a reduction in photocatalytic activity, thus reducing the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application in the present scenario. The current work proposes the use of surface-fluorinated m-TiO₂ for the formulation of self-cleaning cement to enhance its photocatalytic activity.
Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer–Emmett–Teller) surface area analysis, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone of inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also showed a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens a route for further research on the facile and economical formulation of self-cleaning cement.

Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination

Procedia PDF Downloads 172
420 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one of the engine types with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used ways of transforming acoustic energy to electric energy are the application of a linear generator or a usual generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of piston use are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bi-directional impulse turbine as an energy converter is suggested. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change in the process. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type of impulse turbine was chosen: an axial impulse turbine, which has a simpler design than a radial turbine and similar efficiency. The peculiarities of the method of calculating an impulse turbine are discussed. They include changes in gas pressure and velocity as functions of time during the generation of oscillating gas flow shock waves in a thermoacoustic system. In a thermoacoustic system, the pressure constantly changes according to a certain law due to acoustic wave generation. The peak values of the pressure define the amplitude, which determines the acoustic power.
Gas flowing in a thermoacoustic system periodically changes its direction; its mean velocity is equal to zero, but its peak values can be used for bi-directional turbine rotation. In contrast with a conventional, steadily fed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and numerical research of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer. An experimental unit was designed for verification of the numerical modeling results. An acoustic speaker was used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable. By its characteristics as a converter, it is comparable with linear electric generators, but its lifetime will be longer and the engine itself smaller due to the turbine's rotary motion.
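For reference, the time-averaged acoustic power that the abstract links to the pressure amplitude is commonly written, in standard thermoacoustic notation (the symbols below are a textbook convention, not taken from the paper):

```latex
\dot{W} = \tfrac{1}{2}\,|p_1|\,|U_1|\cos\varphi
```

where \(|p_1|\) is the pressure amplitude (1.7 kPa in the reported calculation), \(|U_1|\) is the volumetric velocity amplitude of the oscillating flow, and \(\varphi\) is the phase angle between the pressure and velocity oscillations.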

Keywords: acoustic power, bi-directional impulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 378
419 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential for misunderstanding the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13,000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FR) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of materials operation: irradiation environment, hydrogenation, thermocycling, etc. Limited data suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied the hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing).
Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. The experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel's mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to establish the evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for assessing and predicting the RPV's residual lifetime, more work should be done on the irradiated metal-hydrogen interaction in order to monitor more reliably the status of irradiated materials.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 273
418 Consumers' Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machines

Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger

Abstract:

Reducing water temperatures in the wash phase of a washing programme and increasing the overall cycle duration are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this washing strategy of lower temperatures combined with longer programme durations extensively to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative online survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden, and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding the effects of lower washing temperatures and longer cycle durations on consumers' acceptance of the programme. The risk of the long wash cycle is that consumers might not use the energy-efficient Standard programmes, thinking of this option as inconvenient, and might therefore switch to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following a predefined quota: involvement in laundry washing (substantial); distribution of gender (more than 50% female); selected age groups (20-39 years, 40-59 years, 60-74 years); household size (1, 2, 3, 4, and more than 4 people). Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age classes and household sizes as quotas for the survey distribution in each country. Before starting the analyses, the validity of each dataset was checked with the aid of control questions.
After excluding the outlier data, the panel diminished from 5,100 to 4,843 households. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that the long cycles could be energy-saving. However, the results of our survey do not confirm a relation between the frequency of using Standard cotton (Eco) or Energy-saving programmes and the duration of the programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take so long; perhaps consumers just choose an additional time-reduction option when selecting those programmes, and this finding might change if the Energy-saving programmes took longer. Therefore, it may be assumed that introducing the programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes in order to save energy than to accept longer programme cycles, and the majority of them accept deviation from the nominal temperature of the programme as long as the results are good.
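The Eurostat-based quota calculation described above amounts to proportional allocation: each country's sample is split across demographic cells in proportion to the cells' population shares. The sketch below illustrates the arithmetic only; the cell populations and sample size are invented, not Eurostat figures or the study's actual quotas.

```python
def proportional_quota(sample_size, cell_populations):
    """Split a sample across demographic cells in proportion to each
    cell's share of the total population (proportional allocation)."""
    total = sum(cell_populations.values())
    return {cell: round(sample_size * pop / total)
            for cell, pop in cell_populations.items()}

# Hypothetical population counts for the three surveyed age groups in one country.
cells = {"20-39": 2_000_000, "40-59": 1_500_000, "60-74": 500_000}
print(proportional_quota(440, cells))  # -> {'20-39': 220, '40-59': 165, '60-74': 55}
```

In practice the allocation would be run over the cross of age group and household size per country, with rounding adjusted so the cell quotas sum exactly to the national sample.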

Keywords: duration, energy-saving, standard programmes, washing temperature

Procedia PDF Downloads 222
417 Right Atrial Tissue Morphology in Acquired Heart Diseases

Authors: Edite Kulmane, Mara Pilmane, Romans Lacis

Abstract:

Introduction: Acquired heart diseases remain one of the leading health care problems in the world. Changes in the myocardium of diseased hearts are complex, and their pathogenesis is still not fully clear. The aim of this study was to identify the appearance and distribution of apoptosis, homeostasis-regulating factors, and innervation and ischemia markers in right atrial tissue in different acquired heart diseases. Methods: Right atrial tissue fragments were taken from 12 patients during elective open heart surgery. All patients were operated on because of acquired heart diseases: aortic valve stenosis (5 patients), coronary heart disease (5 patients), coronary heart disease and secondary mitral insufficiency (1 patient), and mitral disease (1 patient). The mean age was (mean±SD) 70.2±7.0 years (range 58-83 years). The tissues were stained with the haematoxylin and eosin method for routine light microscopic examination and processed for immunohistochemical detection of protein gene product 9.5 (PGP 9.5), human atrial natriuretic peptide (hANUP), vascular endothelial growth factor (VEGF), chromogranin A, and endothelin. Apoptosis was detected by the TUNEL method. Results: All specimens showed degeneration of cardiomyocytes with lysis of myofibrils, diffuse vacuolization, especially in the perinuclear region, and variation in the size of cells and their nuclei. Severe invasion of connective tissue was observed in the main part of all fragments. The apoptotic index ranged from 24 to 91%. One specimen showed a region of newly formed microvessels with cube-shaped endotheliocytes that were positive for PGP 9.5, endothelin, chromogranin A, and VEGF. All fragments taken from patients with coronary heart disease showed numerous PGP 9.5-containing nerve fibres, except those from the patient with secondary mitral insufficiency, which showed just a few PGP 9.5-positive nerves. In the majority of specimens, regions with cube-shaped VEGF-immunoreactive endocardial and epicardial cells were observed.
VEGF-positive endothelial cells were observed in only a few specimens. There was no significant difference in hANUP-secreting cells among the specimens. In all patients operated on for coronary heart disease, moderate to numerous chromogranin A-positive cells were seen, while tissue from patients with aortic valve stenosis demonstrated just a few positive cells. Conclusions: Complex detection of different factors may selectively indicate disordered morphopathogenetic events of heart disease: a decrease in PGP 9.5 nerves suggests decreased innervation of the organ; increased apoptosis indicates cell death without ingrowth of connective tissue; the persistent presence of hANUP proves the unchanged homeostasis of cardiomyocytes, probably supported by the expression of chromogranins. Finally, a decrease in VEGF marks the regions of affected blood vessels in hearts affected by acquired heart disease.

Keywords: heart, apoptosis, protein gene product 9.5, atrial natriuretic peptide, vascular endothelial growth factor, chromogranin A, endothelin

Procedia PDF Downloads 295
416 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and can analyze and apply complicated algorithms such as localization, detection, and recognition in a real-time application, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning algorithm, until the emotional state was identified. In EEG signal processing, each EEG signal received in real time is translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT) technique in order to observe the frequency bands in each EEG signal. To appropriately characterize the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion identification on the edge can be employed in applications that rapidly expand its use in both research and industry.
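The FFT-band-features-plus-KNN pipeline described above can be sketched compactly. The sketch below is a minimal illustration, not the authors' implementation: the sampling rate, band edges, and toy training data are assumptions, and the synthetic one-second epoch stands in for a cEEGrid recording.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz; actual cEEGrid/OpenBCI setups vary
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_features(signal, fs=FS):
    """Mean spectral power per EEG band, from the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return [power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def knn_predict(train_X, train_y, x, k=3):
    """Plain K-Nearest Neighbors: majority label among the k closest points."""
    dists = sorted((np.linalg.norm(np.array(xi) - np.array(x)), yi)
                   for xi, yi in zip(train_X, train_y))
    top = [yi for _, yi in dists[:k]]
    return max(set(top), key=top.count)

# Synthetic one-second epoch dominated by 10 Hz (alpha-band) activity.
t = np.arange(FS) / FS
epoch = np.sin(2 * np.pi * 10 * t)
feats = band_features(epoch)
print(np.argmax(feats))  # -> 1, i.e. the alpha band dominates
```

In the real system the feature vectors (band power, standard deviation, mean) would come from live epochs, and the KNN training set would be labelled with arousal/valence states rather than the toy labels used here.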

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 107
417 Association of Physical Inactivity and Sedentary Behaviours with Risk of Hypertension among Sedentary Occupation Workers: A Cross-Sectional Study

Authors: Hanan Badr, Fahad Manee, Rao Shashidhar, Omar Bayoumy

Abstract:

Introduction: Hypertension is the major risk factor for cardiovascular diseases and stroke and a leading cause of disability-adjusted life years and mortality worldwide. Adopting an unhealthy lifestyle is thought to be associated with developing hypertension regardless of predisposing genetic factors. This study aimed to examine the association of recreational physical activity (RPA) and sedentary behaviors with the risk of hypertension among ministry employees, for whom occupational physical activity (PA) plays no role, and to scrutinize participants' time spent in RPA and sedentary behaviors on working days and weekend days. Methods: A cross-sectional study was conducted among 2,562 randomly selected employees working at ten randomly selected ministries in Kuwait. To obtain a representative sample, the proportional allocation technique was used to define the number of participants in each ministry. A self-administered questionnaire was used to collect data about participants' socio-demographic characteristics, health status, and 24-hour time use during a regular working day and a weekend day. The time use covered a list of 20 different activities practiced daily. The New Zealand Physical Activity Questionnaire-Short Form (NZPAQ-SF) was used to assess the level of RPA. The scale generates three categories according to the number of hours spent in RPA per week: relatively inactive, relatively active, and highly active. Gender-matched trained nurses performed anthropometric measurements (weight and height) and measured blood pressure (two readings) using an automatic blood pressure monitor (95% accuracy compared to a calibrated mercury sphygmomanometer). Results: Participants' mean age was 35.3±8.4 years, with almost equal gender distribution. About 13% of the participants were smokers, and 75% were overweight. Almost 10% reported doctor-diagnosed hypertension.
Among those who did not, the mean systolic blood pressure was 119.9±14.2 mmHg and the mean diastolic blood pressure was 80.9±7.3 mmHg. Moreover, 73.9% of participants were relatively physically inactive and 18% were highly active. Mean systolic and diastolic blood pressure showed a significant inverse association with the level of RPA (blood pressure means were 123.3/82.8 among the relatively inactive, 119.7/80.4 among the relatively active, and 116.6/79.6 among the highly active). Furthermore, RPA occupied 1.6% and 1.8% of working and weekend days, respectively, while sedentary behaviors (watching TV, using electronics for social media or entertainment, etc.) occupied 11.2% and 13.1%, respectively. Sedentary behaviors were significantly associated with high levels of systolic and diastolic blood pressure. Binary logistic regression revealed that physical inactivity (OR=3.13, 95% CI: 2.25-4.35) and sedentary behaviors (OR=2.25, 95% CI: 1.45-3.17) were independent risk factors for high systolic and diastolic blood pressure after adjustment for other covariates. Conclusions: Physical inactivity and a sedentary lifestyle were associated with a high risk of hypertension. Further research examining the independent role of RPA in improving blood pressure levels, as well as the cultural and occupational barriers to practicing RPA, is recommended. Policies promoting PA in the workplace should be enacted, as they might help decrease the risk of hypertension among sedentary occupation workers.
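
As an illustration of the effect sizes reported above, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2x2 exposure-outcome table. Note that the study's ORs come from multivariable logistic regression with covariate adjustment; the counts below are hypothetical, chosen only to reproduce a similar magnitude:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not taken from the study)
or_, lo, hi = odds_ratio_ci(120, 400, 60, 620)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```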

Keywords: physical activity, sedentary behaviors, hypertension, workplace

Procedia PDF Downloads 178
414 Photo-Fenton Degradation of Organic Compounds by Iron(II)-Embedded Composites

Authors: Marius Sebastian Secula, Andreea Vajda, Benoit Cagnon, Ioan Mamaliga

Abstract:

One of the most important classes of pollutants is represented by dyes. Their synthetic character and complex molecular structure make them stable and difficult to biodegrade in water. The treatment of wastewaters containing dyes, in order to separate or degrade them, is of major importance. Various techniques have been employed to remove and/or degrade dyes in water; advanced oxidation processes (AOPs) are known to be among the most efficient for dye degradation. The aim of this work is to investigate the efficiency of a low-cost iron-impregnated activated carbon Fenton-like catalyst for degrading organic compounds in aqueous solutions. In the present study, an anionic dye, Indigo Carmine, is considered as a model pollutant. Various AOPs are evaluated for the degradation of Indigo Carmine to establish the effect of the prepared catalyst. It was found that the iron(II)-embedded activated carbon composite significantly enhances the degradation of Indigo Carmine. Using the wet impregnation procedure, 5 g of L27 AC material were contacted with Fe(II) solutions of the FeSO4 precursor at a theoretical iron content of 1% in the resulting composite. The L27 AC was impregnated for 3 h at 45°C, then filtered, washed several times with water and ethanol, and dried at 55°C for 24 h. Thermogravimetric analysis, Fourier transform infrared spectroscopy, X-ray diffraction, and transmission electron microscopy were employed to investigate the structure, texture, and micromorphology of the catalyst. The total iron content in the obtained composites and the iron leakage were determined by a spectrophotometric method using phenanthroline. Photocatalytic tests were performed using a UV-Consulting Peschl laboratory reactor system. 
UV light irradiation tests were carried out to determine the performance of the prepared iron-impregnated composite towards the degradation of Indigo Carmine in aqueous solution under different conditions (17 W UV lamps, with and without in-situ generation of O3; different concentrations of H2O2, different initial concentrations of Indigo Carmine, different pH values, and different doses of NH4OH enhancer). The photocatalytic tests were performed after the adsorption equilibrium had been established, and the photo-Fenton degradation of IC was tested at different initial concentration values. The obtained results emphasize an enhancement of Indigo Carmine degradation in the case of the heterogeneous photo-Fenton process conducted with an O3-generating UV lamp in the presence of hydrogen peroxide. The investigated process obeys pseudo-first-order kinetics. Acknowledgments: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
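
Since the abstract states that the degradation obeys pseudo-first-order kinetics, the apparent rate constant can be estimated by a linear fit of ln(C0/C) versus time through the origin. The sketch below uses synthetic illustrative data, not the paper's measurements:

```python
import math

def pseudo_first_order_k(times, concs):
    """Fit ln(C0/C) = k*t through the origin by least squares;
    returns the apparent rate constant k (1/min)."""
    c0 = concs[0]
    y = [math.log(c0 / c) for c in concs]
    num = sum(t * yi for t, yi in zip(times, y))
    den = sum(t * t for t in times)
    return num / den

# Synthetic decay generated with k = 0.05 min^-1 (illustrative only)
t = [0, 10, 20, 30, 60]
c = [20.0 * math.exp(-0.05 * ti) for ti in t]
k = pseudo_first_order_k(t, c)
print(f"k_app = {k:.3f} min^-1")  # recovers 0.050
```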

Keywords: photodegradation, heterogeneous Fenton, anionic dye, carbonaceous composite, screening factorial design

Procedia PDF Downloads 257
413 Inclusion Body Refolding at High Concentration for Large-Scale Applications

Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening

Abstract:

High-level expression of proteins in bacteria often leads to the formation of insoluble protein aggregates, called inclusion bodies (IB). They contain mainly one type of protein and offer an easy and efficient route to purified protein. On the other hand, proteins in IB are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor, and the produced FTF was obtained mainly as IB. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis, and particle size was measured by dynamic light scattering. The particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration: as the initial concentration rose from 0.2 to 8.5 g/L, yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis. Often-used additives such as sucrose and glycerol had no effect on refolding yields. 
Buffer screening indicated a significant increase in the activity and also the temperature stability of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch and 90-74% in the continuous process. Further experiments showed that, in general, a higher ionic strength of the buffers had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved comparably high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5±0.3 x 10^4 units/g. However, detailed experiments with native FTF revealed a reaggregation of the molecules and a loss in specific activity depending on enzyme concentration and particle size. For that reason, we currently focus on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope by continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
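
The refolding yield used throughout the abstract (supernatant protein concentration after dialysis relative to before) reduces to a one-line calculation. The concentrations below are illustrative values consistent with the reported 67% and 12% batch yields, not measured data:

```python
def refolding_yield(c_before, c_after):
    """Refolding yield (%) from clear-supernatant protein
    concentrations (g/L) measured before and after dialysis."""
    return 100.0 * c_after / c_before

# Illustrative concentrations chosen to match the reported trend
print(round(refolding_yield(0.2, 0.134)))  # low initial load, batch
print(round(refolding_yield(8.5, 1.02)))   # high initial load, batch
```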

Keywords: dialysis, inclusion body, refolding, solubilization

Procedia PDF Downloads 294
412 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. The top-rated wells, ranked by ROP, are then identified; those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes, before drilling commences, by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. 
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP; a more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to a saving of at least 10% in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
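
A minimal sketch of the IDW interpolation step named above, conditioning a drilling parameter on offset-well samples by physical distance. The well depths and WOB values are hypothetical, chosen only for illustration:

```python
def idw(known_points, target, power=2):
    """Inverse Distance Weighting: estimate a parameter (e.g. WOB)
    at `target` from offset-well samples.
    known_points: list of (coords_tuple, value) pairs."""
    num = den = 0.0
    for coords, value in known_points:
        d = sum((c - t) ** 2 for c, t in zip(coords, target)) ** 0.5
        if d == 0:
            return value            # exact match: return the sample
        w = 1.0 / d ** power        # closer samples weigh more
        num += w * value
        den += w
    return num / den

# Hypothetical WOB (klbf) samples from offset wells at given depths (ft)
samples = [((5000,), 22.0), ((5200,), 25.0), ((5600,), 30.0)]
print(round(idw(samples, (5100,)), 2))
```

The conditioned mean sits between the two nearest samples, barely influenced by the distant one; the exponent `power` controls how quickly influence decays with distance.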

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 133
411 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

To simulate the infinite soil medium in the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate an accurate value when computing foundation stiffness for a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation in shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. For these purposes, a MATLAB code was written. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soil of different plasticity (herein, PI=13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. 
The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions are calculated for the reduced shear modulus through the developed MATLAB code; the embedment effect of the foundation is also considered in these analyses. The analysis results show that the strain induced in the soil depends on the extent of the earthquake demand: as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at a strain-compatible dynamic shear modulus for earthquake analysis including soil-structure interaction.
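
The dependence of foundation stiffness on the shear modulus can be illustrated with a Gazetas-type closed-form expression for the vertical static stiffness of a rigid rectangular surface foundation on a half-space. Since the stiffness is linear in G, a strain-dependent reduction factor scales it proportionally. This is a sketch with assumed soil values, not the paper's MATLAB implementation, and it covers only the static vertical term, not the dynamic modifiers or embedment corrections:

```python
def vertical_static_stiffness(G, nu, L, B):
    """Gazetas-type vertical static stiffness of a rigid rectangular
    surface foundation on a homogeneous half-space.
    L, B: footing half-length and half-width (L >= B), in m
    G: soil shear modulus (Pa), nu: Poisson's ratio."""
    chi = (4 * B * L) / (4 * L ** 2)   # Ab / (4 L^2) = B / L
    return (2 * G * L / (1 - nu)) * (0.73 + 1.54 * chi ** 0.75)

G_max = 60e6   # small-strain shear modulus, Pa (assumed value)
nu = 0.4       # Poisson's ratio for clay (assumed value)
red = 0.35     # strain-dependent reduction factor G/G_max (assumed)

# 2 m x 17 m footing below the moment frames: B = 1 m, L = 8.5 m
K_full = vertical_static_stiffness(G_max, nu, 8.5, 1.0)
K_red = vertical_static_stiffness(red * G_max, nu, 8.5, 1.0)
print(K_red / K_full)  # stiffness degrades in proportion to G
```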

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 270
410 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

The distribution of solid particles on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work; the CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. 
The GPU code was then used to simulate the particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2,4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between results obtained with LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
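
The Stokes number that parameterizes the deposition studies above is the ratio of the particle relaxation time to the jet time scale D/U. A minimal sketch with assumed particle and fluid properties (the values are not from the study):

```python
def stokes_number(rho_p, d_p, U, mu, D):
    """Particle Stokes number for an impinging jet:
    ratio of the Stokes relaxation time to the flow time scale D/U.
    rho_p: particle density (kg/m^3), d_p: particle diameter (m),
    U: jet velocity (m/s), mu: dynamic viscosity (Pa.s), D: jet diameter (m)."""
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)   # Stokes relaxation time
    return tau_p * U / D

# Illustrative case: 10 um glass beads in an air jet (assumed values)
st = stokes_number(rho_p=2500.0, d_p=10e-6, U=15.0, mu=1.8e-5, D=0.02)
print(f"St = {st:.3f}")
```

St << 1 means the particles track the flow and tend to follow the deflected jet; St >> 1 means they decouple from it and impact the surface ballistically.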

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 207
409 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors reflecting the individual, firm, organizational, industry, or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among the various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures (revenue, employment, and asset growth); secondly, to explore the prospects of financial indicators, as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth; and finally, to contribute to the understanding of the implications for research results and recommendations on growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises’ performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. 
The design and testing stages of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact, and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
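
A minimal sketch of the logistic-regression setup described above, predicting a binary growth outcome from financial indicators. The two toy indicators and the data points are invented for illustration; the study's models use a much richer set of predictors and a far larger sample:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=3000):
    """Plain gradient-descent logistic regression:
    P(growth = 1) = sigmoid(b + w.x)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # prediction error
            gb += err
            for j in range(d):
                gw[j] += err * xi[j]
        b -= lr * gb / n
        for j in range(d):
            w[j] -= lr * gw[j] / n
    return w, b

def predict(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: [liquidity ratio, profit margin] -> revenue growth (1/0)
X = [[1.8, 0.10], [2.1, 0.12], [2.5, 0.15],
     [0.9, 0.01], [1.0, 0.02], [0.7, -0.03]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [2.3, 0.14]), predict(w, b, [0.8, 0.00]))
```

Repeating the fit with the label redefined by a different growth measure (employment or assets instead of revenue) would yield a different coefficient vector, which is exactly the inconsistency the paper documents.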

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 253
408 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the hydrogen economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This applies in particular to the two-phase flow through the membrane, gas diffusion layers (GDL), and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility to analyse the processes within a PEMWE in a non-destructive, fast, and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out, each designed so that only specific physical processes occur and AE related solely to one process can be measured. A range of experimental conditions is therefore used to induce different flow regimes within the flow channels and GDL. The resulting AE data is first separated into events, which are defined by exceeding a noise threshold; each acoustic event consists of a number of consecutive peaks and ends when the wave falls below the noise threshold. For each of these acoustic events, the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values of all criteria. Principal Component Analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix. This provides an easy way of determining which criteria convey the most information about the acoustic data. 
Subsequently, the data are ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. Using the AE data produced beforehand allows a self-learning algorithm to be trained and an analytical tool to be developed for diagnosing different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
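
The PCA step described above can be sketched in a few lines: center the event vectors, build their covariance matrix, and extract the dominant eigenvector by power iteration. The two-feature toy events below are invented; the real event vectors carry six attributes:

```python
def top_principal_component(data, iters=200):
    """First principal component of row-vector `data` via power
    iteration on the sample covariance matrix (no libraries needed)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]   # converges to the top eigenvector
    return v

# Toy AE events: [max amplitude, duration] (normalized, illustrative)
events = [[0.9, 0.85], [0.8, 0.9], [0.2, 0.15], [0.1, 0.2], [0.5, 0.55]]
pc1 = top_principal_component(events)
print(pc1)  # roughly [0.71, 0.71]: the two attributes co-vary
```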

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 157
407 Economic Valuation of Emissions from Mobile Sources in the Urban Environment of Bogotá

Authors: Dayron Camilo Bermudez Mendoza

Abstract:

Road transportation is a significant source of externalities, notably in terms of environmental degradation and the emission of pollutants. These emissions adversely affect public health, attributable to criteria pollutants like particulate matter (PM2.5 and PM10) and carbon monoxide (CO), and also contribute to climate change through the release of greenhouse gases, such as carbon dioxide (CO2). It is, therefore, crucial to quantify the emissions from mobile sources and develop a methodological framework for their economic valuation, aiding in the assessment of associated costs and informing policy decisions. The forthcoming congress will shed light on the externalities of transportation in Bogotá, showcasing methodologies and findings from the construction of emission inventories and their spatial analysis within the city. This research focuses on the economic valuation of emissions from mobile sources in Bogotá, employing methods like hedonic pricing and contingent valuation. Conducted within the urban confines of Bogotá, the study leverages demographic, transportation, and emission data sourced from the Mobility Survey, official emission inventories, and tailored estimates and measurements. The use of hedonic pricing and contingent valuation methodologies facilitates the estimation of the influence of transportation emissions on real estate values and gauges the willingness of Bogotá's residents to invest in reducing these emissions. The findings are anticipated to be instrumental in the formulation and execution of public policies aimed at emission reduction and air quality enhancement. In compiling the emission inventory, innovative data sources were identified to determine activity factors, including information from automotive diagnostic centers and used vehicle sales websites. 
The COPERT model was utilized to ascertain emission factors, requiring diverse inputs such as data from the national transit registry (RUNT), OpenStreetMap road network details, climatological data from the IDEAM portal, and the Google API for speed analysis. Spatial disaggregation employed GIS tools and publicly available official spatial data. The development of the valuation methodology involved an exhaustive systematic review, utilizing platforms like the EVRI (Environmental Valuation Reference Inventory) portal and other relevant sources. The contingent valuation method was implemented via surveys in various public settings across the city, using a referendum-style approach for a sample of 400 residents. For the hedonic price valuation, an extensive database was developed, integrating data from several official sources and basing the analyses on the per-square-meter property values in each city block. The results are expected to be presented and published at the upcoming conference; the work integrates multidisciplinary knowledge and culminates in a master's thesis.
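
In the hedonic framework above, the marginal effect of a pollutant on property values is the slope of a regression of log price on pollutant concentration. A single-predictor OLS sketch with invented block-level data follows; a full hedonic model would also control for structural and neighborhood attributes:

```python
import math

def ols_fit(x, y):
    """Simple-regression OLS: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    slope = num / den
    return slope, my - slope * mx

# Hypothetical block-level data: PM2.5 (ug/m3) vs price per m2 (COP)
pm25 = [12, 18, 25, 30, 35, 40]
prices = [4.2e6, 4.0e6, 3.7e6, 3.5e6, 3.3e6, 3.1e6]
log_price = [math.log(p) for p in prices]

b, a = ols_fit(pm25, log_price)
# With log price as the outcome, 100*b is roughly the % price change
# per extra ug/m3 of PM2.5
print(f"each extra ug/m3 of PM2.5 ~ {100 * b:.2f}% change in price")
```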

Keywords: economic valuation, transport economics, pollutant emissions, urban transportation, sustainable mobility

Procedia PDF Downloads 59
406 Mixed Monolayer and PEG Linker Approaches to Creating Multifunctional Gold Nanoparticles

Authors: D. Dixon, J. Nicol, J. A. Coulter, E. Harrison

Abstract:

The ease with which they can be functionalized, combined with their excellent biocompatibility, makes gold nanoparticles (AuNPs) ideal candidates for various applications in nanomedicine; indeed, several promising treatments are currently undergoing human clinical trials (CYT-6091 and AuroShell). A successful nanoparticle treatment must first evade the immune system, then accumulate within the target tissue, before entering the diseased cells and delivering the payload. In order to create a clinically relevant drug delivery system, contrast agent, or radiosensitizer, it is generally necessary to functionalize the AuNP surface with multiple groups, e.g., polyethylene glycol (PEG) for enhanced stability, targeting groups such as antibodies, peptides for enhanced internalization, and therapeutic agents. Creating such complex systems and characterizing their biological response remains a challenge. The two commonly used methods to attach multiple groups to the surface of AuNPs are the creation of a mixed monolayer, and binding groups to the AuNP surface using a bi-functional PEG linker. While some excellent in-vitro and animal results have been reported for both approaches, further work is necessary to compare the two methods directly. In this study, AuNPs capped with both PEG and a receptor-mediated endocytosis (RME) peptide were prepared using both the mixed monolayer and the PEG linker approach. The PEG linker used was SH-PEG-SGA, which has a thiol at one end for AuNP attachment and an NHS ester at the other to bind to the peptide. The work builds upon previous studies carried out at the University of Ulster which investigated AuNP synthesis, the influence of PEG on stability in a range of media, and intracellular payload release. 18-19 nm citrate-capped AuNPs were prepared using the Turkevich method via the sodium citrate reduction of boiling 0.01 wt% chloroauric acid. 
To produce PEG-capped AuNPs, the required amount of PEG-SH (5000 Mw) or SH-PEG-SGA (3000 Mw, Jenkem Technologies) was added, and the solution was stirred overnight at room temperature. The RME (sequence: CKKKKKKSEDEYPYVPN, Biomatik) co-functionalised samples were prepared by adding the required amount of peptide to the PEG-capped samples and stirring overnight. The appropriate amounts of PEG-SH and RME peptide were added to the AuNPs to produce a mixed monolayer consisting of approximately 50% PEG and 50% RME. The PEG linker samples were first fully capped with bi-functional PEG before being capped with the RME peptide. An increase in diameter from 18-19 nm for the as-synthesized AuNPs to 40-42 nm after PEG capping was observed via DLS. The presence of PEG and RME peptide on both the mixed monolayer and PEG linker co-functionalized samples was confirmed by both FTIR and TGA. Bi-functional PEG linkers allow the entire AuNP surface to be capped with PEG, enabling in-vitro stability to be achieved using a lower molecular weight PEG. The approach also allows the entire outer surface to be coated with peptide or other biologically active groups, whilst offering the promise of enhanced biological availability. The effect of mixed monolayer versus PEG linker attachment on both stability and non-specific protein corona interactions was also studied.

Keywords: nanomedicine, gold nanoparticles, PEG, biocompatibility

Procedia PDF Downloads 341
405 Charged Amphiphilic Polypeptide Based Micelle Hydrogel Composite for Dual Drug Release

Authors: Monika Patel, Kazuaki Matsumura

Abstract:

Synthetic hydrogels, with their unique properties such as porosity, strength, and swelling in aqueous environments, are being used in many fields, from food additives to regenerative medicine, from diagnostics and pharmaceuticals to drug delivery systems (DDS). However, hydrogels also have limitations in terms of the homogeneity of drug distribution and the quantity of loaded drugs. As an alternative, polymeric micelles are extensively used as DDS; with their ease of self-assembly and distinct stability, they remarkably improve the solubility of hydrophobic drugs. However, combinational therapy is currently needed, and so are systems capable of releasing more than one drug. Controlling the release of each drug independently is one of the major challenges for DDS, one that simple DDS cannot meet. In this work, we present an amphiphilic polypeptide-based micelle-hydrogel composite to study dual drug release for wound healing purposes, using amphotericin B (AmpB) and curcumin as model drugs. Firstly, two differently charged amphiphilic polypeptide chains were prepared, namely poly-L-lysine-b-poly(phenylalanine) (PLL-PPA) and poly(glutamic acid)-b-poly(phenylalanine) (PGA-PPA), through ring-opening polymerization of amino acid N-carboxyanhydrides. These polymers readily self-assemble into micelles with the hydrophobic PPA block as the core and hydrophilic PLL/PGA as the shell, with an average diameter of about 280 nm. The micelles thus formed were loaded with the model drugs: the PLL-PPA micelles with curcumin and the PGA-PPA micelles with AmpB, by the dialysis method. Drug-loaded micelles showed a slight increase in mean diameter and were fairly stable in solution and in lyophilized form. To form the micelle-hydrogel composite, the drug-loaded micelles were dissolved and cross-linked using genipin. Genipin uses the free -NH2 groups of the PLL-PPA micelles to form a hydrogel network, with free PGA-PPA micelles trapped within the 3D scaffold formed. 
Different composites were prepared by varying the weight ratio of the two micelles; the resulting surface charge shifted from positive to negative as the PGA-PPA ratio increased. Composites with a high surface charge showed a burst release of drug in the initial phase, whereas composites with a relatively low net charge showed sustained release. The resultant surface charge of the composite can therefore be adjusted to tune its drug release profile. When the degree of cross-linking among the PLL-PPA particles was studied for its effect on dual drug release, increasing the cross-linking density increased the tendency of the PGA-PPA particles to burst-release AmpB, whereas, on the contrary, the PLL-PPA particles released curcumin more slowly. Two different pharmacokinetic profiles were thus obtained simply by changing the degree of cross-linking. In conclusion, a unique charged amphiphilic polypeptide-based micelle hydrogel composite for dual drug delivery was developed. This composite can be finely tuned to the required drug release profiles by changing simple parameters such as composition, cross-linking, and pH.

Keywords: amphiphilic polypeptide, dual drug release, micelle hydrogel composite, tunable DDS

Procedia PDF Downloads 208
404 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a sound path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating the dire energy and environmental situation. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats decision-making units (DMUs) as independent, which neglects the interaction between DMUs; ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis. For instance, renewable power generated in a certain region may benefit adjacent regions, while SO2 and CO2 emissions act in the opposite way. This study proposes a spatial network DEA (SNDEA) with a slack measure that captures the spatial spillover effects of inputs/outputs among DMUs. The approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables all show a significant spatial spillover effect. Compared with classic network DEA, the SNDEA results show an obvious difference, as confirmed by the global Moran's I index. From a dynamic perspective, the EE of the power system shows a visible surge from 2015 and then a sharp downtrend from 2019, following the same trend as the power transmission department.
This surge benefits from the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows a declining trend overall, which is reasonable once RE power is taken into consideration: installed RE capacity in 2020 was 4.40 times that of 2014, while RE generation was only 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the cost of absorbing renewable power rises rapidly as RE generation increases. These two aspects explain the declining EE of the generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industries and countries.
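The global Moran's I statistic used above to test for spatial spillovers is standard; as a minimal sketch (the efficiency scores and the binary contiguity weight matrix below are invented purely for illustration, not taken from the study's data):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I: spatial autocorrelation of values x
    under an n x n spatial weight matrix w (zero diagonal)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = n * (w * np.outer(z, z)).sum()   # weighted cross-products
    den = w.sum() * (z * z).sum()
    return num / den

# Toy example: four regions on a line; neighbours share a border.
x = [1.0, 2.0, 3.0, 4.0]                   # hypothetical efficiency scores
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))            # → 0.333 (positive autocorrelation)
```

A value near +1 indicates that similar scores cluster in neighbouring regions, near -1 that dissimilar scores neighbour each other, and near the expectation -1/(n-1) that no spatial pattern is present.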

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 110
403 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: slow, unreliable mechanical spatial scanning and the rather wide linewidth of conventional lasers, which limits speed-measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect, and the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust comb generation at 1400-1700 nm with 150 GHz or 1 THz line spacing and measured sub-1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser.
A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency pump lasers. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with a velocity resolution down to 16 nm/s, while the free-running (no-SIL) diode laser only allowed 160 nm/s with good accuracy. The results agree with our estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
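To give a feel for why linewidth limits velocity resolution, the radial velocity of a target maps to a Doppler shift of the reflected light, Δf = 2v/λ, so the smallest resolvable velocity scales with the smallest resolvable frequency offset. A minimal sketch (the 1550 nm carrier is an assumption typical of the telecom band; the numbers are illustrative only):

```python
# Doppler relation between radial velocity and optical frequency shift.
lam = 1550e-9  # assumed carrier wavelength, m (telecom C-band)

def doppler_shift(v, lam=lam):
    """Frequency shift (Hz) of light reflected from a target moving at v m/s."""
    return 2.0 * v / lam

def min_velocity(delta_f, lam=lam):
    """Smallest resolvable velocity (m/s) for a frequency resolution delta_f (Hz)."""
    return lam * delta_f / 2.0

# A 16 nm/s velocity resolution corresponds to resolving a sub-Hz shift:
print(f"{doppler_shift(16e-9):.3f} Hz")  # → 0.021 Hz
```

Resolving a ~0.02 Hz shift on a ~193 THz carrier is only plausible when the probe laser line is extremely narrow, which is what self-injection locking to the microresonator provides.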

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 73
402 Numerical Study of Leisure Home Chassis under Various Loads by Using Finite Element Analysis

Authors: Asem Alhnity, Nicholas Pickett

Abstract:

The leisure home industry is experiencing an increase in sales due to the rising popularity of staycations, together with customer demand for improvements in thermal and structural behaviour. Existing standards and codes of practice outline the requirements for leisure home design, but there is a lack of expertise in applying Finite Element Analysis (FEA) to complex structures in this industry. As a result, manufacturers rely on standardized design approaches, which often lead to over-engineered or inadequately designed products. This research addresses the issue by comprehensively analysing the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, including both the habitation structure and the chassis, the study seeks to develop a novel framework for designing and analysing leisure homes. The objectives include material reduction, enhanced structural stability, the resolution of existing design issues, and the development of innovative modular and wooden chassis designs. The methodology is quantitative: FEA simulations are run on a numerical model of the leisure home chassis, and different load scenarios are applied to assess its stress and deflection performance under various conditions. The model uses flexible mesh sizing, with fine meshes to calculate small deflections around doors and windows and large meshes for macro deflections; this approach minimizes run-time while still providing meaningful stresses and deflections.
The study also investigates the limitations of the popular approach of applying FEA only to the chassis while replacing the habitation structure with a distributed load. The findings indicate that this approach overlooks the strengthening contributed by the habitation structure. By employing FEA on the entire unit, it is possible to optimize stress and deflection performance while achieving material reduction and enhanced structural stability. The study also introduces innovative modular and wooden chassis designs, which show promising weight reduction compared with the existing heavily fabricated lattice chassis. In conclusion, this research provides valuable insights into the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, the study demonstrates the importance of considering the strengthening contributed by the habitation structure in chassis design. The findings contribute to advancements in material reduction, structural stability, and overall performance optimization, and the novel framework developed here promotes sustainability, cost-efficiency, and innovation in leisure home design.

Keywords: static homes, caravans, motor homes, holiday homes, finite element analysis (FEA)

Procedia PDF Downloads 102
401 Physical Activity Based on Daily Step-Count in Inpatient Setting in Stroke and Traumatic Brain Injury Patients in Subacute Stage Follow Up: A Cross-Sectional Observational Study

Authors: Brigitte Mischler, Marget Hund, Hilfiker Roger, Clare Maguire

Abstract:

Background: Brain injury is one of the main causes of permanent physical disability, and improving walking ability is one of the most important goals for patients. After inpatient rehabilitation, most patients do not receive long-term rehabilitation services. Physical activity is important for preserving the health of the musculoskeletal system, the circulatory system, and the psyche. Objective: This follow-up study measured physical activity in subacute patients after traumatic brain injury and stroke, comparing the daily step count in the inpatient setting with the step count one year after the event in the outpatient setting. Methods: This follow-up study is a cross-sectional observational study with 29 participants. Daily step count over a seven-day period one year after the event was measured with the StepWatch™ ankle sensor. The number of steps taken one year after the event in the outpatient setting was compared with the number taken during the inpatient stay, and we evaluated whether participants reached the recommended target value. Correlations between step count and exit domain, FAC level, walking speed, light touch, joint position sense, cognition, and fear of falling were calculated. Results: The median (IQR) daily step count of all patients was 2512 (568.5, 4070.5). At follow-up, the number of steps improved to 3656 (1710, 5900); the average difference was 1159 (-2825, 6840) steps per day. Participants who were unable to walk independently (FAC 1) improved from 336 (5, 705) to 1808 (92, 5354) steps per day. Participants able to walk with assistance (FAC 2-3) walked 700 (31, 3080) steps and, at follow-up, 3528 (243, 6871). Independent walkers (FAC 4-5) walked 4093 (2327, 5868) steps and achieved 3878 (777, 7418) daily steps at follow-up. This value is significantly below the recommended guideline.
Step count at follow-up showed moderate to high, statistically significant correlations: positive with FAC score, FIM total score, and walking speed, and negative with fear of falling. Conclusions: Only 17% of all participants achieved the recommended daily step count one year after the event. Better inpatient and outpatient strategies are needed to improve physical activity. In everyday clinical practice, pedometers and diaries with explicit targets should be used, and a concrete weekly schedule should be drawn up together with the patient, relatives, or nursing staff after discharge. This should include the daily self-training taught during the inpatient stay. A good connection to social life (a professional role or a daily task/activity) can be an important part of improving daily activity. Further research should evaluate strategies to increase daily step counts in both inpatient and outpatient settings.
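The median (IQR) summaries reported above are robust to the skewed, outlier-prone distributions typical of step-count data. A minimal sketch of how such a summary is computed (the week of daily counts below is hypothetical, not from the study):

```python
import numpy as np

def median_iqr(steps):
    """Median and interquartile range (Q1, Q3) of daily step counts."""
    q1, med, q3 = np.percentile(steps, [25, 50, 75])
    return med, (q1, q3)

# Hypothetical seven-day wear period for one participant
week = [2100, 2600, 1900, 3300, 2500, 2800, 2400]
med, (q1, q3) = median_iqr(week)
print(med, q1, q3)  # → 2500.0 2250.0 2700.0
```

Reporting the IQR alongside the median, as the study does, conveys the spread of activity without letting a single unusually active or inactive day dominate the summary.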

Keywords: neurorehabilitation, stroke, traumatic brain injury, steps, stepcount

Procedia PDF Downloads 16
400 Effect Of Selected Food And Nutrition Environments On Prevalence Of Cardio-Metabolic Risk Factors With Emphasis On Worksite Environment In Urban Delhi

Authors: Deepa Shokeen, Bani Tamber Aeri

Abstract:

Food choice is a complex process influenced by the interplay of multiple factors, including physical, socio-cultural, and economic factors, which together comprise macro- and micro-level food environments. While a clear understanding of the relationship between what we eat and the environmental context in which these food choices are made is still needed, it has now been shown that food environments play a significant role in the obesity epidemic and in rising cardio-metabolic risk factors. Evidence from other countries indicates that the food environment may strongly influence the prevalence of obesity and cardio-metabolic risk factors among young adults. In the Indian context, data do indicate associations between sedentary lifestyle, stress, and faulty diets, but very little evidence supports the role of the food environment in influencing cardio-metabolic health among employed adults. This research is therefore needed to establish how different environments affect different individuals, who interact with their environment on a number of levels. Methodology: The objective of the present study is to assess the effect of selected food and nutrition environments, with emphasis on the worksite environment, and to analyse their impact on the food choices and dietary behaviour of employees (25-45 years of age) of the organizations under study. Various worksites will be randomly selected from Delhi and the NCR, and the study will be conducted in two phases. In phase I, information will be obtained on the employees' socio-demographic profile and the various factors influencing their food choices, including the most commonly consumed foods and the most frequently visited eating outlets in and around the workplace. Data will also be gathered on anthropometry (height, weight, waist circumference), biochemical parameters (lipid profile and fasting glucose), blood pressure, and dietary intake.
Based on the findings of phase I, a list of the most frequently visited eating outlets in and around the workplace will be prepared in phase II. These outlets will then be subjected to a nutrition environment assessment survey (NEMS). On the basis of the information gathered in both phases, the influence of selected food and nutrition environments on food choice, dietary behaviour, and the prevalence of cardio-metabolic risk factors among employed adults will be assessed. Expected outcomes: The proposed study will ascertain the impact of selected food and nutrition environments on the food choices and dietary intake of working adults, since it is important to learn how these environments shape adults' eating perceptions and health behaviour. In addition, anthropometric, blood pressure, and biochemical assessments of the subjects will establish the prevalence of cardio-metabolic risk factors. If the findings indicate that the work environment, where most of these young adults spend the productive hours of their day, influences their health, then steps may be needed to make these environments more conducive to health.

Keywords: food and nutrition environment, cardio-metabolic risk factors, India, worksite environment

Procedia PDF Downloads 282
399 Participatory Action Research with Social Workers: The World Café Method to Share Critical Reflections and Possible Solutions on Working Practices in Migration Contexts

Authors: Ilaria Coppola, Davide Lacqua, Nadia Ranìa

Abstract:

Over the past two decades, migration has gained central importance in the global landscape. Europe hosts the largest number of migrants, totaling 92.9 million people, approximately 37.4 million of whom are regular residents within the European Union's borders. Reception services and their different modes of management have received increasing attention precisely because of the complexity of the phenomenon, which necessarily impacts the wider community. Indeed, opening a reception center in an area entails major challenges for that context, for the community that inhabits it, and for the people who use the service. Questioning the strategies needed to offer a functional reception service means listening to the different actors involved, who face the difficulties of working in the field on a daily basis. Recognizing the importance of the professionals who work closely with migrants, each with their own specific experiences, has led researchers to study and analyze the different types of reception centers and their management, and to develop intervention models and best practices in various countries. However, research from this perspective is still limited, especially in Italy. Within this theoretical framework, this study aims to bring out, through an innovative qualitative tool, the World Café, the work experiences of 29 social workers employed in shelters in the Italian context. Most of the participants were female and lived in the north-western regions of Italy. Through this tool, the aim was to elicit and share reflections on the critical issues encountered in working in reception centers, with a view to identifying possible solutions for better management of services. The World Café is a tool used in participatory action research that promotes dialogue among participants through the sharing of reflections and ideas.
Starting from critical reflections, participants are invited to identify and share possible solutions to provide a more functional service with benefits for the entire community. Through the innovative World Café technique, this research therefore aims to promote critical thinking processes that can help participants find solutions to introduce into their work contexts or propose to decision-makers. Specifically, the findings shed light on several issues, including complex bureaucratic procedures, insufficient project planning, and inefficiencies in the services provided to migrants; these concerns collectively contribute to what participants perceive as a disorganized and uncoordinated system. The study also explores potential solutions that promote more efficient networking practices, coordinated project management, and a more positive approach to cultural diversity. The main results are discussed with a focus on the critical reflections and the possible solutions identified.

Keywords: participatory action research, world café method, reception services, migration contexts, social workers, Italy

Procedia PDF Downloads 67
398 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms

Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan

Abstract:

Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g., attention and inhibitory control) and is considered a hallmark of clinical disorders such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as with poorer task accuracy. Furthermore, the strength of functional connectivity across brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as a useful indicator of neurological impairment, the brain substrates associated with it remain relatively poorly defined, particularly in healthy samples. Method: We used the intra-individual coefficient of variation (ICV) of "Go" responses on the Stop Signal Task as the index of RT variability. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared with a matched non-symptomatic control group (n=80). Brain activity during successful and unsuccessful inhibitions, and gray matter volume, were related to the ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we examined functional connectivity across brain networks and quantified both positive and negative correlations during "Go" responses on the Stop Signal Task.
Results: Higher ICV was associated with increased structural and functional brain activation in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability, and suggest that structural and functional activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with ADHD symptoms but without a clinical diagnosis and typically developing controls. These findings shed light on specific functional and structural brain regions implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
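The ICV used throughout is conventionally the standard deviation of correct Go-trial RTs divided by their mean, so that participants with the same average speed can still differ in consistency. A minimal sketch (the two RT series are hypothetical, invented for illustration):

```python
import numpy as np

def icv(rts):
    """Intra-individual coefficient of variation of reaction times:
    sample SD of Go RTs divided by their mean (dimensionless)."""
    rts = np.asarray(rts, dtype=float)
    return rts.std(ddof=1) / rts.mean()

# Two hypothetical participants with the same mean RT (~400 ms)
steady   = [390, 400, 410, 395, 405]
variable = [300, 500, 350, 450, 400]
print(icv(steady) < icv(variable))  # → True
```

Normalizing by the mean is what makes ICV comparable across participants and groups that differ in overall response speed.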

Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity

Procedia PDF Downloads 257
397 Assessment of Environmental Impact for Rice Mills in Burdwan District: Special Emphasis on Groundwater, Surface Water, Soil, Vegetation and Human Health

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay

Abstract:

Rice milling is an important activity in the agricultural economy of India, particularly in the Burdwan district, yet its environmental impact is frequently underestimated. Given the importance of rice milling in the local economy and food supply, the environmental impact of rice mills in the Burdwan district, where more than fifty (50) mills are in operation, is a major source of concern. The goal of this study is to investigate the effects of rice mills on several environmental components, with particular emphasis on groundwater, surface water, soil, and vegetation. The research comprises a thorough review of numerous rice mills located around the district, utilising both qualitative and quantitative approaches. Water samples taken from wells near rice mills will be tested for groundwater quality, with an emphasis on heavy metal pollution and pollutant concentrations. Surface water evaluations will involve monitoring rice mill discharge into neighbouring bodies of water and studying the potential impact on aquatic ecosystems. Soil samples from the surrounding areas will be taken to examine changes in soil characteristics, nutrient content, and potential contamination from milling waste disposal. Vegetation studies will investigate the effects of emissions and effluents on plant health and biodiversity in the region. The findings will shed light on the extent of environmental degradation caused by rice mills in the Burdwan district and provide valuable insight into the effects of such operations on water, soil, and vegetation. They will also aid the development of appropriate legislation and regulations to reduce negative environmental repercussions and promote sustainable practices in the rice milling business. In some cases, heavy metals have been linked to health problems.
Heavy metals (As, Cd, Cu, Pb, Cr, Hg) are linked to skin, lung, brain, kidney, liver, spleen, cardiovascular, haematological, immunological, gastrointestinal, testicular, pancreatic, metabolic, and bone problems. This study therefore contributes to a better knowledge of industrial environmental impacts and establishes a framework for future studies aimed at developing a more ecologically balanced and resilient Burdwan district. The following recommendations are offered for reducing the environmental impact of rice mills: adequate waste management systems must be established to keep untreated effluents out of bodies of water; environmentally friendly milling processes should be adopted to reduce pollution; rice mill by-products should be used as fertiliser only in a controlled and appropriate manner to avoid soil contamination; and groundwater, surface water, soil, and vegetation should be regularly monitored in order to track and adapt to environmental changes. By adhering to these principles, the rice milling industry of the Burdwan district may achieve long-term growth while lowering its environmental impact and safeguarding the environment for future generations.

Keywords: groundwater, environmental analysis, biodiversity, rice mill, waste management, diseases, industrial impact

Procedia PDF Downloads 97
396 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is the production region of Port wine. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes organized into agricultural terraces, which have undergone an intense and deep transformation in order to mechanize the work. The old terrace system, supported by vertical stone walls, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to it. The priority in this evaluation is the use of physically based mathematical models, together with a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (ф), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas computed by the Multiple Flow Direction method (MFD). A terraced area can be analysed by this model only with very detailed information representative of the terrain morphology, on which both the slope angle and the contributing areas depend. We achieved this using digital elevation models (DEMs) of great resolution (pixels with 40 cm sides), derived from a set of photographs taken by a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM, supported by the statement that the interflow, although not coincident with the superficial flow, shows important similarities to it.
Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation: the analysis showed R² values of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM of 1 m resolution was taken as the base to model the real internal flow; that is, we assumed that the contributing area modelled by MFD at 1 m resolution is representative of the internal flow of the area. To solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken by a flight at 5 km altitude. Using combinations of these maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from the DEM of 40 cm resolution and the MFD map from the DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, an Accuracy (ACC) of 0.53, a Precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
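The contingency-matrix scores reported above follow the standard definitions from the four cell counts (true/false positives and negatives). A minimal sketch, with hypothetical pixel counts invented only to roughly echo the reported TPR, FPR, and ACC (unstable pixels are rare, which is why precision can be tiny even when TPR is high):

```python
def validation_metrics(tp, fp, fn, tn):
    """Standard contingency-matrix scores for a binary instability map."""
    tpr = tp / (tp + fn)                  # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)                  # False Positive Rate
    acc = (tp + tn) / (tp + fp + fn + tn) # Accuracy
    ppv = tp / (tp + fp)                  # Precision
    return tpr, fpr, acc, ppv

# Hypothetical cell counts for a map dominated by stable pixels
tpr, fpr, acc, ppv = validation_metrics(tp=97, fp=470_000, fn=3, tn=530_000)
print(round(tpr, 2), round(fpr, 2), round(acc, 2))  # → 0.97 0.47 0.53
```

With such an imbalanced class distribution, TPR and the TPR/FPR ratio are far more informative than accuracy alone, which is largely driven by the abundant stable pixels.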

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 178
395 The Academic Experience of Vocational Training Teachers

Authors: Andréanne Gagné, Jo Anni Joncas, Éric Tendon

Abstract:

Teaching in vocational training requires an excellent mastery of the trade being taught, but also solid professional skills in pedagogy. Teachers are typically recruited on the basis of their trade expertise and do not necessarily have training or experience in pedagogy. To counter this gap, the Ministry of Education (Québec, Canada) requires them to complete a 120-credit university program, in addition to their teaching duties, to obtain their teaching certificate. This training was rarely planned in the teacher's life course, and each teacher approaches it differently: some are enthusiastic, but many feel reluctance, discouragement, and even frustration at the idea of committing to a program that takes an average of 10 years to complete. Quebec, however, is experiencing an unprecedented shortage of teachers, and the perseverance of vocational teachers in their careers requires special attention because of their specific conditions of entry into the profession. Our research examines the perceptions that vocational teachers in training have of their academic experience in pre-service teacher education. It differs from previous research in that it focuses on the influence of the academic experience on the experience of teaching employment. The goal is that, by better understanding the university experience of teachers in vocational education, we can identify strategies to support their studies and their teaching. To do this, the research draws on the theoretical framework of the sociology of experience, which allows us to study how these "teacher-students" give meaning to their university program in articulation with their jobs, according to three logics of action. The logic of integration is based on the process of socialization, where action is preceded by the internalization of the values, norms, and cultural models associated with the training context.
The logic of strategy refers to the usefulness of the experience: the individual constructs a form of rationality according to his or her objectives, resources, social position, and situational constraints. The logic of subjectivation refers to reflexive activities aimed at solving problems and making choices. These logics served as a framework for the development of an online questionnaire. Three hundred respondents, newly enrolled in an undergraduate teaching program (bachelor's degree in vocational education), described their academic experience. This paper relates qualitative data (open-ended questions), analysed with an interpretive repertory approach, to the descriptive data (closed-ended questions) that emerged. The results shed light on how the respondents perceive themselves as teachers and as students, their perceptions of university training and the support offered, and the place that training occupies in their professional path. Indeed, their professional and academic paths are inextricably linked, and it seems essential to take them into account simultaneously to better meet their needs and foster the development of their expertise in pedagogy. The discussion focuses on the strengths and limitations of university training from the perspective of the logics of action. The results also suggest support strategies that can be implemented to better support the integration and retention of student teachers in vocational education.

Keywords: teacher, vocational training, pre-service training, academic experience

Procedia PDF Downloads 115