Search results for: component
476 Implementing Mindfulness into Wellness Plans: Assisting Individuals with Substance Abuse and Addiction
Authors: Michele M. Mahr
Abstract:
The purpose of this study is to educate, inform, and facilitate scholarly discussion regarding the implementation of mindfulness techniques when working with individuals with substance use disorder (SUD) or addictive behaviors in mental health. Mindfulness can be understood as present-moment, non-judgmental awareness, initiated by concentrated attention that is non-reactive and as openhearted as possible. Individuals with SUD or addiction are typically challenged by triggers, environmental situations, cravings, or social pressures which may deter them from remaining abstinent from their drug of choice or addictive behavior. Mindfulness is also recognized as one of the cognitive and behavioral treatment approaches; it is both a physical and mental practice that enables individuals to become aware of internal situations and experiences with undivided attention. As such, mindfulness may be an effective strategy for individuals to employ during these experiences. This study will show how mental health practitioners and addiction counselors may find mindfulness to be an essential component of increasing wellness when working with individuals seeking mental health treatment. In essence, mindfulness is simply the ability to know what is actually happening as it occurs and what one is experiencing in the moment. In the context of substance abuse and addiction, individuals may employ breathing techniques, meditation, and cognitive restructuring to become aware of present-moment experiences. Furthermore, the notion of mindfulness has been directly connected to the development of neural pathways; the creation of these pathways leads to new thoughts, which in turn leads to new coping strategies and adaptive behaviors. Mindfulness strategies can assist individuals in connecting the mind with the body, allowing the individual to remain centered and focused. All of the above are vital components of recovery during substance abuse and addiction treatment. A variety of therapeutic modalities apply the key components of mindfulness, such as Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy for depression (MBCT). This study provides an overview of both MBSR and MBCT in relation to treating individuals with substance abuse and addiction. The author also provides strategies for readers to employ when working with clients. Lastly, the author will create and foster a safe space for discussion and engaging conversation among participants to ask questions, share perspectives, and be educated on the numerous benefits of mindfulness within wellness.
Keywords: mindfulness, wellness, substance abuse, mental health
475 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction
Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso
Abstract:
The rampant and unintended spread of urban areas has increased the artificial component of land cover in the countryside and brought forth the urban heat island (UHI). This has paved the way to a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At present, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). This study therefore applies Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map an automated LST model and crop-level land cover classification at a local scale, through a theoretical and algorithm-based approach in which the data analysis is subjected to a multi-dimensional image object model. The paper also explores the relationship between the derived LST and the land cover classification. The results of the presented model show that comprehensive data analysis and GIS functionalities, integrated with an object-based image analysis (OBIA) approach, can automate complex map production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI. It is worth noting that the combined application of remote sensing, geographic information tools, mathematical morphology, and data analysis can provide microclimate perception, awareness, and improved decision-making for land use planning and characterization at the local and neighborhood scale. As a result, it can facilitate problem identification and support mitigation and adaptation more efficiently.
Keywords: LiDAR, OBIA, remote sensing, local scale
474 Genetic Advance versus Environmental Impact toward Sustainable Protein, Wet Gluten and Zeleny Sedimentation in Bread and Durum Wheat
Authors: Gordana Branković, Dejan Dodig, Vesna Pajić, Vesna Kandić, Desimir Knežević, Nenad Đurić
Abstract:
The wheat grain quality properties are influenced by genotype, environmental conditions and genotype × environment interaction (GEI). The increasing request of more nutritious wheat products will direct future breeding programmes. Therefore, the aim of investigation was to determine: i) variability of the protein content (PC), wet gluten content (WG) and Zeleny sedimentation volume (ZS); ii) components of variance, heritability in a broad sense (hb2), and expected genetic advance as percent of mean (GAM) for PC, WG, and ZS; iii) correlations between PC, WG, ZS, and most important agronomic traits; in order to assess expected breeding success versus environmental impact for these quality traits. The plant material consisted of 30 genotypes of bread wheat (Triticum aestivum L. ssp. aestivum) and durum wheat (Triticum durum Desf.). The trials were sown at the three test locations in Serbia: Rimski Šančevi, Zemun Polje and Padinska Skela during 2010-2011 and 2011-2012. The experiments were set as randomized complete block design with four replications. The plot consisted of five rows of 1 m2 (5 × 0.2 m × 1 m). PC, WG and ZS were determined by the use of Near infrared spectrometry (NIRS) with the Infraneo analyser (Chopin Technologies, France). PC, WG and ZS, in bread wheat, were in the range 13.4-16.4%, 22.8-30.3%, and 39.4-67.1 mL, respectively, and in durum wheat, in the range 15.3-18.1%, 28.9-36.3%, 37.4-48.3 mL, respectively. The dominant component of variance for PC, WG, and ZS, in bread wheat, was genotype with the genetic variance/GEI variance (VG/VG × E) relation of 3.2, 2.9 and 1.0, respectively, and in durum wheat was GEI with the VG/VG × E relation of 0.70, 0.69 and 0.49, respectively. hb2 and GAM values for PC, WG and ZS, in bread wheat, were 94.9% and 12.6%, 93.7% and 18.4%, and 86.2% and 28.1%, respectively, and in durum wheat, 80.7% and 7.6%, 79.7% and 10.2%, and 74% and 11.2%, respectively. The most consistent through six environments, statistically significant correlations, for bread wheat, were between PC and spike length (-0.312 to -0.637); PC, WG, ZS and grain number per spike (-0.320 to -0.620; -0.369 to -0.567; -0.301 to -0.378, respectively); PC and grain thickness (0.338 to 0.566), and for durum wheat, were between PC, WG, ZS and yield (-0.290 to -0.690; -0.433 to -0.753; -0.297 to -0.660, respectively); PC and plant height (-0.314 to -0.521); PC, WG and spike length (-0.298 to -0.597; -0.293 to -0.627, respectively); PC, WG and grain thickness (0.260 to 0.575; 0.269 to 0.498, respectively); PC, WG and grain vitreousness (0.278 to 0.665; 0.357 to 0.690, respectively). Breeding success can be anticipated for ZS in bread wheat due to coupled high values for hb2 and GAM, suggesting existence of additive genetic effects, and also for WG in bread wheat, due to very high hb2 and medium high GAM. The small, and medium, negative correlations between PC, WG, ZS, and yield or yield components, indicate difficulties to select simultaneously for high quality and yield, depending on linkage for particular genetic arrangements to be broken by recombination.Keywords: bread and durum wheat, genetic advance, protein and wet gluten content, Zeleny sedimentation volume
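For readers who want to reproduce the variance-component arithmetic behind figures like these, a minimal sketch follows. It assumes the standard entry-mean formulas for a multi-environment trial, hb2 = VG / (VG + VGE/e + Ve/(r*e)) and GAM = 100 * k * hb2 * sigma_p / mean with k = 2.06 for 5% selection intensity; the variance values below are illustrative placeholders, not the study's data.

```python
# Illustrative calculation of broad-sense heritability (hb2) and genetic advance
# as percent of mean (GAM) from variance components of a multi-environment trial.
# Formulas are the standard entry-mean ones; the numbers are placeholders, not study data.

def heritability_and_gam(v_g, v_ge, v_e, n_env, n_rep, trait_mean, k=2.06):
    """v_g: genotypic variance, v_ge: G x E variance, v_e: error variance,
    n_env: environments, n_rep: replications, k: selection differential (5%)."""
    v_p = v_g + v_ge / n_env + v_e / (n_rep * n_env)   # phenotypic variance of genotype means
    hb2 = v_g / v_p                                     # broad-sense heritability
    ga = k * hb2 * v_p ** 0.5                           # expected genetic advance
    gam = 100.0 * ga / trait_mean                       # genetic advance as percent of mean
    return hb2 * 100.0, gam

# Hypothetical protein-content variance components: six environments, four replications
hb2_pct, gam_pct = heritability_and_gam(v_g=0.60, v_ge=0.19, v_e=0.30,
                                         n_env=6, n_rep=4, trait_mean=15.0)
print(f"hb2 = {hb2_pct:.1f} %, GAM = {gam_pct:.1f} %")
```

With these placeholder components the sketch returns hb2 of roughly 93% and GAM of roughly 10%, i.e. the same order as the bread wheat values reported above; coupled high hb2 and GAM is what signals usable additive variation for selection.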
473 Dimensionality Control of Li Transport by MOFs Based Quasi-Solid to Solid Electrolyte
Authors: Manuel Salado, Mikel Rincón, Arkaitz Fidalgo, Roberto Fernandez, Senentxu Lanceros-Méndez
Abstract:
Lithium-ion batteries (LIBs) are a promising technology for energy storage, but they suffer from safety concerns due to the use of flammable organic solvents in their liquid electrolytes. Solid-state electrolytes (SSEs) offer a potential solution to this problem, but they have their own limitations, such as poor ionic conductivity and high interfacial resistance. The aim of this research was to develop a new type of SSE based on metal-organic frameworks (MOFs) and ionic liquids (ILs). MOFs are porous materials with high surface area and tunable electronic properties, making them ideal for use in SSEs. ILs are liquid electrolytes that are non-flammable and have high ionic conductivity. A series of MOFs were synthesized, and their electrochemical properties were evaluated. The MOFs were then infiltrated with ILs to form a quasi-solid gel and solid xerogel SSEs. The ionic conductivity, interfacial resistance, and electrochemical performance of the SSEs were characterized. The results showed that the MOF-IL SSEs had significantly higher ionic conductivity and lower interfacial resistance than conventional SSEs. The SSEs also exhibited excellent electrochemical performance, with high discharge capacity and long cycle life. The development of MOF-IL SSEs represents a significant advance in the field of solid-state electrolytes. The high ionic conductivity and low interfacial resistance of the SSEs make them promising candidates for use in next-generation LIBs. The data for this research was collected using a variety of methods, including X-ray diffraction, scanning electron microscopy, and electrochemical impedance spectroscopy. The data was analyzed using a variety of statistical and computational methods, including principal component analysis, density functional theory, and molecular dynamics simulations. The main question addressed by this research was whether MOF-IL SSEs could be developed that have high ionic conductivity, low interfacial resistance, and excellent electrochemical performance. The results of this research demonstrate that MOF-IL SSEs are a promising new type of solid-state electrolyte for use in LIBs. The SSEs have high ionic conductivity, low interfacial resistance, and excellent electrochemical performance. These properties make them promising candidates for use in next-generation LIBs that are safer and have higher energy densities.Keywords: energy storage, solid-electrolyte, ionic liquid, metal-organic-framework, electrochemistry, organic inorganic plastic crystal
472 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids
Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran
Abstract:
As the world is moving towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to the system reliability and security. With the transformers being an essential component of a power system, low health index of transformers poses greater risks for safe and reliable operation. Therefore, to meet the rising demands of the power system and transformer performance, researchers are being prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nano-technology in conjunction with bio-degradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to the conventional mineral oils, owing to their numerous inherent benefits; namely, higher fire and flashpoints, and being environment-friendly in nature. Moreover, the addition of nanoparticles in the dielectric fluid further serves to improve the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in COMSOL Multiphysics environment, and a 2D space dimension, three different oil samples have been modelled, and the electric field distribution is computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample has been modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distance (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and have been modelled as alumina (Al₂O₃). The geometry has been modelled according to IEC standard 60897, with a standard electrode gap distance of 25 mm. For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08 ×10⁶ V/m, 5.11×10⁶ V/m and 5.62×10⁶ V/m, respectively. It is observed that for the unmodified samples, vegetable oils have a greater dielectric strength as compared to the conventionally used mineral oils because of their higher flash points and higher values of relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits the streamer propagation inside the dielectric medium and hence, serves to improve the dielectric properties of the medium.Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation
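As a sanity check on the reported field values, the uniform-field estimate E = V/d for the standard 25 mm electrode gap can be compared with the FEM maxima quoted above; the short sketch below does only that back-of-the-envelope comparison and is not part of the COMSOL model.

```python
# Back-of-the-envelope check: uniform-field estimate E = V/d for the IEC 60897
# 25 mm electrode gap, compared with the FEM peak stresses quoted in the abstract.
V = 100e3          # applied potential, volts
d = 25e-3          # electrode gap, metres
E_uniform = V / d  # ideal uniform-field stress, V/m
print(f"Uniform-field estimate: {E_uniform:.2e} V/m")   # 4.00e+06 V/m

# FEM maxima from the abstract (V/m); the excess over the uniform-field value
# reflects the non-uniform electrode geometry and the modelling assumptions per oil.
fem_peaks = {"synthetic vegetable oil": 5.08e6, "olive oil": 5.11e6, "mineral oil": 5.62e6}
for oil, E_max in fem_peaks.items():
    print(f"{oil}: enhancement factor {E_max / E_uniform:.2f}")
```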
471 Changes in Chromatographically Assessed Fatty Acid Profile during Technology of Dairy Products
Authors: Lina Lauciene, Vaida Andruleviciute, Ingrida Sinkeviciene, Mindaugas Malakauskas, Loreta Serniene
Abstract:
Dairy product manufacturers constantly are looking for new markets for their production. And in most cases, the problem of product compliance with the composition requirements of foreign products is highlighted. This is especially true of the composition of milk fat in dairy products. It is well known that there are many factors such as feeding ratio, season, cow breed, stage of lactation that affect the fatty acid composition in milk. However, there is less evidence on the impact of the technological process on the composition of fatty acids in raw milk and products made from it. In this study the influence of the technological process on fat composition in 82% fat butter, 15% fat curd, 3.6% fat yogurt and 2.5% fat UHT milk was determined. The samples were collected at each stage of production, starting with raw milk and ending with the final product in the Lithuanian milk-processing company. Fatty acids methyl esters were quantified using a GC (Clarus 680, Perkin Elmer) equipped with flame ionization detector (FID) and a capillary column SP-2560, 100 m x 0.25 mm id x 0.20 µm. Fatty acids peaks were identified using Supelco® 37 Component FAME Mix. The concentration of each fatty acid was expressed in percent of the total fatty acid amount. In the case of UHT milk production, it was compared raw milk, cream, milk mixture, and UHT milk but significant differences were not estimated between these stages. Analyzing stages of the yogurt production (raw milk, pasteurized milk, and milk with a starter culture and yogurt), no significant changes were detected between stages as well. A slight difference was observed with C4:0 - a percentage of this fatty acid was less (p=0.053) in the final stage than in milk with the starter culture. During butter production, the composition of fatty acids in raw cream, buttermilk, and butter did not change significantly. Only C14:0 decreased in the butter then compared to buttermilk. The curd fatty acid analysis showed the increase of C6:0, C8:0, C10:0, C11:0, C12:0 C14:0 and C17:0 at the final stage when compared to raw milk, cream, milk mixture, and whey. Meantime the increase of C18:1n9c (in comparison with milk mixture and curd) and C18:2n6c (in comparison with raw milk, milk mixture, and curd) was estimated in cream. The results of this study suggest that the technological process did not affect the composition of fatty acids in UHT milk, yogurt, butter, and curd but had the impact on the concentration of individual fatty acids. In general, all of the fatty acids from the raw milk were converted into the final product, only some of them slightly changed the concentration. Therefore, in order to ensure an appropriate composition of certain fatty acids in the final product, producers must carefully choose the raw milk. Acknowledgment: This research was funded by Lithuanian Ministry of Agriculture (No. MT-17-13).Keywords: dairy products, fat composition, fatty acids, technological process
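The quantification step described above, expressing each fatty acid as a percentage of the total fatty acid amount, amounts to a simple normalisation of the GC-FID peak responses; a minimal sketch with made-up peak areas is shown below.

```python
# Express each fatty acid as a percent of the total fatty acid amount,
# i.e. normalise the GC-FID responses so they sum to 100 %.
# The peak areas below are made-up values for illustration only.
peak_areas = {"C4:0": 1.8, "C14:0": 6.2, "C16:0": 18.5, "C18:1n9c": 12.4, "C18:2n6c": 1.6}

total = sum(peak_areas.values())
percent_of_total = {fa: 100.0 * area / total for fa, area in peak_areas.items()}

for fa, pct in percent_of_total.items():
    print(f"{fa}: {pct:.1f} % of total fatty acids")
```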
470 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry
Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters
Abstract:
Recently, collection of biological fluids on special filter papers has become a popular micro-sampling technique. Especially, the dried blood spot (DBS) micro-sampling technique has gained much attention and is momently applied in various life sciences reserach areas. As a result of this popularity, DBS are not only intensively competing with the venous blood sampling method but are at this moment widely applied in numerous bioanalytical assays. In particular, in the screening of inherited metabolic diseases, pharmacokinetic modeling and in therapeutic drug monitoring. Recently, microsampling techniques were also introduced in “omics” areas, whereunder metabolomics. For a metabolic profiling study we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8uL sample onto pre-cutted 5-mm paper disks followed by drying of the disks for 100 minutes. Dried disks were then extracted by 100 uL of methanol. From liquid blood and plasma samples 40 uL were deproteinized with methanol followed by centrifugation and collection of supernatants. Supernatants and extracts were evaporated until dryness by nitrogen gas and residues derivated by O-methyxyamine and MSTFA. As internal standard C17:0-methylester in heptane (10 ppm) was used. Deconvolution and alignment of and full scan (m/z 50-500) MS data were done by AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical Data analysis was done by Principal Component Analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried). Whole blood could be a potential substitute matrix for plasma in Metabolomic profiling studies as well also micro-sampling techniques for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) as observed by PCA was due to different sample treatment protocols applied. More experiments need to be done to confirm obtained observations as well also a more rigorous validation .of these micro-sampling techniques is needed. The novelty of our approach can be found in the application of different biological fluid micro-sampling techniques for metabolic profiling.Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling
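A minimal sketch of the kind of PCA used here (after deconvolution and alignment) is given below; it assumes a samples-by-metabolites matrix of internal-standard-normalised peak areas and uses scikit-learn rather than the R workflow mentioned in the abstract, so it illustrates the approach rather than reproducing the authors' script.

```python
# Minimal PCA sketch for GC-MS metabolic profiles: rows = samples (liquid/dried
# blood and plasma), columns = aligned metabolite peak areas normalised to the
# C17:0 internal standard. Synthetic data stand in for the real matrix.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(40, 120))   # 40 samples x 120 metabolites
groups = np.repeat(["liquid_plasma", "dried_plasma", "liquid_blood", "dried_blood"], 10)

X_scaled = StandardScaler().fit_transform(np.log1p(X))   # log-transform, then autoscale
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)

print("Explained variance ratio:", pca.explained_variance_ratio_)
for g in np.unique(groups):
    centroid = scores[groups == g].mean(axis=0)
    print(f"{g}: PC1/PC2 centroid = {centroid.round(2)}")
```

Separation of the group centroids in the score plot is what the abstract attributes to the different sample treatment protocols (liquid versus dried).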
469 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
The DEMs quality may depend on several factors such as data source, capture method, processing type used to derive them, or the cell size of the DEM. The two most important capture methods to produce regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling that focused on its vertical component. For this type of evaluation there are standards such as NMAS and ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM and, therefore, its sampling is superficial and not punctual. This work is part of the Research Project "Functional Quality of Digital Elevation Models in Engineering" where it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter to which we call Point Cloud Product (PCpro). In the present work it is described the capture data on the ground and the postprocessing tasks until getting the point cloud that will be used as reference (PCref) to evaluate the PCpro quality. Each PCref consists of a patch 50x50 m size coming from a registration of 4 different scan stations. The area studied was the Spanish region of Navarra that covers an area of 10,391 km2; 30 patches homogeneously distributed were necessary to sample the entire surface. The patches have been captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the position of the scanner was inverted so that the characteristic shadow circle does not exist when the scanner is in direct position. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref has been carried out with real-time GNSS, and its accuracy positioning was better than 4 cm; this accuracy is much better than the altimetric mean square error estimated for the PCpro (<15 cm); The kind of DEM of interest is the corresponding to the bare earth, so that it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the postprocessing tasks the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud or after a resampling process DEM to DEM.Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
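As an illustration of the cloud-to-cloud comparison mentioned at the end of the abstract, the sketch below computes nearest-neighbour distances from the product cloud (PCpro) to the reference cloud (PCref) with a k-d tree; the arrays are random stand-ins for the real 50 x 50 m patches captured with the BLK360 and the LiDAR flight.

```python
# Cloud-to-cloud comparison sketch: for every PCpro point, find the nearest
# PCref point and summarise the distances. Random points stand in for the
# real 50 x 50 m patches.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
pc_ref = rng.uniform(0.0, 50.0, size=(200_000, 3))   # reference (terrestrial scan) patch
pc_pro = rng.uniform(0.0, 50.0, size=(35_000, 3))    # product (aerial LiDAR, ~14 pts/m2)

tree = cKDTree(pc_ref)
dist, _ = tree.query(pc_pro, k=1)                    # nearest-neighbour distance per point

print(f"mean C2C distance : {dist.mean():.3f} m")
print(f"RMS  C2C distance : {np.sqrt((dist**2).mean()):.3f} m")
print(f"95th percentile   : {np.percentile(dist, 95):.3f} m")
```

The same summary statistics can be produced after rasterising both clouds to DEMs, which is the alternative (DEM-to-DEM) comparison route the abstract names.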
468 The Effect of Emotional Stimuli Related to Body Imbalance in Postural Control and the Phenomenological Experience of Young Healthy Adults
Authors: David Martinez-Pernia, Alvaro Rivera-Rei, Alejandro Troncoso, Gonzalo Forno, Andrea Slachevsky, David Huepe, Victoria Silva-Mack, Jorge Calderon, Mayte Vergara, Valentina Carrera
Abstract:
Background: Recent theories in the field of emotions have taken the relevance of motor control beyond a system related to personal autonomy (walking, running, grooming), and integrate it into the emotional dimension. However, to our best knowledge, there are no studies that specifically investigate how emotional stimuli related to motor control modify emotional states in terms of postural control and phenomenological experience. Objective: The main aim of this work is to investigate the emotions produced by stimuli of bodily imbalance (neutral, pleasant and unpleasant) in the postural control and the phenomenological experience of young, healthy adults. Methodology: 46 healthy young people are shown emotional videos (neutral, pleasant, motor unpleasant, and non-motor unpleasant) related to the body imbalance. During the period of stimulation of each of the videos (60 seconds) the participant is standing on a force platform to collect temporal and spatial data of postural control. In addition, the electrophysiological activity of the heart and electrodermal activity is recorded. In relation to the two unpleasant conditions (motor versus non-motor), a phenomenological interview is carried out to collect the subjective experience of emotion and body perception. Results: Pleasant and unpleasant emotional videos have significant changes with respect to the neutral condition in terms of greater area, higher mean velocity, and greater mean frequency power on the anterior-posterior axis. The results obtained with respect to the electrodermal response was that the pleasurable and unpleasant conditions produced a significant increase in the phasic component with respect to the neutral condition. Regarding the electrophysiology of the heart, no significant change was found in any condition. Phenomenological experiences in the two unpleasant conditions differ in body perception and the emotional meaning of the experience. Conclusion: Emotional stimuli related to bodily imbalance produce changes in postural control, electrodermal activity, and phenomenological experience. This experimental setting could be relevant to be implemented in people with motor disorders (Parkinson, Stroke, TBI) to know how emotions affect motor control.Keywords: body imbalance stimuli, emotion, phenomenological experience, postural control
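For clarity on the postural-sway measures reported (greater area, higher mean velocity, and greater mean power frequency on the anterior-posterior axis), a minimal sketch of how two of them can be computed from force-platform centre-of-pressure data is shown below; the signal is synthetic and the exact definitions used by the authors may differ.

```python
# Sketch of common centre-of-pressure (COP) sway measures from a force platform:
# sway path / mean velocity, AP range, and mean power frequency of the AP signal.
# A synthetic 60 s recording stands in for the real data.
import numpy as np

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                 # 60 s stimulus window
rng = np.random.default_rng(1)
cop_ap = np.cumsum(rng.normal(0, 0.02, t.size))   # anterior-posterior COP, cm (random walk)
cop_ml = np.cumsum(rng.normal(0, 0.02, t.size))   # medio-lateral COP, cm

path = np.sum(np.hypot(np.diff(cop_ap), np.diff(cop_ml)))   # total sway path, cm
mean_velocity = path / t[-1]                                # cm/s

freqs = np.fft.rfftfreq(cop_ap.size, d=1 / fs)
power = np.abs(np.fft.rfft(cop_ap - cop_ap.mean())) ** 2
mean_power_freq = np.sum(freqs * power) / np.sum(power)     # Hz, AP axis

print(f"mean velocity            : {mean_velocity:.2f} cm/s")
print(f"AP range                 : {cop_ap.max() - cop_ap.min():.2f} cm")
print(f"mean power frequency (AP): {mean_power_freq:.2f} Hz")
```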
467 National Branding through Education: South Korean Image in Romania through the Language Textbooks for Foreigners
Authors: Raluca-Ioana Antonescu
Abstract:
The paper deals with Korean public diplomacy and national branding strategies, and with how Korean language textbooks have been used to construct the Korean national image. The research stands at the intersection of Linguistics and Political Science, and its central problem is the role of language and culture in the national branding process. The goal is to contribute to the literature situated at the intersection of International Relations and Applied Linguistics, while the objective is to conceptualize the idea of national branding by emphasizing a dimension that is not much discussed: education as an instrument of national branding and public diplomacy strategies. In order to examine the importance of language for national branding strategies, the paper answers one main question, How is the Korean language used in the construction of national branding?, and two secondary questions, How does the literature explore the relations between language and the construction of national branding? and What kind of image of South Korea do the language textbooks for foreigners transmit? To answer these questions, the paper starts from one main hypothesis, namely that language is an essential component of culture, used in the construction of a national brand influenced by traditional elements (such as Confucianism) as well as modern elements (such as Western influence), and from two secondary hypotheses: first, that the connections between language and national branding are little explored in the International Relations literature, and second, that the South Korean image is constructed through the promotion of a traditional society as well as a modern one. In terms of methodology, the paper analyzes the textbooks used in Romania at universities that offer Korean language classes during the three-year B.A. program, examining the dialogues, the descriptive texts, and the additional texts about Korean culture. The analysis focuses on rank status differences, the individual in relation to the collectivity, respect for harmony, and the image of the foreigner. The results show that the South Korean image projected in the textbooks conveys Confucian values and does not emphasize the changes the society has undergone due to modernity and globalization. The Westernized aspect of Korean society is conveyed more in an informative way, through references to Korean international companies and internal development (such as transport and other services), but it does not show the cultural changes the society underwent. Even though the paper uses the textbooks employed in Romania as teaching material, its findings could apply at least to other European countries, since the textbooks are issued by South Korean language schools and are also used in other European countries.
Keywords: confucianism, modernism, national branding, public diplomacy, traditionalism
466 Articles, Delimitation of Speech and Perception
Authors: Nataliya L. Ogurechnikova
Abstract:
The paper aims to clarify the function of articles in the English speech and specify their place and role in the English language, taking into account the use of articles for delimitation of speech. A focus of the paper is the use of the definite and the indefinite articles with different types of noun phrases which comprise either one noun with or without attributes, such as the King, the Queen, the Lion, the Unicorn, a dimple, a smile, a new language, an unknown dialect, or several nouns with or without attributes, such as the King and Queen of Hearts, the Lion and Unicorn, a dimple or smile, a completely isolated language or dialect. It is stated that the function of delimitation is related to perception: the number of speech units in a text correlates with the way the speaker perceives and segments the denotation. The two following combinations of words the house and garden and the house and the garden contain different numbers of speech units, one and two respectively, and reveal two different perception modes which correspond to the use of the definite article in the examples given. Thus, the function of delimitation is twofold, it is related to perception and cognition, on the one hand, and, on the other hand, to grammar, if the subject of grammar is the structure of speech. Analysis of speech units in the paper is not limited by noun phrases and is amplified by discussion of peripheral phenomena which are nevertheless important because they enable to qualify articles as a syntactic phenomenon whereas they are not infrequently described in terms of noun morphology. With this regard attention is given to the history of linguistic studies, specifically to the description of English articles by Niels Haislund, a disciple of Otto Jespersen. A discrepancy is noted between the initial plan of Jespersen who intended to describe articles as a syntactic phenomenon in ‘A Modern English Grammar on Historical Principles’ and the interpretation of articles in terms of noun morphology, finally given by Haislund. Another issue of the paper is correlation between description and denotation, being a traditional aspect of linguistic studies focused on articles. An overview of relevant studies, given in the paper, goes back to the works of G. Frege, which gave rise to a series of scientific works where the meaning of articles was described within the scope of logical semantics. Correlation between denotation and description is treated in the paper as the meaning of article, i.e. a component in its semantic structure, which differs from the function of delimitation and is similar to the meaning of other quantifiers. The paper further explains why the relation between description and denotation, i.e. the meaning of English article, is irrelevant for noun morphology and has nothing to do with nominal categories of the English language.Keywords: delimitation of speech, denotation, description, perception, speech units, syntax
465 Gender and Total Compensation, in an ‘Age’ of Disruption
Authors: Daniel J. Patricio Jiménez
Abstract:
The term 'total compensation’ refers to salary, training, innovation, and development, and of course, motivation; total compensation is an open and flexible system which must facilitate personal and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a ‘special’ role in this. Spain, in its statutory and conventional development, has not given sufficient response to new phenomena such as ‘bonuses’, ‘stock options’ or ‘fringe benefits’ (constructed dogmatically and by court decisions), the new digital reality, where cryptocurrency, new collaborative models and service provision -such as remote work-, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD.902 /2020 on 14 April 2021, certain measures are necessary under the principle of salary transparency; the valuation of jobs, the pay register (Rd. 6/2019) and the pay audit, are an example of this. Analyzing the methodologies, and in particular the determination and weight of the factors -so that the system itself is not discriminatory- is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect the reality, and since the beginning of the pandemic, there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; it is that which, based on internal equity, reflects the competitiveness of the company in terms of human capital. Spain has lost and has not recovered the relative weight of its wages; this is having a direct impact on our competitiveness, consequently on the precariousness of employment and undoubtedly on the levels of extreme poverty. Training is becoming more than ever a strategic factor; the new digital reality requires that each component of the system is connected, the transversality is imposed on us, this forces us to redefine content, to give answers to the new demands that the new normality requires because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. The so-called emotional compensation becomes particularly relevant at a time when pandemics, silence, and disruption, are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority, that it must be treated and communicated in an inclusive way because it increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the ‘new normality’: diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.Keywords: diversity, gender gap, human resources, sustainability.
464 Establishing a Sustainable Construction Industry: Review of Barriers That Inhibit Adoption of Lean Construction in Lesotho
Authors: Tsepiso Mofolo, Luna Bergh
Abstract:
The Lesotho construction industry fails to embrace environmental practices, which has led to excessive consumption of resources, land degradation, air and water pollution, loss of habitats, and high energy usage. The industry is highly inefficient, and this undermines its capability to make the optimum contribution to social, economic, and environmental development. Sustainable construction is therefore imperative to ensure that benefits are derived from all these intrinsic themes of sustainable development. The development of a sustainable construction industry requires a holistic approach that takes into consideration the interaction between Lean Construction principles, socio-economic and environmental policies, technological advancement, and the principles of construction or project management. Sustainable construction is a cutting-edge phenomenon, forming a component of a subjectively defined concept called sustainable development. Sustainable development can be defined in terms of attitudes and judgments that help ensure long-term environmental, social, and economic growth in society. The key concept of sustainable construction is Lean Construction. Lean Construction emanates from the principles of the Toyota Production System (TPS), namely the application and adaptation of fundamental concepts and principles that focus on waste reduction, increased value to the customer, and continuous improvement. The focus is on the reduction of socio-economic waste and the prevention of environmental degradation by reducing the carbon dioxide emission footprint. Lean principles require a fundamental change in the behaviour and attitudes of the parties involved in order to overcome barriers to cooperation. Prevalent barriers to the adoption of Lean Construction in Lesotho are mainly structural, such as unavailability of financing, corruption, operational inefficiency or wastage, lack of skills and training, inefficient construction legislation, and political interference. The consequential effects of these problems trickle down to the quality, cost, and time of the project, which then results in an escalation of operational costs due to the cost of rework or material wastage. Factor and correlation analysis of these barriers indicates that they are highly correlated, which poses a detrimental potential to the country's welfare, environment, and construction safety. It is therefore critical for Lesotho's construction industry to develop robust governance through bureaucratic reform and stringent law enforcement.
Keywords: construction industry, sustainable development, sustainable construction industry, lean construction, barriers to sustainable construction
463 Analysis of a Faience Enema Found in the Assasif Tomb No. -28- of the Vizier Amenhotep Huy: Contributions to the Study of the Mummification Ritual Practiced in the Theban Necropolis
Authors: Alberto Abello Moreno-Cid
Abstract:
Mummification was the process through which immortality was granted to the deceased, so it was of extreme importance to the Egyptians. The techniques of embalming had evolved over the centuries, and specialists created increasingly sophisticated tools. However, due to its eminently religious nature, knowledge about everything related to this practice was jealously preserved, and the testimonies that have survived to our time are scarce. For this reason, embalming instruments found in archaeological excavations are uncommon. The tomb of the Vizier Amenhotep Huy (AT No. -28-), located in the el-Assasif necropolis that is being excavated since 2009 by the team of the Institute of Ancient Egyptian Studies, has been the scene of some discoveries of this type that evidences the existence of mummification practices in this place after the New Kingdom. The clysters or enemas are the fundamental tools in the second type of mummification described by the historian Herodotus to introduce caustic solutions inside the body of the deceased. Nevertheless, such objects only have been found in three locations: the tomb of Ankh-Hor in Luxor, where a copper enema belonged to the prophet of Ammon Uah-ib-Ra came to light; the excavation of the tomb of Menekh-ib-Nekau in Abusir, where was also found one made of copper; and the excavations in the Bucheum, where two more artifacts were discovered, also made of copper but in different shapes and sizes. Both of them were used for the mummification of sacred animals and this is the reason they vary significantly. Therefore, the object found in the tomb No. -28-, is the first known made of faience of all these peculiar tools and the oldest known until now, dated in the Third Intermediate Period (circa 1070-650 B.C.). This paper bases its investigation on the study of those parallelisms, the material, the current archaeological context and the full analysis and reconstruction of the object in question. The key point is the use of faience in the production of this item: creating a device intended to be in constant use seems to be a first illogical compared to other samples made of copper. Faience around the area of Deir el-Bahari had a strong religious component, associated with solar myths and principles of the resurrection, connected to the Osirian that characterises the mummification procedure. The study allows to refute some of the premises which are held unalterable in Egyptology, verifying the utilization of these sort of pieces, understanding its way of use and showing that this type of mummification was also applied to the highest social stratum, in which case the tools were thought out of an exceptional quality and religious symbolism.Keywords: clyster, el-Assasif, embalming, faience enema mummification, Theban necropolis
462 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to robot autonomous recognition, which include high-performance computing, physical system modeling, extensive sensor coordination, and dataset deep learning, have not been explored using intelligent construction. Relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves the robot's grasp of the scene and capacity for autonomous operation. Intelligent vision guidance technology for industrial robots has a serious issue with camera calibration, and the use of intelligent visual guiding and identification technologies for industrial robots in industrial production has strict accuracy requirements. It can be considered that visual recognition systems have challenges with precision issues. In such a situation, it will directly impact the effectiveness and standard of industrial production, necessitating a strengthening of the visual guiding study on positioning precision in recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study will identify the position of target components by detecting the information at the boundary and corner of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The RGB image's inclination detection and the depth image's verification will be used to determine the component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information.Keywords: robotic construction, robotic assembly, visual guidance, machine learning
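One concrete step described above, determining a component's aspect ratio from its dense point cloud before assigning it a pose, can be sketched as a principal-axes bounding box computation; the snippet below uses a random block-shaped cloud and is only an illustration of the idea, not the authors' pipeline.

```python
# Sketch: estimate a building component's principal-axes bounding box and
# aspect ratio from its segmented point cloud, as a precursor to pose assignment.
# A random box-shaped cloud stands in for a real dense point cloud segment.
import numpy as np

rng = np.random.default_rng(7)
points = rng.uniform([0, 0, 0], [1.2, 0.3, 0.1], size=(5000, 3))   # a 1.2 x 0.3 x 0.1 m block

centroid = points.mean(axis=0)
centered = points - centroid
_, _, vt = np.linalg.svd(centered, full_matrices=False)   # principal axes (rows of vt)
extents = centered @ vt.T                                  # coordinates in the principal frame
sizes = extents.max(axis=0) - extents.min(axis=0)          # oriented bounding-box edge lengths
sizes = np.sort(sizes)[::-1]

print("OBB edge lengths (m):", sizes.round(3))
print("aspect ratio (longest : shortest): %.1f : 1" % (sizes[0] / sizes[-1]))
print("component pose: centroid", centroid.round(3), "axes\n", vt.round(3))
```

In a full pipeline the RGB image's inclination estimate and the depth image would then be used, as the abstract describes, to verify the component's present posture before it is placed in the shared coordinate system.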
461 Study into the Interactions of Primary Limbal Epithelial Stem Cells and hTCEpi Using Tissue Engineered Cornea
Authors: Masoud Sakhinia, Sajjad Ahmad
Abstract:
Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimum conditions needed to effectively expand LESCs in vitro whilst preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a tissue-engineered 3D collagen matrix. The effect of these two different fibroblast types on LESCs and the hTCEpi corneal epithelial cell line was then determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model which could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before conclusions can be drawn on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, whilst also exhibiting a relatively low expression level of P63, a marker for undifferentiated LESCs. Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results on the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers used in our study was inconclusive. Though it is less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.
Keywords: cornea, limbal stem cells, tissue engineering, PCR
460 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting
Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava
Abstract:
A fundamental investigation is performed on the development of graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM) in order to improve specific strength and wear resistance property of SS 316L. Firstly, SS 316L powder and graphene were mixed in a fixed ratio using low energy planetary ball milling. The milled powder is then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile test) at ambient temperature, and obtained results indicate that the properties of the composite increased significantly with the addition of 0.2 wt. % Gr. Increment of about 25% (from 194 to 242 HV) and 70% (from 502 to 850 MPa) is obtained in hardness and yield strength of composite, respectively. Raman mapping and XRD were performed to see the distribution of Gr in the matrix and its effect on the formation of carbide, respectively. Results of Raman mapping show the uniform distribution of graphene inside the matrix. Electron back scatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to thermal gradient, elongated grains were observed along the building direction, and grains get finer with the addition of Gr. Most of the mechanical components are subjected to several types of wear conditions. Therefore, it is very necessary to improve the wear property of the component, and hence apart from strength and hardness, a tribological property of composite was also measured under dry sliding condition. Solid lubrication property of Gr plays an important role during the sliding process due to which the wear rate of composite reduces up to 58%. Also, the surface roughness of worn surface reduces up to 70% as measured by 3D surface profilometry. Finally, it can be concluded that SLM is an efficient method of fabricating cutting edge metal matrix nano-composite having Gr like reinforcement, which was very difficult to fabricate through conventional manufacturing techniques. Prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, due to the unavailability of a considerable amount of literature in a similar domain, more experimental works need to perform, such as thermal property analysis, and is a part of ongoing study.Keywords: selective laser melting, graphene, composite, mechanical property, tribological property
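The quoted improvements can be verified directly from the reported values; a two-line check is included below.

```python
# Quick check of the quoted improvements from the reported values.
hardness_base, hardness_gr = 194, 242        # HV, SS 316L vs. SS 316L + 0.2 wt.% Gr
yield_base, yield_gr = 502, 850              # MPa, yield strength

print(f"hardness increase      : {(hardness_gr - hardness_base) / hardness_base:.0%}")  # ~25 %
print(f"yield strength increase: {(yield_gr - yield_base) / yield_base:.0%}")           # ~69-70 %
```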
459 Modelling High Strain Rate Tear Open Behavior of a Bilaminate Consisting of Foam and Plastic Skin Considering Tensile Failure and Compression
Authors: Laura Pytel, Georg Baumann, Gregor Gstrein, Corina Klug
Abstract:
Premium cars often coat the instrument panels with a bilaminate consisting of a soft foam and a plastic skin. The coating is torn open during the passenger airbag deployment under high strain rates. Characterizing and simulating the top coat layer is crucial for predicting the attenuation that delays the airbag deployment, effecting the design of the restrain system and to reduce the demand of simulation adjustments through expensive physical component testing.Up to now, bilaminates used within cars either have been modelled by using a two-dimensional shell formulation for the whole coating system as one which misses out the interaction of the two layers or by combining a three-dimensional formulation foam layer with a two-dimensional skin layer but omitting the foam in the significant parts like the expected tear line area and the hinge where high compression is expected. In both cases, the properties of the coating causing the attenuation are not considered. Further, at present, the availability of material information, as there are failure dependencies of the two layers, as well as the strain rate of up to 200 1/s, are insufficient. The velocity of the passenger airbag flap during an airbag shot has been measured with about 11.5 m/s during first ripping; the digital image correlation evaluation showed resulting strain rates of above 1500 1/s. This paper provides a high strain rate material characterization of a bilaminate consisting of a thin polypropylene foam and a thermoplasctic olefins (TPO) skin and the creation of validated material models. With the help of a Split Hopkinson tension bar, strain rates of 1500 1/s were within reach. The experimental data was used to calibrate and validate a more physical modelling approach of the forced ripping of the bilaminate. In the presented model, the three-dimensional foam layer is continuously tied to the two-dimensional skin layer, allowing failure in both layers at any possible position. The simulation results show a higher agreement in terms of the trajectory of the flaps and its velocity during ripping. The resulting attenuation of the airbag deployment measured by the contact force between airbag and flaps increases and serves usable data for dimensioning modules of an airbag system.Keywords: bilaminate ripping behavior, High strain rate material characterization and modelling, induced material failure, TPO and foam
458 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and characteristic length of the physical structure are the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies that are widely estimated using two-point cross-correlation analysis to convert the temporal lag to a separation distance using Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived by similarity theory correlations in ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
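To make the length-scale estimation concrete, a minimal sketch of the single-point procedure described above (autocorrelation of the velocity fluctuations, integration of the correlation up to its first zero crossing to obtain an integral time scale, and conversion to a length with Taylor's hypothesis, L = U*T) is given below; the synthetic signal is only a stand-in for the sonic-anemometer records from the two sites.

```python
# Integral length scale from a single-point velocity record: autocorrelate the
# fluctuations, integrate the correlation up to its first zero crossing to get
# the integral time scale, then apply Taylor's hypothesis L = U * T.
# A synthetic signal stands in for the field data.
import numpy as np

fs = 20.0                                          # sampling rate, Hz
rng = np.random.default_rng(3)
white = rng.normal(0, 1, 5 * 60 * int(fs))         # 5 min record
u = np.convolve(white, np.ones(40) / 40, mode="same")   # crude correlated fluctuation
U_mean = 8.0                                       # mean wind speed at sensor height, m/s

u_fluct = u - u.mean()
acf = np.correlate(u_fluct, u_fluct, mode="full")[u_fluct.size - 1:]
acf /= acf[0]                                      # normalised autocorrelation
first_zero = np.argmax(acf <= 0)                   # index of first zero crossing
T_int = np.sum(acf[:first_zero]) / fs              # integral time scale, s (rectangle rule)
L_ux = U_mean * T_int                              # longitudinal length scale, m

print(f"integral time scale : {T_int:.2f} s")
print(f"length scale L_ux   : {L_ux:.1f} m")
```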
457 The Influence of Destination Image on Tourists' Experience at Osun Osogbo World Heritage Site
Authors: Bola Adeleke, Kayode Ogunsusi
Abstract:
Heritage sites have evolved to preserve culture and heritage and also to educate and entertain tourists. Tourist travel decisions and behavior are influenced by destination image and value of the experience of tourists. Perceived value is one of the important tools for securing a competitive edge in tourism destinations. The model of Ritchie and Crouch distinguished 36 attributes of competitiveness which are classified into five factors which are quality of experience, touristic attractiveness, environment and infrastructure, entertainment/outdoor activities and cultural traditions. The study extended this model with a different grouping of the determinants of destination competitiveness. The theoretical framework used for this study assumes that apart from attractions already situated in the grove, satisfaction with destination common service, and entertainment and events, can all be used in creating a positive image for/and in attracting customers (destination selection) to visit Osun Sacred Osogbo Grove during and after annual celebrations. All these will impact positively on travel experience of customers as well as their spiritual fulfillment. Destination image has a direct impact on tourists’ satisfaction which consequently impacts on tourists’ likely future behavior on whether to revisit a cultural destination or not. The study investigated the variables responsible for destination image competitiveness of the Heritage Site; assessed the factors enhancing the destination image; and evaluated the perceived value realized by tourists from their cultural experience at the grove. A complete enumeration of tourists above 18 years of age who visited the Heritage Site within the month of March and April 2017 was taken. 240 respondents, therefore, were used for the study. The structured questionnaire with 5 Likert scales was administered. Five factors comprising 63 variables were used to determine the destination image competitiveness through principal component analysis, while multiple regressions were used to evaluate perceived value of tourists at the grove. Results revealed that 11 out of the 12 variables determining the destination image competitiveness were significant in attracting tourists to the grove. From the R-value, all factors predicted tourists’ value of experience strongly (R= 0.936). The percentage variance of customer value was explained by 87.70% of the variance of destination common service, entertainment and event satisfaction, travel environment satisfaction and spiritual satisfaction, with F-value being significant at 0.00. Factors with high alpha value contributed greatly to adding value to enhancing destination and tourists’ experience. 11 variables positively predicted tourist value with significance. Managers of Osun World Heritage Site should improve on variables critical to adding values to tourists’ experience.Keywords: competitiveness, destination image, Osun Osogbo world heritage site, tourists
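The reported figures (R = 0.936, about 87.7% of the variance in customer value explained by the four satisfaction factors) correspond to an ordinary multiple regression; a minimal sketch of that calculation on synthetic factor scores is shown below, and it does not attempt to reproduce the study's numbers.

```python
# Sketch of the multiple regression step: customer value regressed on the four
# satisfaction factors (destination common service, entertainment and events,
# travel environment, spiritual satisfaction). Synthetic scores stand in for
# the 240 questionnaire responses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.normal(3.5, 0.8, size=(240, 4))                 # four 5-point-Likert factor scores
true_beta = np.array([0.35, 0.25, 0.20, 0.30])
y = 0.5 + X @ true_beta + rng.normal(0, 0.3, 240)       # customer value

model = LinearRegression().fit(X, y)
r_squared = model.score(X, y)
print(f"R   = {np.sqrt(r_squared):.3f}")
print(f"R^2 = {r_squared:.3f}  (share of variance in customer value explained)")
print("coefficients:", model.coef_.round(3))
```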
Procedia PDF Downloads 187
456 Effect of Planting Date on Quantitative and Qualitative Characteristics of Different Bread Wheat and Durum Cultivars
Authors: Mahdi Nasiri Tabrizi, A. Dadkhah, M. Khirkhah
Abstract:
In order to study the effect of planting date on yield, yield components, and quality traits in bread and durum wheat varieties, a field split-plot experiment based on a completely randomized design with three replications was conducted at the Agricultural and Natural Resources Research Center of Razavi Khorasan, located in the city of Mashhad, during 2013-2014. The main factor consisted of five sowing dates (1 October, 15 December, 1 March, 10 March, and 20 March), and the sub-factor consisted of six bread wheat cultivars (Bahar, Pishgam, Pishtaz, Mihan, Falat, and Karim) and two durum wheat cultivars (Dena and Dehdasht). According to the analysis of variance, the effect of planting date was significant for all examined traits (grain yield, biological yield, harvest index, number of grains per spike, thousand-kernel weight, number of spikes per square meter, plant height, number of days to heading, number of days to maturity, duration of the grain-filling period, wet gluten percentage, dry gluten percentage, gluten index, and protein percentage). With delayed planting, most traits decreased significantly, except the quality traits (wet gluten percentage, dry gluten percentage, and protein percentage). Means comparison showed that, among the planting dates, the highest grain yield and biological yield belonged to the first planting date (October), with mean production of 5/6 and 1/17 tons per hectare, respectively, while the highest bread quality (gluten index, mean 85) and protein percentage (mean 13%) belonged to the fifth planting date. The effect of genotype was also significant for all traits. Among the studied wheat genotypes, the highest grain yield belonged to the Dehdasht cultivar, with an average production of 4.4 tons per hectare. The highest protein percentage (13.4%) belonged to the Dehdasht cultivar, and the highest bread quality (gluten index of 90) to the Falat cultivar. The interaction between cultivar and planting date was significant for all traits, and different varieties showed different trends for these traits. The highest grain yield belonged to the first planting date (October) and the Falat cultivar, with an average production of 6/7 tons per hectare, although its grain yield did not differ significantly from the Pishtaz and Mihan cultivars; the highest gluten index (bread quality index) and protein percentage belonged to the third planting date, with the Karim cultivar at 7.98 and the Dena cultivar at 7.14%, respectively. Keywords: yield component, yield, planting date, cultivar, quality traits, wheat
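Where the split-plot analysis of variance is concerned, a minimal sketch of how such a design can be analyzed is given below; the yield values are simulated placeholders, not the trial data, and the whole-plot error term (replication × sowing date) against which the planting-date effect should be tested is noted in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated yields (t/ha) for a split-plot layout: 5 sowing dates on main plots,
# 8 cultivars on sub-plots, 3 replications. The values are placeholders, not trial data.
rng = np.random.default_rng(2)
dates = [f"date_{i}" for i in range(1, 6)]
cultivars = ["Bahar", "Pishgam", "Pishtaz", "Mihan", "Falat", "Karim", "Dena", "Dehdasht"]
rows = [{"date": d, "rep": r, "cultivar": c,
         "yield_tha": 6.0 - 0.8 * i + rng.normal(0, 0.4)}
        for i, d in enumerate(dates) for r in range(1, 4) for c in cultivars]
df = pd.DataFrame(rows)

# Split-plot ANOVA: rep:date is the whole-plot error term; the F-test for the
# planting-date effect should use the rep:date mean square, not the residual.
lm = smf.ols("yield_tha ~ C(rep) + C(date) * C(cultivar) + C(rep):C(date)", data=df).fit()
table = anova_lm(lm)
print(table)

ms = table["sum_sq"] / table["df"]
f_date = ms["C(date)"] / ms["C(rep):C(date)"]          # whole-plot F-test for sowing date
print(f"F(date) against whole-plot error = {f_date:.1f}")
```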
Procedia PDF Downloads 430
455 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Trough Operation
Authors: Afshin Kadri
Abstract:
Using Renewable Energy Resources (RES) as a type of DG unit is developing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some of the operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems, and these interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must have the required features to support the grid under voltage sag conditions. In these conditions, two curves specify the magnitude of the reactive component of the current as a function of the voltage drop and the minimum time for which the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter; an increase in the amplitude of high-frequency components of the injected current and operation away from the maximum power point in inverters connected to photovoltaic panels are some of them. An important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The losses in the DC bus capacitors, which are electrolytic capacitors, increase their temperature and decrease their lifespan. In addition, if the inverter is connected to the photovoltaic panels directly and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. The traditional method of eliminating these ripples is to use a bidirectional converter on the DC bus, which works as a buck-boost converter and transfers the ripple to its own DC bus. Despite eliminating the ripple on the main DC bus, this method cannot solve the reliability problem because it still uses an electrolytic capacitor in its DC bus. In this work, a control method is proposed which uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripple by injecting unbalanced currents into the grid. Moreover, the proposed method works on the basis of constant power control. In this way, in addition to supporting the amplitude of the grid voltage, it stabilizes the grid frequency by injecting active power. The proposed method can also eliminate the DC bus ripple during deep voltage drops, which would otherwise increase the amplitude of the reference current beyond the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the ripple in the DC bus is eliminated, although the generated power decreases. Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
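A small numerical check of the principle behind constant power control under an unbalanced sag: if the injected current is made proportional to the difference of the positive- and negative-sequence voltage components, the instantaneous three-phase power, and hence the double-frequency DC-bus ripple, stays constant. The sag depth, power set-point, and frequency below are illustrative assumptions, not values from the paper.

```python
import numpy as np

f, P_ref = 50.0, 5000.0                    # grid frequency [Hz], active power set-point [W]
w = 2 * np.pi * f
t = np.linspace(0, 0.04, 2000)             # two fundamental cycles
Vp, Vn = 230.0, 60.0                       # positive/negative sequence magnitudes (phase RMS)

def abc(V_rms, sign):
    """Instantaneous three-phase set of a symmetric sequence (sign=+1 pos., -1 neg.)."""
    a = np.sqrt(2) * V_rms
    return np.array([a * np.cos(w * t),
                     a * np.cos(w * t - sign * 2 * np.pi / 3),
                     a * np.cos(w * t + sign * 2 * np.pi / 3)])

v_pos, v_neg = abc(Vp, +1), abc(Vn, -1)
v = v_pos + v_neg                          # unbalanced grid voltage during the sag

i_bal = P_ref / (3 * Vp**2) * v_pos                          # balanced (positive-sequence) injection
i_cp = P_ref / (3 * (Vp**2 - Vn**2)) * (v_pos - v_neg)       # constant-power injection

for name, i in [("balanced injection", i_bal), ("constant-power injection", i_cp)]:
    p = (v * i).sum(axis=0)                # instantaneous three-phase power
    print(f"{name}: mean P = {p.mean():.0f} W, 2f ripple = {p.max() - p.min():.1f} W")
```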
Procedia PDF Downloads 76
454 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction
Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari
Abstract:
A cylindrical alumina microfiltration membrane (GMITM Corporation, inside diameter = 9 mm, outside diameter = 13 mm, length = 50 mm) with an average pore size of 0.5 micrometer and a porosity of about 0.35 was used as the support for the membrane reactor. The support was soaked in boehmite sols, whose mean particle size was adjusted in the range of 50 to 500 nm by carefully controlling the hydrolysis time, and calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin, dense layer of silica by the counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% nickel, prepared by a standard procedure, was used to make the catalytic layer. BET, SEM, and XRD analyses were used to characterize this layer. The catalytic membrane reactor was placed in an experimental setup to evaluate its permeation and hydrogen separation performance for the steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, with the reforming reaction occurring on the inner side of the membrane. A methane stream diluted with nitrogen and deionized water at a steam-to-carbon (S/C) ratio of 3.0 entered the reactor after the reactor had been heated to 500 °C at a rate of 2 °C/min and the catalytic layer had been reduced in the presence of hydrogen for 2.5 hours. A nitrogen flow was used as sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the produced gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure total CH4 conversion and H2 permeation. BET analysis indicated a uniform size distribution for the catalyst, with an average pore size of 280 nm and an average surface area of 275 m².g⁻¹. Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide in the temperature range of 500-800 °C, and the results showed almost the same hydrogen permeance and hydrogen selectivity values as for the composite membrane without the catalytic layer. The performance of the catalytic membrane was evaluated by applying the membrane as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h⁻¹ and 2 bar. CH4 conversion increased from 50% to 85% as the reaction temperature increased from 600 °C to 750 °C, which is sufficiently above the equilibrium curve at the reaction conditions, but slightly lower than for a membrane reactor with a packed nickel catalyst bed, owing to the bed’s higher surface area compared to the catalytic layer. Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance
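For readers less familiar with the two figures of merit reported here, the sketch below shows how H2 permeance (permeate molar flow per membrane area per pressure difference) and CH4 conversion are computed; the flow rates and pressures are hypothetical placeholders, with only the membrane geometry taken from the abstract.

```python
import math

# Back-of-the-envelope evaluation of H2 permeance from a single-gas test and CH4
# conversion from the outlet analysis. Only the formulas follow the text; the
# numerical inputs below are assumed for illustration.
area = math.pi * 0.013 * 0.050           # outer membrane surface [m^2]: OD 13 mm, length 50 mm

# Hypothetical single-component H2 permeation test
q_h2 = 2.0e-6                            # permeate molar flow [mol/s] (assumed)
dp = 1.0e5                               # trans-membrane pressure difference [Pa] (assumed)
permeance = q_h2 / (area * dp)           # mol m^-2 s^-1 Pa^-1
print(f"H2 permeance ~ {permeance:.2e} mol m^-2 s^-1 Pa^-1")

# CH4 conversion from hypothetical inlet/outlet molar flows measured by the GC
ch4_in, ch4_out = 1.0e-4, 0.25e-4        # mol/s
conversion = (ch4_in - ch4_out) / ch4_in
print(f"CH4 conversion ~ {conversion:.0%}")
```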
Procedia PDF Downloads 256
453 Delving into the Concept of Social Capital in the Smart City Research
Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh
Abstract:
The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across various aspects of urban life, including the environmental, social, and economic domains, such as climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has given urban authorities hope that the negative effects of various socio-economic and environmental crises can potentially be mitigated through the use of information and communication technologies. The concept of the 'smart city' represents an emerging solution to the urban challenges arising from increased urbanization using ICTs. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents. The implementation of smart city projects and initiatives has the potential to (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. The impact of the smart city on the social capital of the people who use smart city systems, and on governance and policymakers, is therefore worth exploring. The importance of inhabitants to the existence and development of smart cities cannot be overlooked, and this concept has been approached from different perspectives in smart city studies. Reviewing the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research indicates that social capital is a component of a smart city, embedded in its dimensions, definitions, or strategies, while other work sees it as a social outcome of smart city development and points out that the move to smart cities improves social capital; in most cases, however, this remains an unproven hypothesis. Other studies show that social capital can enhance the functions of smart cities and that the consideration of social capital in planning smart cities should be promoted. Despite the existing theoretical and practical knowledge, there is a significant research gap in reviewing the knowledge domain of smart city studies through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing research in the field of the smart city through the lens of social capital. The research will use the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) method to review the relevant literature, focusing on the key concepts of 'Smart City' and 'Social Capital'. The studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading. Keywords: smart city, urban digitalisation, ICT, social capital
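A minimal sketch of the screening step of such a PRISMA workflow, deduplicating records exported from the Web of Science Core Collection and filtering them on the two key concepts before full-text reading; the toy records and column names are placeholders, not an actual export.

```python
import pandas as pd

# Hypothetical records standing in for a Web of Science export
records = pd.DataFrame([
    {"title": "Smart city governance and social capital", "abstract": "...", "source": "WoS"},
    {"title": "Smart city governance and social capital", "abstract": "...", "source": "WoS"},
    {"title": "IoT sensors for traffic management",        "abstract": "...", "source": "WoS"},
    {"title": "Social capital in digital urban platforms", "abstract": "...", "source": "WoS"},
])

# 1. Identification: remove duplicate records
records = records.drop_duplicates(subset=["title"])

# 2. Screening: keep records whose title or abstract mentions both key concepts
text = (records["title"] + " " + records["abstract"]).str.lower()
screened = records[text.str.contains("smart city") & text.str.contains("social capital")]

print(f"{len(records)} unique records -> {len(screened)} retained for full-text reading")
print(screened["title"].tolist())
```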
Procedia PDF Downloads 14
452 The Importance of the Phases of Information, Diagnosis, Planning, Intervention and Management in a Historic Center
Authors: Giovanni Duran Polo
Abstract:
Demonstrating the importance of stages such as information, diagnosis, management, and intervention is fundamental to having a historic center that is alive, of high quality, and inhabited. One of the major actions to take is to promote the concept of managing a historic center with harmonious development. For that, the actors concerned should strengthen the idea that the historic center can be a neighborhood of all and for all. The centers of historic cities present, like any other urban area, social and environmental issues; yet they carry an added value that no other city neighborhood has. The heritage component, whether expressed in the urban plan, the environmental quality, the architectural or landscape value of the properties, or certain land uses, is the differentiating element and the tool that makes them attractive in the face of the pressure exerted by new housing developments or shopping centers. That is why, drawing on the experience of working in historic centers, the actions to be taken in heritage areas are set out here. This paper will show how, in each of these places, the information phase was used to gather all the data needed to get closer to the territory with specific data; the diagnosis phase allowed the actors to see what state the center was in, how this heart relates to the rest of the city, what problems affected its situation, and what potential it had to compete in a global market. It also discusses the importance of the planning phase, as the legal and normative basis that gives the process order and a concept, establishing what can and cannot be done in an area where citizens hold many myths and histories when they want to intervene in protected buildings. It is also appropriate to show how the intervention phase could be developed, in which the actions on the tangible elements and the interventions for the protection of heritage property are executed. Management is the final phase, which carries out everything that was set down on paper; it is the time to orient, explain, persuade, promote, and encourage citizens to take care of the heritage. This is profitable as well as an obligation, and it is not an insurmountable burden. It has to be said that this is the moment to play every card to make the historic center and its heritage more alive today, the moment to make it more inhabited and to transform it into a quality place, so that citizens will cherish it and understand the importance of such a place. Inhabited historic centers with the required facilities and equipment, quality commerce, a constant cultural offer, well-preserved buildings, and tidy, modern, safe public spaces are always attractive for tourism; but first of all, the place should be conceived for its citizens, otherwise everything will be doomed to failure. Keywords: development, diagnosis, heritage historic center, intervention, management, patrimony
Procedia PDF Downloads 396
451 Dexamethasone Treatment Deregulates Proteoglycans Expression in Normal Brain Tissue
Authors: A. Y. Tsidulko, T. M. Pankova, E. V. Grigorieva
Abstract:
High-grade gliomas are the most frequent and most aggressive brain tumors, characterized by active invasion of tumor cells into the surrounding brain tissue, where the extracellular matrix (ECM) plays a crucial role. Disruption of the ECM can be involved in the effectiveness and side effects of anticancer drugs and also in tumor relapses. The anti-inflammatory agent dexamethasone is a common drug used during high-grade glioma treatment for alleviating cerebral edema. Although dexamethasone is widely used in the clinic, its effects on the ECM of normal brain tissue remain poorly investigated. It is known that proteoglycans (PGs) are a major component of the extracellular matrix in the central nervous system. In our work, we studied the effects of dexamethasone on the ECM proteoglycans (syndecan-1, glypican-1, perlecan, versican, brevican, NG2, decorin, biglycan, lumican) using RT-PCR in an experimental animal model. It was shown that proteoglycans in rat brain have age-specific expression patterns. In the early postnatal rat brain (8-day-old rat pups), overall PG expression was quite high, and the mainly expressed PGs were biglycan, decorin, and syndecan-1. The overall transcriptional activity of PGs in the adult rat brain is decreased 1.5-fold compared to the postnatal brain. The expression pattern changed as well, with biglycan, decorin, syndecan-1, glypican-1, and brevican becoming almost equally expressed. PG expression patterns create a specific tissue microenvironment that differs between the developing and adult brain. A dexamethasone regimen close to the one used in the clinic during high-grade glioma treatment significantly affects proteoglycan expression. It was shown that overall PG transcriptional activity is increased 1.5-2-fold after dexamethasone treatment. The most up-regulated PGs were biglycan, decorin, and lumican. The PG expression pattern in the adult brain changed after treatment, becoming quite close to the expression pattern in the developing brain. It is known that the microenvironment in developing tissues promotes cell proliferation, while in adult tissues proliferation is usually suppressed. The changes occurring in the adult brain after dexamethasone treatment may therefore lead to re-activation of cell proliferation due to signals from the changed microenvironment. Taken together, the obtained data show that dexamethasone treatment significantly affects the normal brain ECM, creating a microenvironment favorable to tumor cell proliferation, and thus can reduce the effectiveness of anticancer treatment and promote tumor relapses. This work has been supported by the Russian Science Foundation (RSF Grant 16-15-10243). Keywords: dexamethasone, extracellular matrix, glioma, proteoglycan
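The fold-change figures quoted above are the kind of values the standard 2^-ΔΔCt method yields for RT-PCR data; a minimal sketch of that calculation follows, with made-up Ct values and a hypothetical reference gene rather than the study's measurements.

```python
import numpy as np

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (target normalized to a reference gene)."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: a target PG (e.g., decorin), dexamethasone-treated vs. control brain;
# all Ct values below are invented for illustration only.
fc = fold_change(ct_target_treated=[24.1, 24.3, 24.0], ct_ref_treated=[18.0, 18.1, 17.9],
                 ct_target_control=[25.2, 25.0, 25.1], ct_ref_control=[18.1, 18.0, 18.2])
print(f"fold change (treated/control) ~ {fc:.2f}")
```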
Procedia PDF Downloads 199
450 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra, and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria, and minimum temperature. We assess the efficiency of the model by comparing it empirically with established Gaussian process, Kriging, simple linear, and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall. Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem
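One of the baselines the hierarchical model is compared against is Gaussian-process (Kriging-type) regression; a minimal sketch of such a baseline is shown below, regressing monthly rainfall on gauge coordinates plus the three covariates named in the abstract. The synthetic gauge values are placeholders, not the Narok records.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Synthetic gauges standing in for the 23 Narok rain gauges (values are illustrative)
rng = np.random.default_rng(3)
n = 23
X = np.column_stack([rng.uniform(0, 100, n),      # easting [km]
                     rng.uniform(0, 100, n),      # northing [km]
                     rng.uniform(1500, 2500, n),  # elevation [m]
                     rng.uniform(50, 250, n),     # distance to Lake Victoria [km]
                     rng.uniform(8, 16, n)])      # minimum temperature [deg C]
y = 80 + 0.02 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 10, n)   # monthly rainfall [mm]

kernel = ConstantKernel() * RBF(length_scale=np.ones(5)) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict rainfall and its standard error for a new 5 x 5 km grid cell
x_new = np.array([[50.0, 50.0, 1900.0, 120.0, 12.0]])
mean, sd = gp.predict(x_new, return_std=True)
print(f"predicted rainfall: {mean[0]:.1f} mm +/- {sd[0]:.1f} mm")
```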
Procedia PDF Downloads 294
449 Effects of Forest Therapy on Depression among Healthy Adults
Authors: Insook Lee, Heeseung Choi, Kyung-Sook Bang, Sungjae Kim, Minkyung Song, Buhyun Lee
Abstract:
Background: A clearer and more comprehensive understanding of the effects of forest therapy on depression is needed to further refine forest therapy programs. The purpose of this study was to review the literature on forest therapy programs designed to decrease the level of depression among adults and to evaluate current forest therapy programs. Methods: This literature review was conducted using various databases, including PubMed, EMBASE, CINAHL, PsycArticle, KISS, RISS, and DBpia, to identify relevant studies published up to January 2016. The two authors independently screened the full-text articles using the following criteria: 1) intervention studies assessing the effects of forest therapy on depression among healthy adults aged 18 and over; 2) including at least one control group or condition; 3) being peer-reviewed; and 4) being published in either English or Korean. The Scottish Intercollegiate Guidelines Network (SIGN) measurement tool was used to assess the risk of bias in each trial. Results: After screening the current literature, a total of 14 articles (English: 6, Korean: 8) were included in the present review. None of the studies used a randomized controlled trial (RCT) design, and the sample sizes ranged from 11 to 300. Walking in the forest and experiencing the forest with the five senses were the key components of forest therapy included in all studies. The majority of studies used a one-time intervention that usually lasted a few hours or half a day. The most widely used measure of depression was the Profile of Mood States (POMS). Most studies used self-reported, paper-and-pencil tests, and only 5 studies used both paper-and-pencil tests and physiological measures. Regarding the quality assessment based on the SIGN criteria, only 3 articles were rated 'acceptable' and the remaining 11 articles were rated 'low quality.' Regardless of the diversity in the format and content of the forest therapies, most studies showed a significant effect of forest therapy on reducing depression. Discussion: This systematic review showed that forest therapy is one of the emerging and effective intervention approaches for decreasing the level of depression among adults. Limitations of the current programs identified in the review were as follows: 1) small sample sizes; 2) a lack of objective and comprehensive measures of depression; and 3) inadequate information about the research process. Future studies assessing the long-term effect of forest therapy on depression using rigorous study designs are needed. Keywords: forest therapy, systematic review, depression, adult
Procedia PDF Downloads 292
448 Monitoring of Quantitative and Qualitative Changes in Combustible Material in the Białowieża Forest
Authors: Damian Czubak
Abstract:
The Białowieża Forest is a very valuable natural area, included in the UNESCO World Natural Heritage list, where, due to infestation by the bark beetle (Ips typographus), Norway spruce (Picea abies) stands have deteriorated. This catastrophic scenario has led to an increase in fire danger, owing to the occurrence of large amounts of dead wood and of grass cover that developed as light penetrated to the bottom of the stands. In a dry state, these factors are materials that favour ignition and the rapid spread of fire. One of the objectives of the study was to monitor the quantitative and qualitative changes of combustible material on permanent decay plots in spruce stands from 2012 to 2022. In addition, the size of the area with highly flammable vegetation was monitored, and a classification of the stands of the Białowieża Forest into flammability classes was made. The key factor that determines the potential fire hazard of a forest is the combustible material: primarily its type, quantity, moisture content, size, and spatial structure. Based on the inventory data for the forest districts in the Białowieża Forest, the average fire load and its changes over the years were calculated. The analysis was carried out taking into account the changes in the health status of the stands and the sanitary operations. The quantitative and qualitative assessment of fallen timber and of the ground-cover fire load used the results of the 2019 and 2021 inventories, based on approximately 9,000 circular plots. An assessment was made of the amount of potential fuel, understood as ground cover vegetation and dead wood debris. In addition, monitoring of areas with vegetation that poses a high fire risk was conducted using data from 2019 and 2021. All sub-areas were inventoried where vegetation posing a specific fire hazard represented at least 10% of the area with species characteristic of that cover. In addition to the size of the area with fire-prone vegetation, a very important element is the size of the fire load on the indicated plots. On representative plots, the biomass of the ground cover was measured on an area of 10 m², and the amount of biomass of each component was then determined. The resulting measure of the variability of ground cover in the stands was their flammability classification. The classification developed made it possible to track changes in the flammability classes of the stands over the period covered by the measurements. Keywords: classification, combustible material, flammable vegetation, Norway spruce
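As a rough illustration of how a ground-cover fire load can be derived from such sample plots, the sketch below averages the dry biomass of each fuel component measured on 10 m² plots and scales it to tonnes per hectare; the plot values and component names are invented for illustration.

```python
import numpy as np

# Dry biomass [kg] of each fuel component on 10 m^2 sample plots (invented values)
plots_kg_per_10m2 = {
    "grass":       [1.8, 2.4, 2.1, 3.0],
    "dead_wood":   [6.5, 4.0, 8.2, 5.1],
    "moss_litter": [1.1, 0.9, 1.4, 1.0],
}

PLOT_AREA_M2, M2_PER_HA = 10.0, 10_000.0
for component, values in plots_kg_per_10m2.items():
    kg_per_m2 = np.mean(values) / PLOT_AREA_M2          # plot mean scaled to kg per m^2
    t_per_ha = kg_per_m2 * M2_PER_HA / 1000.0           # kg -> tonnes, m^2 -> hectare
    print(f"{component:12s}: {t_per_ha:.2f} t/ha")
```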
Procedia PDF Downloads 93
447 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
A recession has a profound negative effect on all stakeholders involved. It follows that the timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting research and to verify the findings on real data from the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions. Just as importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investor trends which influence the economic cycle. These theoretical approaches are applied to real data from the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component. Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
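A minimal sketch in the spirit of the dynamic probit described above: the recession indicator is regressed on its own lag (the dynamic term) and on lagged leading indicators (yield-curve spread and stock-index return), with parameters estimated by maximum likelihood. The simulated series are stand-ins for the Czech and German data, and the bivariate extension with correlated error terms, which requires a custom likelihood, is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated quarterly data standing in for the real leading indicators
rng = np.random.default_rng(4)
T = 200
spread = rng.normal(1.0, 1.2, T)            # yield-curve spread [percentage points]
stock_ret = rng.normal(0.5, 4.0, T)         # stock-index return [%]
rec = np.zeros(T, dtype=int)
for t in range(1, T):                       # persistent recession indicator
    z = -0.9 + 1.2 * rec[t - 1] - 0.5 * spread[t - 1] - 0.05 * stock_ret[t - 1]
    rec[t] = int(rng.normal() < z)

df = pd.DataFrame({"rec": rec,
                   "rec_lag": np.roll(rec, 1),
                   "spread_lag": np.roll(spread, 1),
                   "ret_lag": np.roll(stock_ret, 1)}).iloc[1:]

# Dynamic probit: recession indicator on its own lag and lagged leading indicators
X = sm.add_constant(df[["rec_lag", "spread_lag", "ret_lag"]])
probit = sm.Probit(df["rec"], X).fit(disp=False)   # maximum-likelihood estimation
print(probit.params.round(2))
p_next = np.asarray(probit.predict(X.iloc[[-1]]))[0]
print(f"P(recession in the next period) = {p_next:.2f}")
```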
Procedia PDF Downloads 248