Search results for: Developed country
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13963

1633 Understanding the Relationship between Community and the Preservation of Cultural Landscape - Focusing on Organically Evolved Landscapes

Authors: Adhithy Menon E., Biju C. A.

Abstract:

Heritage monuments first came to public attention in the 1960s, when the concept of preserving them was introduced. In the 1990s, the concept of cultural landscapes gained importance, emphasizing culture and heritage in the context of the landscape. This paper is primarily concerned with the second category of cultural landscapes, organically evolved landscapes, as they represent a complex network of tangible and intangible elements, the environment, and the connections these share with the communities in which they are situated. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has identified 39 cultural sites as being in danger, including the Iranian city of Bam and the historic city of Zabid in Yemen. To ensure their protection in the future, a detailed analysis of the factors contributing to this degradation is necessary. An analysis of selected cultural landscapes from around the world is conducted to determine which parameters cause their degradation. The paper pursues the following objectives: to understand cultural landscapes and their importance for development; to examine the criteria for identifying cultural landscapes, their classifications, and the agencies that focus on their protection; to identify and analyze the parameters contributing to the deterioration of cultural landscapes based on literature and case studies (the cultural landscapes of Sintra, Rio de Janeiro, and Varanasi); and, finally, to develop strategies to enhance deteriorating cultural landscapes based on these parameters.
The major finding of the study is the impact of community on the derived parameters: integrity (natural factors, natural disasters, demolition of structures, deterioration of materials); authenticity (living elements, sense of place, building techniques, religious context, artistic expression); public participation (revenue, dependence on locale); awareness (demolition of structures, resource management); disaster management; environmental impact; and maintenance of the cultural landscape (linkages with other sites, dependence on locale, revenue, resource management). The parameters of authenticity, public participation, awareness, and maintenance are directly related to the community in which the cultural landscape is located. Therefore, by focusing on the community and addressing the identified parameters, the deterioration curve of cultural landscapes can be altered.

Keywords: community, cultural landscapes, heritage, organically evolved, public participation

Procedia PDF Downloads 83
1632 The Response of Mammal Populations to Abrupt Changes in Fire Regimes in Montane Landscapes of South-Eastern Australia

Authors: Jeremy Johnson, Craig Nitschke, Luke Kelly

Abstract:

Fire regimes, climate and topographic gradients interact to influence ecosystem structure and function across fire-prone, montane landscapes worldwide. Biota have developed a range of adaptations to historic fire regime thresholds, which allow them to persist in these environments. In south-eastern Australia, a signal of fire regime change is emerging across these landscapes, and anthropogenic climate change is likely one of the main drivers of the increase in burnt area and more frequent wildfire over the last 25 years. This shift has the potential to modify vegetation structure and composition at broad scales, which may produce landscape patterns to which biota are not adapted, increasing the likelihood of local extirpation of some mammal species. This study addresses concerns about the influence of abrupt changes in fire regimes on mammal populations in montane landscapes. It first examined the impact of climate, topography, and vegetation on fire patterns, and then explored the consequences of these changes for mammal populations and their habitats. Field studies were undertaken across diverse vegetation, fire severity and fire frequency gradients, utilising camera trapping and passive acoustic monitoring together with the collection of fine-scale vegetation data. Results show that drought is a primary contributor to fire regime shifts at the landscape scale, while topographic factors have a variable influence on wildfire occurrence at finer scales. Frequent, high-severity wildfire influenced forest structure and composition at broad spatial scales and, at fine scales, reduced the occurrence of hollow-bearing trees and promoted coarse woody debris. Mammals responded differently to shifts in forest structure and composition depending on their habitat requirements. This study highlights the complex interplay between fire regimes, environmental gradients, and biotic adaptations across temporal and spatial scales, and emphasizes the importance of understanding these interactions to effectively manage fire-prone ecosystems in the face of climate change.

Keywords: fire, ecology, biodiversity, landscape ecology

Procedia PDF Downloads 69
1631 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second-largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of this shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, so Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Measured precipitation, evaporation, inflow and outflow data were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and model simulations were cross-validated between MATLAB and COMSOL. The model conserves water volume; the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The hydrodynamic model can also evaluate the effects of wind stress exerted on the lake surface, which impacts lake water level, as well as the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
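The near-balance of rain with evaporation and the linear outflow control law lend themselves to a lumped sketch. This is a minimal illustration, not the authors' COMSOL model; only the lake surface area comes from the abstract, while the gain k, reference level, and all inputs below are assumed values, not calibrated for Lake Victoria.

```python
# Minimal lumped water-balance sketch (illustrative only).
AREA = 68_800e6  # lake surface area in m^2 (68,800 km^2, from the abstract)

def step_level(level, rain, evap, inflow, k=1e4, ref_level=0.0, dt=86400.0):
    """Advance the mean lake level [m] by one day.

    rain, evap : m/day over the lake surface
    inflow     : total river inflow, m^3/s
    outflow    : linear control law k * (level - ref_level), in m^3/s
    """
    outflow = max(0.0, k * (level - ref_level))
    dV = (rain - evap) * AREA + (inflow - outflow) * dt  # net volume change, m^3
    return level + dV / AREA
```

At the level where the control-law outflow equals the river inflow, the level is stationary whenever rain balances evaporation, mirroring the near-balance reported in the abstract.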

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 221
1630 A Systematic Map of the Research Trends in Wildfire Management in Mediterranean-Climate Regions

Authors: Renata Martins Pacheco, João Claro

Abstract:

Wildfires are becoming an increasing concern worldwide, causing substantial social, economic, and environmental disruptions. This situation is especially relevant in Mediterranean-climate regions, present on five continents, in which fire is not only a natural component of the environment but also perhaps one of the most important evolutionary forces. The rise in wildfire occurrences and their associated impacts suggests the need to identify knowledge gaps and enhance the basis of scientific evidence on how managers and policymakers may act effectively to address them. Considering that the main goal of a systematic map is to collate and catalog a body of evidence to describe the state of knowledge on a specific topic, it is a suitable approach for this purpose. In this context, the aim of this study is to systematically map the research trends in wildfire management practices in Mediterranean-climate regions. A total of 201 wildfire management studies were analyzed and systematically mapped in terms of their: year of publication; place of study; scientific outlet; research area (Web of Science) or research field (Scopus); wildfire phase; central research topic; main objective; research methods; and main conclusions or contributions. The results indicate an increasing number of studies on the topic (most from the last 10 years), but more than half of them are conducted in a few Mediterranean countries (60% of the analyzed studies were conducted in Spain, Portugal, Greece, Italy, or France), and more than 50% focus on pre-fire issues, such as prevention and fuel management. In contrast, only 12% of the studies focused on “Economic modeling” or “Human factors and issues,” which suggests that the triple bottom line of sustainability (social, environmental, and economic) is not being fully addressed by fire management research. More than one-fourth of the studies aimed to test new approaches in fire or forest management, suggesting that new knowledge is being produced in the field. Nevertheless, most studies (about 84%) employed quantitative research methods, and only 3% used methods that tackled social issues or drew on experts' and practitioners' knowledge. Perhaps this lack of multidisciplinary studies is one of the factors hindering progress in reducing wildfire occurrences and their impacts.
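At its core, a systematic map of this kind tallies coded studies by category and reports shares. The sketch below illustrates that step on invented records; the field names and example entries are assumptions, not the authors' actual coding sheet of 201 studies.

```python
from collections import Counter

# Hypothetical coded records; the real map covers 201 studies.
studies = [
    {"country": "Spain", "phase": "pre-fire"},
    {"country": "Portugal", "phase": "pre-fire"},
    {"country": "Greece", "phase": "during-fire"},
    {"country": "Australia", "phase": "post-fire"},
    {"country": "Spain", "phase": "pre-fire"},
]

def category_shares(records, field):
    """Percentage of studies falling in each category of a coded field."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {cat: 100.0 * n / total for cat, n in counts.items()}
```

Applied to the real coding sheet, the same tally over the "phase" field would reproduce the reported figure that more than 50% of studies address pre-fire issues.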

Keywords: wildfire, Mediterranean-climate regions, management, policy

Procedia PDF Downloads 121
1629 Understanding the Effect of Material and Deformation Conditions on the “Wear Mode Diagram”: A Numerical Study

Authors: A. Mostaani, M. P. Pereira, B. F. Rolfe

Abstract:

The increasing application of Advanced High Strength Steel (AHSS) in the automotive industry to fulfill crash requirements has introduced higher levels of wear in stamping dies and parts. Understanding wear behaviour in sheet metal forming is therefore of great importance, as it can help reduce the high costs currently associated with tool wear. At the contact between the die and the sheet, the tips of hard tool asperities interact with the softer sheet material; understanding the deformation that occurs during this interaction is central to understanding the wear mechanisms. For these reasons, the scratching of a perfectly plastic material by a rigid indenter has been widely examined in the literature, with finite element modelling (FEM) used in recent years to further understand the behaviour. The ‘wear mode diagram’ has been commonly used to classify the deformation regime of the soft work-piece during scratching into three modes: ploughing, wedge formation, and cutting. This diagram, based on 2D slip-line theory and the upper-bound method for a perfectly plastic work-piece and a rigid indenter, relates the wear modes to attack angle and interfacial strength, and has been the basis for many wear studies and wear models to date. It has also been concluded that galling is most likely to occur during the wedge formation mode. However, there has been little analysis in the literature of how the material behaviour and deformation conditions associated with metal forming processes influence the wear behaviour. Therefore, the first aim of this work is to use a commercial FEM package (Abaqus/Explicit) to build a 3D model that captures the wear modes during scratching with indenters of different attack angles and different interfacial strengths. The second aim is to use the developed model to understand how the wear modes might change in the presence of bulk deformation of the work-piece material resulting from the metal forming operation. Finally, the effect of the work-piece material properties, including strain hardening, is examined to understand how these influence the wear modes and wear behaviour. The results show that both strain hardening and substrate deformation can change the critical attack angle at which the wedge formation regime is activated.
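The wear mode diagram's classification logic can be sketched as a simple threshold function over the two axes the abstract names. The boundary curves below are illustrative placeholders chosen only to mimic the diagram's shape (higher interfacial strength lowers the critical attack angles); they are not the published slip-line or upper-bound solutions.

```python
def wear_mode(attack_angle_deg, f):
    """Classify the scratch deformation regime from the asperity attack
    angle (degrees) and the normalised interfacial strength f in [0, 1].

    The two boundary curves are hypothetical stand-ins for the diagram's
    mode boundaries, NOT the published analytical solutions.
    """
    theta_wedge = 25.0 * (1.0 - 0.5 * f)  # ploughing -> wedge boundary (assumed)
    theta_cut = 55.0 * (1.0 - 0.3 * f)    # wedge -> cutting boundary (assumed)
    if attack_angle_deg < theta_wedge:
        return "ploughing"
    if attack_angle_deg < theta_cut:
        return "wedge formation"
    return "cutting"
```

The study's finding can be read in these terms: strain hardening and substrate deformation effectively shift the ploughing-to-wedge boundary, changing the critical attack angle at which wedge formation (and hence galling risk) begins.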

Keywords: finite element, pile-up, scratch test, wear mode

Procedia PDF Downloads 326
1628 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the world's population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the displayed EEG signals. Detecting epileptic seizures by conventional methods is time-consuming, and many methods have evolved to detect them automatically. The initial part of this paper reviews the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. Researchers calculate a number of parameters using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system, and earlier studies have suggested an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to assess effective epileptic seizure control or its elimination altogether.
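Sample entropy, one of the discriminating features the review lists, can be sketched in a few lines. This is a simplified textbook-style variant, not the exact implementation used in the cited studies; the parameter defaults (m = 2, r = 0.2 × standard deviation) are common conventions, assumed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal: -log of the conditional probability
    that subsequences matching for m points (Chebyshev distance <= r)
    also match for m + 1 points. Simplified textbook variant."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # commonly used default tolerance

    def match_count(mm):
        # all overlapping templates of length mm
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = match_count(m)      # matches of length m
    a = match_count(m + 1)  # matches of length m + 1
    if a == 0 or b == 0:
        return float("inf")  # undefined for too-short or too-irregular input
    return -np.log(a / b)
```

Irregular, noise-like activity scores higher than a smooth periodic signal, which is the property that makes the measure useful for separating normal from seizure EEG segments.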

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 402
1627 Development of Coir Reinforced Composite for Automotive Parts Application

Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth

Abstract:

The demand for lightweight and fuel-efficient automobiles has led to the use of fiber-reinforced polymer composites in place of traditional metal parts. Coir, a natural fiber, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices, due to the fiber's moisture content and method of preparation, has been a challenge; to address this, the extracted coir was chemically treated with NaOH. To develop a side-view mirror encasement, the mechanical effects of fiber percentage composition, fiber length, and epoxy percentage composition in a coir-fiber-reinforced composite were investigated. Polyester was adopted as the resin for the mold, epoxy as the resin for the product, and coir served as the filler material. Specimens with fiber loadings of 15, 30, and 45%, fiber lengths of 10, 15, 20, 30, and 45 mm, and epoxy resin weight fractions of 55, 70, and 85% were fabricated using the hand lay-up technique and subjected to mechanical tests (tensile, flexural, and impact). The results showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and a coir length of 45 mm, which was used for the development of a vehicle side-view mirror encasement. The optimal response parameters are a tensile strength of 49.333 MPa, a flexural strength of 57.118 MPa, an impact strength of 34.787 kJ/m², a Young's modulus of 4.788 GPa, a load of 4.534 kN, and an elongation of 20.483 mm. The models developed using Design-Expert software showed that the input factors can achieve the response parameters with 94% desirability. The study showed that coir is durable enough as a filler material in an epoxy composite for automobile applications, and that fiber loading and length have a significant effect on the mechanical behavior of coir-fiber-reinforced epoxy composites. Coir's low density, considerable tensile strength, and biodegradability contribute to its eco-friendliness and its potential for reducing the environmental hazards of synthetic automotive components.
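A first-order feel for how fiber loading shifts stiffness comes from the Voigt rule of mixtures. The moduli in the example below are rough handbook-style assumptions for coir and epoxy, not the study's measured values, and the Voigt form is an upper bound rather than the authors' regression model.

```python
def rule_of_mixtures(vf, e_fiber, e_matrix):
    """Voigt (iso-strain) upper-bound estimate of a composite's Young's
    modulus from fiber volume fraction vf and constituent moduli."""
    if not 0.0 <= vf <= 1.0:
        raise ValueError("fiber volume fraction must lie in [0, 1]")
    return vf * e_fiber + (1.0 - vf) * e_matrix

# Illustrative values (GPa), assumed, not measured in the study:
# coir E ~ 5, epoxy E ~ 3.5, 45% fiber content
estimate = rule_of_mixtures(0.45, 5.0, 3.5)
```

This gives roughly 4.2 GPa, in the same range as the reported 4.788 GPa Young's modulus, although the study's 45% loading is a weight fraction and hand lay-up composites rarely reach the Voigt bound.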

Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test

Procedia PDF Downloads 61
1626 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
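The modelling step, mapping selected NIR wavelengths to gel strength, can be sketched with a compact kernel ridge regression in plain NumPy. This is a stand-in with similar behaviour to the SVM regressor, not the authors' UVE-SVM pipeline, and the kernel choice, hyperparameters, and synthetic data are all assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma=10.0, lam=1e-3):
    """Kernel ridge regression on (samples x features) data.
    Returns a predict function mapping new spectra to the target."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda X_new: rbf_kernel(X_new, X, gamma) @ alpha
```

In the paper's workflow, this regression would follow wavelength selection (e.g., UVE) on the preprocessed spectra; here a smooth synthetic target stands in for measured gel strength.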

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 32
1625 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources

Authors: Alana E. Jansen

Abstract:

With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, findings on its effects have conflicted, particularly when composition is examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies left open by various models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. The study used a virtual interview design in which participants both rated and described their experiences, in one high-performing and one low-performing team, across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed that combined Likert-style ratings with exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 completed interviews. The analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. This research is likely to contribute to both theory and practice: the preliminary findings suggest a range of demands and resources that vary between high- and low-performing teams, factors that may play an important role in team effectiveness research going forward. The findings may help explain some of the more complex interactions between factors in the team environment, making further progress toward understanding why only some teams achieve high-performance status.

Keywords: diversity, high-performing teams, job demands and resources, team effectiveness

Procedia PDF Downloads 185
1624 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires collecting high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automating building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform enables (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies demonstrating the platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. The platform's adaptive and flexible architecture makes it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
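The poll-decide-actuate loop behind steps (a) and (e) can be sketched minimally. SensorReading and the deadband rule below are illustrative placeholders, not ORNL's actual data schema or model predictive controller; a real deployment would replace the rule with the optimization algorithms the platform adapts in step (c).

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Placeholder for one IoT data point; an assumed schema, not ORNL's."""
    zone: str
    temp_c: float

def next_command(reading, target_c=22.0, deadband=0.5):
    """Stand-in for step (e): map the latest reading to an actuator command.
    A simple deadband rule substitutes for the model predictive controller."""
    if reading.temp_c > target_c + deadband:
        return "cool"
    if reading.temp_c < target_c - deadband:
        return "heat"
    return "hold"
```

In the digital twin, the same interface would be driven by the web services of step (d), so the simulated building and the physical one receive identical commands.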

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 103
1623 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19, mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field suggests that artificial neural networks can nevertheless diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays labeled COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 separate images. An important feature of the training images is that they were cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it was trained on 8,577 images with a 20% validation split. The models' accuracy, precision, recall, F1-score, IoU, and loss were calculated on external validation data. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to improve the experience of medical professionals and provide insight into the future of the methods used.
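The IoU figure reported for the segmentation model (0.928) is straightforward to compute from binary masks; a minimal NumPy sketch of the metric (the empty-mask convention is an assumption, as conventions vary):

```python
import numpy as np

def iou(mask_pred, mask_true):
    """Intersection-over-Union between two binary lung masks."""
    p = np.asarray(mask_pred).astype(bool)
    t = np.asarray(mask_true).astype(bool)
    union = np.logical_or(p, t).sum()
    if union == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return float(np.logical_and(p, t).sum() / union)
```

Averaging this score over the validation masks predicted by the U-Net is what yields a summary figure like the 0.928 reported.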

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 126
1622 The Reasons for Food Losses and Waste and the Trends of Their Management in Basic Vegetal Production in Poland

Authors: Krystian Szczepanski, Sylwia Łaba

Abstract:

Production of fruit and vegetables, food cereals or oilseeds affects the natural environment through the uptake of nutrients contained in the soil and the use of water, fertilizers, plant protection products, and energy. Limiting these effects requires the introduction of environmentally friendly cultivation techniques, measures counteracting losses and waste of agricultural raw materials, and appropriate management of food waste at every stage of the agri-food supply chain. Basic production comprises obtaining a vegetal raw material, its storage on the farm, and its transport to a collecting point. The starting point is when the plants are ready to be harvested; the stage before harvesting is not considered in the system of measuring and monitoring food losses. The successive stage is the transport of the collected crops to the collecting point, or their storage and transport. The moment the raw material enters processing, i.e., its receipt at the gate of the processing plant, is considered the end point of basic production; processing is understood as the transformation of the raw material into food products. According to Regulation (EC) No 178/2002 of the European Parliament and of the Council of 28 January 2002, Art. 2, “food” means any substance or product intended to be, or reasonably expected to be, consumed by humans. For the purposes of this study, the raw material is considered food from the moment the harvested plants (fruit, vegetables, cereals, oilseeds) arrive at storehouses. The aim of the study was to determine the reasons for loss generation and to analyze the directions of loss management in basic vegetal production in Poland in 2017 and 2018. The studies on food losses and waste in basic vegetal production were carried out in three sectors: fruit and vegetables, cereals, and oilseeds. They were conducted during March–May 2019 across the whole country on a representative sample of 250 farms in each sector. 
The surveys were carried out by the PAPI (Paper & Pen Personal Interview) method, with pollsters conducting direct questionnaire interviews. The studies show that in 19% of the examined farms, no losses were recorded during preparation, loading, and transport of the raw material to the manufacturing plant. In the farms where losses were indicated, the main reason in fruit and vegetable production was rotting, constituting more than 20% of the reported reasons, while in cereal and oilseed production, respondents identified damage, moisture, and pests as the most frequent reasons. The losses and waste generated in vegetal production, as well as in the processing and trade of fruit, vegetables, and cereal products, should be appropriately managed or recovered. Respondents indicated composting (more than 60%) as the main direction of waste management in all categories; animal feed and landfill were the other indicated directions. Prevention and minimization of loss generation are important at every stage of production, including basic production. With knowledge of the reasons for loss generation, preventive measures can be introduced, mainly concerning appropriate storage conditions and methods. 
Acknowledgement: The article was prepared within the project “Development of a waste food monitoring system and an effective program to rationalize losses and reduce food waste” (acronym PROM), implemented under the STRATEGIC SCIENTIFIC AND LEARNING PROGRAM - GOSPOSTRATEG, financed by the National Center for Research and Development in accordance with the provisions of Gospostrateg1/385753/1/2018.

Keywords: food losses, food waste, PAPI method, vegetal production

Procedia PDF Downloads 112
1621 Standardizing and Achieving Protocol Objectives for Chest Wall Radiotherapy Treatment Planning Using an O-Ring Linac in High-, Low-, and Middle-Income Countries

Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun

Abstract:

Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results shown in the Dose Volume Histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a fully image-guided device, since delivering each radiotherapy session requires a Mega-Voltage Cone Beam Computed Tomography (MVCBCT) scan. Each MVCBCT delivers 9 MU, which is taken into account during planning. To start the standardization, the TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. Plans were developed using 4 semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the difficulty for the planner was the base dose contributed by the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves are clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone beam CT is created using MV energy, the dose delivered by the daily imaging setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.

Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning

Procedia PDF Downloads 112
1620 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships

Authors: Elizabeth Latham

Abstract:

Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what can be known about the exact social nature of these relationships and whether they mimic the interactions recounted in the Bible; if a worldwide majority believes that the Christian Bible is a true account of God's interactions with mankind, it is reasonable to assume that the interactions between God and these people would be similar to the ones in the Bible. This analysis required an unusual method of biblical analysis: game theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the Bible to set up "games" using economics-style matrices featuring each player's motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar "games" created from interviews with people subscribing to different levels of Christianity, ranging from barely practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the Bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as a social interaction: a "game" to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the "games" between God and man in the Bible.
This similarity was referred to in the study as "biblical faith," and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the amount of game-similarity present did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education, and scriptural knowledge, and those who had biblical faith were not necessarily the ones who knew the Bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture: a surprising insight with large implications for the practice and preservation of religion.
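As a schematic illustration of the matrices described above (the action names and payoffs below are hypothetical, not taken from the study), a divine-human interaction can be encoded as a payoff table and probed for each player's best response:

```python
# Toy 2x2 "game" for a divine-human interaction. Rows: mortal's actions;
# columns: God's responses. Payoff tuples are (mortal, God) -- illustrative only.
payoffs = {
    ("obey", "bless"): (3, 3),
    ("obey", "test"):  (1, 2),
    ("defy", "bless"): (2, 0),
    ("defy", "test"):  (0, 1),
}

def best_response(player, other_action):
    """Return the action maximizing the given player's payoff
    against a fixed action of the other player."""
    idx = 0 if player == "mortal" else 1
    actions = {"mortal": ["obey", "defy"], "god": ["bless", "test"]}[player]
    def payoff(action):
        key = (action, other_action) if player == "mortal" else (other_action, action)
        return payoffs[key][idx]
    return max(actions, key=payoff)

print(best_response("mortal", "bless"))  # obey
```

Comparing such best responses across biblical and interview-derived matrices is one way to quantify the "game-similarity" the study reports.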

Keywords: bible, Christianity, game theory, social psychology

Procedia PDF Downloads 149
1619 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher institutions can benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in rates of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, given the complexity of the current student body in a changing world, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed. The main purpose of this study was to identify such opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, including in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of 501 third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The results showed that in-classroom interactions have a substantial direct influence on students' character development outcomes (r = .75, p < .05).
In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but indirectly. The study concluded that the classroom offers many opportunities for teachers to teach, model, and integrate character development among their students. Thus, public colleges and universities are advised to deliberately implement and strengthen experiences that cultivate character within the classroom. These may contribute substantially to students' character development outcomes and hence render effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 133
1618 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital Thailand

Authors: Thippharat Wongsilarat, Parichat Tuntilanon, Chonlakan Prataksitorn

Abstract:

Recurrent adverse drug reactions are harmful to patients, causing mild to fatal illnesses, and affect not only patients but also their relatives and organizations. The objective was to compare the safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentage). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak and strong points in the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidences, respectively, were found. Of these, 379, 279, and 302 incidences, or 69.40, 84.80, and 60.64 percent of the patients with a drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences in practice in caring for patients with a drug allergy history were found that did not cover all steps of the patient care process, especially a lack of repeated checking and a lack of communication between multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, physicians who prescribe the drugs, and pharmacists. Also included in the system were surveillance, nursing, recording, and linking the data to referring units.
There was also training by pharmacists concerning adverse drug reactions, monthly meetings to explain the process to practice personnel, creation of a safety culture, random checking of practice, motivational encouragement, supervision, control, follow-up, and evaluation of practice. The rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent adverse drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service provision points can ensure the safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 453
1617 The Importance of Clinical Pharmacy and Computer-Aided Drug Design

Authors: Peter Edwar Mortada Nasif

Abstract:

The use of CAD (Computer-Aided Design) technology is ubiquitous in the architecture, engineering, and construction (AEC) industry. This has led to its inclusion in the curriculum of architecture schools in Nigeria as an important part of the training module. This article examines the ethical issues involved in implementing CAD content in the architectural education curriculum. Drawing on existing literature, this study begins with the benefits of integrating CAD into architectural education and the responsibilities of different stakeholders in the implementation process. It also examines issues related to the negative use of information technology and the perceived negative impact of CAD use on design creativity. Using a survey method, data from the architecture department of Chukwuemeka Odumegwu Ojukwu University, Uli, were collected to serve as a case study of how the issues raised are being addressed. The article draws conclusions on what ensures successful ethical implementation. Millions of people around the world suffer from hepatitis C, one of the world's deadliest diseases. Interferon (IFN) is a treatment option for patients with hepatitis C, but such treatments have side effects. Our research focused on developing an oral small-molecule drug that targets hepatitis C virus (HCV) proteins and has fewer side effects. Our current study aims to develop a drug based on a small-molecule antiviral specific to HCV. Drug development using laboratory experiments is not only expensive but also time-consuming. Instead, in this in silico study, we used computational techniques to propose a specific antiviral drug for the protein domains found in the hepatitis C virus. This study used homology modeling and ab initio modeling to generate the 3D structures of the proteins and then identified binding pockets in the proteins.
Acceptable ligands for the pockets were developed using the de novo drug design method. Pocket geometry is taken into account when designing the ligands. Among the various ligands generated, a new ligand specific to each of the HCV protein domains has been proposed.
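As a minimal illustration of the kind of screening step used alongside de novo ligand generation (not the study's actual workflow), the sketch below filters hypothetical candidates with Lipinski-style rules; the ligand names and descriptor values are invented for demonstration:

```python
# Hypothetical generated ligands with precomputed descriptors:
# name: (molecular weight, logP, H-bond donors, H-bond acceptors)
candidates = {
    "lig_A": (320.4, 2.1, 2, 5),
    "lig_B": (612.8, 5.9, 6, 11),   # too heavy and lipophilic
    "lig_C": (455.1, 4.2, 1, 7),
}

def passes_lipinski(mw, logp, hbd, hba):
    """Rule of five: MW <= 500, logP <= 5, <= 5 donors, <= 10 acceptors."""
    return mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10

kept = [name for name, desc in candidates.items() if passes_lipinski(*desc)]
print(kept)  # ['lig_A', 'lig_C']
```

In practice such drug-likeness filters are applied before the more expensive pocket-geometry and docking evaluations.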

Keywords: drug design, anti-viral drug, in-silico drug design, hepatitis C virus, computer-aided design, CAD education, education improvement

Procedia PDF Downloads 15
1616 Learners' Attitudes and Expectations towards Digital Learning Paths

Authors: Eirini Busack

Abstract:

Since the outbreak of the Covid-19 pandemic and the sudden transfer to online teaching, teachers have struggled to reconstruct their teaching and learning materials to adapt them to the new reality of online teaching and learning. Consequently, pupils' learning was disrupted during this orientation phase. Given this situation, teachers from all fields concluded that it is vital that their pupils be able to continue learning even without the teacher being physically present. Various websites and applications have been in use since then, in the hope that pupils would still enjoy a quality education; unfortunately, this was often not the case. To address this issue, it was decided to focus the research on the development of digital learning paths. The fundamentals of these learning paths include the implementation of scenario-based learning (digital storytelling) and the integration of media-didactic theory to make them pedagogically appropriate for learners, alongside instructional design knowledge and the drive to promote autonomous learners. This research is being conducted within the frame of the research project "Sustainable integration of subject didactic digital teaching-learning concepts" (InDiKo, 2020-2023), which is conducted at the University of Education Karlsruhe and investigates how pre-service teachers can acquire the necessary interdisciplinary and subject-specific media-didactic competencies to provide their future learners with digitally enhanced learning opportunities, and how these competencies can be developed continuously and sustainably.
As English is one of the subjects involved in this project, the English Department prepared a seminar for pre-service secondary teachers: "Media-didactic competence development: Developing learning paths & Digital Storytelling for English grammar teaching." During this seminar, the pre-service teachers plan and design a Moodle-based differentiated lesson sequence on an English grammar topic that is then tested by secondary school pupils. The focus of the present research is to assess secondary school pupils' expectations of an English grammar-focused digital learning path created by pre-service English teachers. The nine digital learning paths to be distributed to 25 pupils were produced over the winter and the current summer semester as the artifact of the seminar. Finally, the data to be quantitatively analysed and interpreted derive from online questionnaires filled in by the secondary school pupils to reveal their expectations of what they perceive as a stimulating and thus effective grammar-focused digital learning path.

Keywords: digital storytelling, learning paths, media-didactics, autonomous learning

Procedia PDF Downloads 77
1615 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED-6215) for real-time ECG monitoring. There are many methods to make a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are difficult to disperse uniformly in elastomer compared with CNFs, and silver nanowires are relatively high-cost and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode part was made by vortex mixing and baking in a prepared mold. To optimize electrical performance and dispersion uniformity, we developed a unique mixing and baking process. Secondly, the sticky patch part was made by patterning and detaching from a smooth-surface substrate after spin-coating a soft skin adhesive. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily skin-mountable, and directly connectable to the system. To evaluate the electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested while varying the CNF concentration and the thickness of the dry electrode. The results showed that CNF concentration and dry electrode thickness were important variables for obtaining high-quality ECG signals without incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored simultaneously.
In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
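The line-noise problem mentioned above can be illustrated with a minimal sketch: averaging over exactly one period of the power-line frequency nulls that component while largely preserving slow cardiac content. The sampling rate, line frequency, and synthetic signal below are illustrative assumptions, not the device's actual parameters:

```python
import math

fs, f_line = 500, 50        # assumed sampling rate (Hz) and line frequency (Hz)
period = fs // f_line       # 10 samples per 50 Hz cycle

# Synthetic 1-second signal: a slow "cardiac" component plus 50 Hz line noise
t = [i / fs for i in range(fs)]
signal = [math.sin(2 * math.pi * 1.2 * x) + 0.5 * math.sin(2 * math.pi * f_line * x)
          for x in t]

def comb_filter(x, n):
    """Moving average over n samples; its frequency response has a null
    at fs/n (here 50 Hz), cancelling the line-noise component."""
    return [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

clean = comb_filter(signal, period)
```

A moving average is the crudest such filter (it also slightly attenuates and delays the cardiac band); a practical system would use a tuned notch filter, but the nulling principle is the same.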

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 183
1614 Teaching Kindness as Moral Virtue in Preschool Children: The Effectiveness of Picture-Storybook Reading and Hand-Puppet Storytelling

Authors: Rose Mini Agoes Salim, Shahnaz Safitri

Abstract:

The aim of this study is to test the effectiveness of teaching kindness to preschool children using several techniques. Kindness is a physical act or emotional support aimed at building or maintaining relationships with others. Kindness is known to be essential in the development of moral reasoning to distinguish between good and bad. In this study, kindness is operationalized as several acts, including helping friends, comforting sad friends, inviting friends to play, protecting others, sharing, saying hello, saying thank you, encouraging others, and apologizing. Kindness is crucial to develop in preschool children because this is when children begin to interact with their social environment through play. Furthermore, preschool children's cognitive development allows them to begin representing the world with words, which then allows them to interact with others. On the other hand, preschool children's egocentric thinking means they still need to learn to consider another person's perspective. In relation to social interaction, preschool children need to be stimulated and assisted by adults to pay attention to others and act with kindness toward them. In teaching kindness to children, the quality of interaction between children and their significant others is the key factor. It is known that preschool children learn about kindness by imitating adults in two-way interaction. Specifically, this study examines two teaching techniques that parents can use to teach kindness, namely picture-storybook reading and hand-puppet storytelling. These techniques were examined because both activities are easy to do and both provide the child with a model of behavior based on the character in the story. To examine the effectiveness of these techniques in teaching kindness, two studies were conducted.
Study I involved 31 children aged 5-6 years with the picture-storybook reading technique, where the intervention consisted of reading 8 picture books over 8 days. In Study II, the hand-puppet storytelling technique was examined with 32 children aged 3-5 years. The treatments' effectiveness was measured using an instrument in the form of nine colored cards that describe kindness behaviors. Data analysis using the Wilcoxon signed-rank test shows a significant difference in the average kindness score (p < 0.05) before and after the intervention. For daily observation, a "kindness tree" and observation sheets filled out by the teacher were used. Two weeks after the interventions, the improvement in all measured kindness behaviors remained intact. The same result was also obtained from both the "kindness tree" and the observation sheets.
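The pre/post comparison described above can be sketched with SciPy's Wilcoxon signed-rank test; the paired scores below are invented for illustration and are not the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical pre/post kindness scores for 10 children (card-based scale)
before = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
after  = [5, 6, 4, 7, 5, 6, 4, 4, 6, 5]

# Paired, non-parametric test on the score differences
stat, p = wilcoxon(before, after)
print(p < 0.05)  # True: every child improved, so the difference is significant
```

The Wilcoxon test is the appropriate choice here because the kindness scores are ordinal and the samples are small, so a paired t-test's normality assumption would be hard to justify.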

Keywords: kindness, moral teaching, storytelling, hand puppet

Procedia PDF Downloads 249
1613 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery

Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek

Abstract:

Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Usually, bio-composite materials are made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor, and control the potential risks. Indeed, the goal is to define the major risks in terms of human health, material quality, and the environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify hazards in the first stage of design, when hazards and associated risks are not yet well known. The identified hazards were analyzed to define the potential associated risks, which were then evaluated using qualitative Event Tree Analysis (ETA). ETA is a logical methodology used to define the consequences of a specific hazardous incident, evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as quantitative microbial risk assessment (QMRA) and quantitative chemical risk assessment (QCRA).
These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds such as heavy metals in the bio-composite materials. Due to these contaminations, the bio-composite product might, during its application, release toxic substances into the environment, leading to a negative environmental impact. Therefore, leaching tests are planned to simulate the application of these materials in the environment and to evaluate the potential leaching of inorganic substances, assessing the environmental risk.
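The qualitative ETA step described above has a simple quantitative analogue: multiplying probabilities along each branch of the event tree. The sketch below uses invented initiating-event and barrier-failure probabilities, not values from the study:

```python
# Illustrative event tree: an initiating event (e.g., a contaminated batch)
# passes through two safety barriers; each branch multiplies probabilities
# along the path to a final scenario.
p_initiating = 1e-2          # hypothetical initiating-event probability
barriers = [0.1, 0.2]        # hypothetical barrier failure probabilities

def event_tree(p_init, failure_probs):
    """Enumerate end-state probabilities for all barrier success/fail paths."""
    scenarios = {(): p_init}
    for p_fail in failure_probs:
        scenarios = {
            path + (outcome,): p * (p_fail if outcome == "fail" else 1 - p_fail)
            for path, p in scenarios.items()
            for outcome in ("success", "fail")
        }
    return scenarios

tree = event_tree(p_initiating, barriers)
worst = tree[("fail", "fail")]   # both barriers fail: 0.01 * 0.1 * 0.2 = 2e-4
```

The end-state probabilities sum back to the initiating-event probability, which is a handy consistency check on any tree built this way.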

Keywords: bio-composite, risk assessment, water reuse, resource recovery

Procedia PDF Downloads 105
1612 Reconceptualizing "Best Practices" in the Public Sector

Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani

Abstract:

Public sector managers frequently herald that implementing best practices as a set of standards may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is still remains a black box yet to be investigated, given the trend of continuous change in public sector performance and the burgeoning interest in sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, such as benchmarkers, during the benchmarking-mediated performance improvement process, and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches to the nature of best practices when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarkers' management of best practices in a public organization so as to map their theories-in-use. As a second step, we contextualized best administrative practices by reflecting the different perspectives that emerged from the previous stage in the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with "best practice" process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that the needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work.
Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol theoretically informed in the first part to spot causal mechanisms suggested by previous research studies and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that shed light on explaining their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.

Keywords: benchmarking, action research, critical realism, best practices, public sector

Procedia PDF Downloads 122
1611 Development of a Journal over 20 Years: Citation Analysis

Authors: Byung Lee, Charles Perschau

Abstract:

This study analyzes the development of a communication journal, the Journal of Advertising Education (JAE) over the past 20 years by examining citations of all research articles there. The purpose of a journal is to offer a stable and transparent forum for the presentation, scrutiny, and discussion of research in a targeted domain. This study asks whether JAE has fulfilled this purpose. The authors and readers who are involved in a journal need to have common research topics of their interest. In the case of the discipline of communication, scholars have a variety of backgrounds beyond communication itself since the social scientific study of communication is a relatively recent development, one that emerged after World War II, and the discipline has been heavily indebted to other social sciences, such as psychology, sociology, social psychology, and political science. When authors impart their findings and knowledge to others, their work is not done in isolation. They have to stand on previous studies, which are listed as sources in the bibliography. Since communication has heavily piggybacked on other disciplines, cited sources should be as diverse as the resources it taps into. This paper analyzes 4,244 articles that were cited by JAE articles in the past 36 issues. Since journal article authors reveal their intellectual linkage by using bibliographic citations, the analysis of citations in journal articles will reveal various networks of relationships among authors, journal types, and fields in an objective and quantitative manner. The study found that an easier access to information sources because of the development of electronic databases and the growing competition among scholars for publication seemed to influence authors to increase the number of articles cited even though some variations existed during the examined period. The types of articles cited have also changed. 
Authors have more often cited journal articles, periodicals (most of them available online), and website sources, while decreasing their dependence on books, conference papers, and reports. To provide a forum for discussion, a journal needs a common topic or theme. This is realized when an author writes an article about a topic and that article is cited and discussed in another article. Thus, the citation of articles within the same journal is vital for a journal to form a forum for discussion. JAE has gradually increased its citations of in-house articles, with a few fluctuations over the years. The study also examines not only which specific articles are often cited, but also which specific authors. The analysis of citations shows how JAE has developed into a full academic journal while offering a communal forum, even though the speed of its formation is not as fast as desired, probably because of its interdisciplinary nature.
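The two tallies at the heart of this analysis, the mix of cited source types and the rate of in-house (JAE-to-JAE) citation, can be sketched in a few lines; the records below are invented for illustration, not actual JAE data:

```python
from collections import Counter

# Hypothetical citation records: (citing year, cited source type, cited journal)
citations = [
    (2001, "journal", "Journal of Advertising"),
    (2001, "book", None),
    (2010, "journal", "Journal of Advertising Education"),
    (2010, "web", None),
    (2019, "journal", "Journal of Advertising Education"),
    (2019, "journal", "Journalism Quarterly"),
]

# Mix of cited source types (journal articles vs. books vs. web sources)
type_counts = Counter(kind for _, kind, _ in citations)

# In-house citation rate: the share of citations pointing back to JAE itself,
# the study's measure of whether the journal forms its own forum for discussion
in_house = sum(1 for _, _, j in citations if j == "Journal of Advertising Education")
in_house_rate = in_house / len(citations)
print(type_counts, in_house_rate)
```

Computed per issue rather than over a single pooled list, the same two counts yield the time trends the study reports.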

Keywords: citation, co-citation, the Journal of Advertising Education, development of a journal

Procedia PDF Downloads 152
1610 Forecasting Regional Data Using Spatial Vars

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in a SpVAR, the values of indicators at time t may depend on the values of explanatory variables in neighboring regions at the same time t and on the values of explanatory variables in neighboring regions at time t-k. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables. The lags of GRP, CPI, and the unemployment rate were used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a "naïve" model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1.
According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, the GRP lags in neighboring regions also become significant at the 5% significance level. For both the spatial and "naïve" VARs, the RMSE was calculated. The minimum RMSE is obtained via the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
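A minimal sketch of the second SpVAR specification (neighbors' values entering at t-1) might look as follows, using simulated data and a toy four-region adjacency matrix rather than the study's Russian regional panel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adjacency matrix for 4 regions (1 = shared border), row-normalized so
# the spatial lag is the average of each region's neighbors
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

# Simulated panel: one endogenous indicator (e.g. GRP growth) for 4 regions, T periods
T = 50
y = rng.normal(size=(T, 4))

# Regressors for region i at time t: its own lag y[t-1, i] and the spatial lag,
# i.e. the neighbors' (weighted) values at t-1
own_lag = y[:-1]               # shape (T-1, 4)
spatial_lag = y[:-1] @ W.T     # shape (T-1, 4): W y_{t-1} for each t
target = y[1:]                 # shape (T-1, 4)

# Stack regions and estimate by pooled OLS
X = np.column_stack([np.ones((T - 1) * 4),
                     own_lag.ravel(),
                     spatial_lag.ravel()])
beta, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)

# In-sample RMSE, the criterion used to compare SpVAR against the naive VAR
rmse = np.sqrt(np.mean((target.ravel() - X @ beta) ** 2))
print(beta, rmse)
```

The "naïve" VAR of the comparison is the same regression with the spatial-lag column dropped; comparing the two RMSEs reproduces the model-selection step described above.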

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 140
1609 Composite Materials from Beer Bran Fibers and Polylactic Acid: Characterization and Properties

Authors: Camila Hurtado, Maria A. Morales, Diego Torres, L.H. Reyes, Alejandro Maranon, Alicia Porras

Abstract:

This work presents the physical and chemical characterization of beer bran fibers and the properties of novel composite materials made of these fibers and polylactic acid (PLA). Treated and untreated fibers were physically characterized in terms of their moisture content (ASTM D1348), density, and particle size (ASAE S319.2). A chemical analysis following TAPPI standards was performed to determine the ash, extractives, lignin, and cellulose content of the fibers. Thermal stability was determined by TGA analysis, and FTIR was carried out to check the influence of the alkali treatment on fiber composition. An alkali treatment of the fibers with NaOH (5%) was performed for 90 min with the objective of improving interfacial adhesion with the polymeric matrix in composites. Composite materials based on either treated or untreated beer bran fibers and PLA were developed and characterized in tension (ASTM D638), bending (ASTM D790), and impact (ASTM D256). Before composite manufacturing, PLA and beer bran fibers (10 wt.%) were mixed in a twin extruder with a temperature profile between 155°C and 180°C. Coupons were manufactured by compression molding (110 bar) at 190°C. The physical characterization showed that the alkali treatment does not affect the moisture content (6.9%) or the density (0.48 g/cm³ for the untreated fiber and 0.46 g/cm³ for the treated one). The chemical and FTIR analyses showed a slight decrease in ash and extractives. Decreases of 47% and 50% in lignin and hemicellulose content were also observed, coupled with an increase of 71% in cellulose content. Fiber thermal stability was improved by about 10°C with the alkali treatment. The tensile strength of the composites was found to be between 42 and 44 MPa, with no statistically significant difference between coupons with treated and untreated fibers. However, compared to neat PLA, composites with beer bran fibers present a decrease in tensile strength of 27%.
The Young's modulus increases by 10% with treated fibers compared to neat PLA. Flexural strength decreases in coupons with treated fibers (67.7 MPa), while the flexural modulus increases (3.2 GPa), compared to neat PLA (83.3 MPa and 2.8 GPa, respectively). Izod impact test results showed an improvement of 99.4% in coupons with treated fibers compared with neat PLA.
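The comparisons against neat PLA reduce to simple relative changes; a brief sketch using the flexural values reported above:

```python
# Reported flexural properties for neat PLA and the treated-fiber composite
neat_pla = {"flexural_strength_MPa": 83.3, "flexural_modulus_GPa": 2.8}
treated  = {"flexural_strength_MPa": 67.7, "flexural_modulus_GPa": 3.2}

def pct_change(new, old):
    """Percent change relative to the neat-PLA baseline."""
    return 100.0 * (new - old) / old

for prop in neat_pla:
    print(prop, round(pct_change(treated[prop], neat_pla[prop]), 1))
# Flexural strength drops about 18.7%, while flexural modulus rises about 14.3%.
```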

Keywords: beer bran, characterization, green composite, polylactic acid, surface treatment

Procedia PDF Downloads 123
1608 Professional Development in EFL Classroom: Motivation and Reflection

Authors: Iman Jabbar

Abstract:

Within the scope of professionalism, and in order to compete in the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In the field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, teachers should endeavor to enhance students' skills such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his or her competence and performance. Reflective practice, an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students' learning, as it affects the quality of learning English in the classroom and helps determine success or failure as well as language achievement. This is a qualitative study grounded in the interpretive perspectives of teachers and students regarding the process of professional development. The study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies that they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers' professional context? The study is organized into two parts, theoretical and practical.
The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development by providing definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study, since it deals with two issues. It covers the research design, methodology, and methods of data collection, sampling, and data analysis, and it ends with an overall discussion of the findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.

Keywords: professional development, motivation, reflection, learning

Procedia PDF Downloads 443
1607 The Effect of Combined Fluid Shear Stress and Cyclic Stretch on Endothelial Cells

Authors: Daphne Meza, Louie Abejar, David A. Rubenstein, Wei Yin

Abstract:

Endothelial cell (EC) morphology and function are highly impacted by the mechanical stresses these cells experience in vivo, and any change in the mechanical environment can trigger pathological EC responses. A detailed understanding of the EC morphological response and function upon subjection to individual and simultaneous mechanical stimuli is needed for advancement in mechanobiology and preventive medicine. To investigate this, a programmable device capable of simultaneously applying physiological fluid shear stress (FSS) and cyclic strain (CS) has been developed, characterized, and validated. Its validation was performed both experimentally, through tracer tracking, and theoretically, through the use of a computational fluid dynamics model. The effectiveness of the device was evaluated through EC morphology changes under mechanical loading conditions. Changes in cell morphology were evaluated through cell and nucleus elongation, cell alignment, and junctional actin production. The results demonstrated that combined FSS-CS stimulation induced visible changes in EC morphology. Upon simultaneous fluid shear stress and biaxial tensile strain stimulation, cells elongated and generally aligned with the flow direction, with stress fibers highlighted along the cell junctions. The concurrent stimulation by shear stress and biaxial cyclic stretch led to a significant increase in cell elongation compared to untreated cells. This increase, however, was significantly lower than that induced by shear stress alone, indicating that biaxial tensile strain may counteract the elongating effect of shear stress to maintain the shape of ECs. A similar trend was seen in alignment, where the alignment induced by the concurrent application of shear stress and cyclic stretch fell between that induced by shear stress alone and by tensile stretch alone, indicating the opposite roles shear stress and tensile strain may play in cell alignment.
Junctional actin accumulation increased upon shear stress applied alone or simultaneously with tensile stretch. Tensile stretch alone did not change junctional actin accumulation, indicating the dominant role of shear stress in damaging EC junctions. These results demonstrate that the shearing-stretching device is capable of applying well-characterized dynamic shear stress and tensile strain to cultured ECs. Using this device, the EC response to an altered mechanical environment in vivo can be characterized in vitro.
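The abstract does not specify the device's flow geometry, but for the common parallel-plate configuration the applied shear stress follows directly from the chamber dimensions and flow rate via τ = 6μQ/(wh²); the dimensions and flow rate below are illustrative assumptions, not the device's actual specifications:

```python
# Hedged sketch: wall shear stress in a parallel-plate flow chamber,
# tau = 6 * mu * Q / (w * h^2), for fully developed laminar flow.
# All numeric values here are illustrative, not from the study.
def wall_shear_stress(mu_pa_s, q_m3_s, width_m, height_m):
    """Wall shear stress (Pa) for fully developed flow between parallel plates."""
    return 6.0 * mu_pa_s * q_m3_s / (width_m * height_m ** 2)

# Culture-medium viscosity ~1e-3 Pa*s; hypothetical 10 mm x 0.25 mm channel
tau = wall_shear_stress(1e-3, 1.25e-7, 0.01, 2.5e-4)
print(tau)  # 1.2 Pa, within the typical physiological arterial range
```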

Keywords: cyclic stretch, endothelial cells, fluid shear stress, vascular biology

Procedia PDF Downloads 374
1606 Women Right to Land Entitlement for Gender Equality: Critical Review

Authors: A. Yousuf, M. Iqbal, A. Mir, S. Aziz

Abstract:

This study deals with women’s right to land as a matter of gender equality. The Economic Transformation Initiative, Gilgit-Baltistan (ETI-GB), an ambitious program supported by the International Fund for Agricultural Development (IFAD), a United Nations agency, aims to strengthen the land reform process taking place for the first time in the history of the disputed area of Gilgit-Baltistan (GB), Pakistan. The project lays a foundation for land reforms and land policies in GB. The ETI-GB provides substantive support to the government of GB in developing policy measures and initiatives to promote women’s right to have and to own land, an unconventional step in a very traditional society, and this study discusses and documents people’s responses to the project. The study used mixed methods for data collection. For the qualitative data, content analysis was used to develop a thorough understanding of different types of land reforms across the globe, particularly in South Asia. Theoretical understanding of the literature is essential, as it provides the basis for why land reforms are important and the role they play in eliminating inequality. Focus group discussions were carried out for verification and triangulation of the data. For the quantitative data, a survey was conducted among the people of the region and analyzed. The program is implemented in the Ghizer district of GB. 2,340 households were identified as beneficiaries of newly developed land; among them, 2,285 were headed by men and 55 by women, a significant difference. Despite this great difference, it is a notable achievement that, for the first time in the history of GB, women are going to be entitled to land ownership. GB is a patriarchal society, and many social factors, both cultural and religious, contribute to gender inequality.
In developing countries such as Pakistan, women’s land property rights have not been given proper attention in gender-equality development frameworks. It is argued that the land property rights of women have not been taken into mainstream policymaking in the nation-building process. Consequently, this has generated deprivation of women’s property rights, low income levels, lack of education, and poor health. This paper emphasizes that women in Gilgit-Baltistan, Pakistan, should have proper land property rights, so that gender empowerment can be increased through women’s property rights.

Keywords: gender equality, women right to land ownership, property rights, women empowerment

Procedia PDF Downloads 148
1605 Concept of Using an Indicator to Describe the Quality of Fit of Clothing to the Body Using a 3D Scanner and CAD System

Authors: Monika Balach, Iwona Frydrych, Agnieszka Cichocka

Abstract:

The objective of this research is to develop an algorithm, taking into account material type and body type, that will describe fabric properties and the quality of fit of a garment to the body. One objective of this research is to develop a new algorithm to simulate cloth draping within CAD/CAM software, since existing virtual fitting does not accurately simulate fabric draping behaviour. Part of the research into virtual fitting will focus on the mechanical properties of fabrics. Material behaviour depends on many factors, including the fibre, yarn, manufacturing process, fabric weight, and textile finish. For this study, several fabric types with very different mechanical properties will be selected and evaluated for all of the above fabric characteristics. These fabrics include a thick woven cotton fabric, which is stiff and resists bending, and a woven fabric with elastic content, which stretches and drapes on the body. Within the virtual simulation, the following mechanical properties can be specified: shear, bending, weight, thickness, and friction. To help measure these properties, the KES (Kawabata Evaluation System) can be used; this system was originally developed to measure the mechanical properties of fabric. In this research, the author will focus on three properties: bending, shear, and roughness. This study will consider current research using the KES system to understand and simulate fabric folding on the virtual body. Testing will help determine which material properties have the largest impact on the fit of the garment. By developing an algorithm that factors in body type, material type, and clothing function, it will be possible to determine how a specific type of clothing made from a particular material will fit a specific body shape and size. A fit indicator will display areas of stress on the garment, such as the shoulders, chest, waist, and hips. From this data, CAD/CAM software can be used to develop garments that fit with a very high degree of accuracy.
This research, therefore, aims to provide an innovative solution for garment fitting that will aid in the manufacture of clothing. It will help the clothing industry by cutting the cost of the clothing manufacturing process and by reducing the cost spent on fitting. The manufacturing process can be made more efficient by fitting the garment virtually before the real clothing sample is made. Fitting software could also be integrated into clothing retailers' websites, allowing customers to enter their biometric data and determine how a particular garment and material type would fit their body.
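Since the paper's algorithm is still under development, the fit-indicator logic can only be sketched in hypothetical form; the weights, zone factors, and input values below are invented for illustration and are not the study's model:

```python
# Purely illustrative fit-indicator sketch. Inputs are KES-style measurements
# (bending rigidity, shear stiffness, surface roughness) normalized to [0, 1];
# the weights and per-zone strain factors are hypothetical assumptions.
ZONE_STRAIN = {"shoulders": 0.8, "chest": 0.6, "waist": 0.5, "hips": 0.7}

def fit_stress(bending, shear, roughness, zone):
    """Higher score = more predicted stress between garment and body."""
    stiffness = 0.5 * bending + 0.4 * shear + 0.1 * roughness
    return round(stiffness * ZONE_STRAIN[zone], 3)

# Stiff cotton vs. an elastic blend, evaluated at the shoulders
stiff = fit_stress(0.9, 0.8, 0.4, "shoulders")
elastic = fit_stress(0.2, 0.1, 0.3, "shoulders")
print(stiff, elastic)
```

A real indicator would replace the weighted sum with the draping simulation described above, but the structure (material properties in, per-zone stress scores out) matches the tool the paper envisions.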

Keywords: 3D scanning, fabric mechanical properties, quality of fit, virtual fitting

Procedia PDF Downloads 176
1604 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites

Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner

Abstract:

Life cycle assessment (LCA) is a powerful tool, standardized by the International Organization for Standardization (ISO), that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare decontamination methods, including bioremediation, soil vapor extraction, and excavation with off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could be used to help select the optimal remediation technology. Such a tool, based on the LCA methodology, would consider site conditions together with environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data, so data were collected from previous LCA studies of site remediation technologies. This step identified knowledge gaps and limitations within the project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared are bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. To complete the LCA study, the modelling software SimaPro is being utilized. A sensitivity analysis of the LCA results will also be incorporated to evaluate how parameter variation affects the overall results. Finally, the economic and social impacts associated with each option will be reviewed to understand how they fluctuate at different sites. All the results will then be summarized, and an interactive Excel tool will be developed to help select the best sustainable site remediation technology.
Preliminary LCA results show improved sustainability for each remediation technology compared to the no-remediation option for a gasoline-contaminated site. Sensitivity analyses are now being completed on site parameters, including soil type and transportation distances, to determine how the environmental impacts fluctuate at other contaminated gasoline locations as these parameters vary. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined.
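The selection logic of the planned Excel tool can be sketched as a weighted multi-criteria ranking over the three impact categories; every number and weight below is hypothetical and is not a result of the study:

```python
# Illustrative sketch of a technology-selection step: normalize each impact
# category across candidate technologies, then rank by weighted total.
# All impact values and weights are hypothetical, not the study's LCA results.
technologies = {
    #                      env. impact, cost, social burden (lower is better)
    "bioremediation":      (40.0, 120.0, 2.0),
    "excavation_disposal": (75.0, 200.0, 5.0),
    "no_remediation":      (90.0, 0.0, 9.0),
}
weights = (0.5, 0.3, 0.2)  # hypothetical stakeholder weighting

def normalized_score(values):
    """Weighted sum of category values, each scaled by the category maximum."""
    maxima = [max(t[i] for t in technologies.values()) for i in range(3)]
    return sum(w * v / m for w, v, m in zip(weights, values, maxima))

best = min(technologies, key=lambda k: normalized_score(technologies[k]))
print(best)
```

In the actual tool, the environmental column would come from the SimaPro LCA results, with the economic and social columns supplied per site.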

Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites

Procedia PDF Downloads 56