Search results for: factor models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11547

177 Executive Function and Attention Control in Bilingual and Monolingual Children: A Systematic Review

Authors: Zihan Geng, L. Quentin Dixon

Abstract:

It has been proposed that early bilingual experience confers a number of advantages in the development of executive control mechanisms. Although the literature provides empirical evidence for bilingual benefits, some studies also reported null or mixed results. To make sense of these contradictory findings, the current review synthesizes recent empirical studies investigating bilingual effects on children’s executive function and attention control. The publication time of the studies included in the review ranges from 2010 to 2017. The key search terms were bilingual, bilingualism, children, executive control, executive function, and attention. The key terms were combined within each of the following databases: ERIC (EBSCO), Education Source, PsycINFO, and Social Science Citation Index. Studies involving both children and adults were also included, but the analysis was based only on the data generated by the child groups. The initial search yielded 137 distinct articles. Twenty-eight studies from 27 articles with a total of 3367 participants were finally included based on the selection criteria. The selected studies were then coded in terms of (a) the setting (i.e., the country where the data were collected), (b) the participants (i.e., age and languages), (c) sample size (i.e., the number of children in each group), (d) cognitive outcomes measured, (e) data collection instruments (i.e., cognitive tasks and tests), and (f) statistical analysis models (e.g., t-test, ANOVA). The results show that the majority of the studies were undertaken in Western countries, mainly in the U.S., Canada, and the UK. A variety of languages such as Arabic, French, Dutch, Welsh, German, Spanish, Korean, and Cantonese were involved. In relation to cognitive outcomes, the studies examined children’s overall planning and problem-solving abilities, inhibition, cognitive complexity, working memory (WM), and sustained and selective attention. The results indicate that though bilingualism is associated with several cognitive benefits, the advantages seem to be weak, at least for children. Additionally, the nature of the cognitive measures was found to greatly moderate the results. No significant differences were observed between bilinguals and monolinguals in overall planning and problem-solving ability, indicating that there is no bilingual benefit in the coordination of executive function components at an early age. In terms of inhibition, the mixed results suggest that bilingual children, especially young children, may have better conceptual inhibition measured in conflict tasks, but not better response inhibition measured by delay tasks. Further, bilingual children showed better inhibitory control for bivalent displays, which resembles the process of maintaining two language systems. Null results were obtained for both cognitive complexity and WM, suggesting no bilingual advantage in these two cognitive components. Finally, findings on children’s attention system associate bilingualism with heightened attention control. Together, these findings support the hypothesis of cognitive benefits for bilingual children. Nevertheless, whether these advantages are observable appears to depend highly on the cognitive assessments. Therefore, future research should be more specific about the cognitive outcomes (e.g., the type of inhibition) and should report the validity of the cognitive measures consistently.

Keywords: attention, bilingual advantage, children, executive function

Procedia PDF Downloads 185
176 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a key driver of digital business, according to a new report by Gartner. The last 10 years represent a breakthrough period in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is a marquee term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step towards the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the principal legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 94
175 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles

Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail

Abstract:

In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel-cell based, together with the implementation of transport policies aiming at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system. Nevertheless, the major drivers towards a low-carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increase of public transport and car sharing services/usage) and diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a considerable reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and supporting the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and the environment, underlining to what extent they can contribute to the decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 691685.
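The feedback idea described for ASTRA can be illustrated with a toy system-dynamics loop (this is a minimal sketch, not the ASTRA model): the electric-vehicle share of the fleet lowers transport CO2, while rising electricity prices feed back and damp further uptake. All parameter values and variable names are illustrative assumptions.

```python
# Toy system-dynamics loop: EV share -> CO2 and electricity price -> EV adoption.
# All constants are assumed placeholders, not REFLEX/ASTRA data.

def simulate(years=30, ev_share=0.02, elec_price=0.20):
    co2_per_km_ice, co2_per_km_ev = 160.0, 40.0   # g/km, assumed fleet averages
    km_per_year = 5.0e11                          # vehicle-km per year, assumed constant
    history = []
    for year in range(years):
        co2_mt = km_per_year * (ev_share * co2_per_km_ev +
                                (1 - ev_share) * co2_per_km_ice) * 1e-12  # Mt CO2
        history.append((year, ev_share, co2_mt))
        # feedback: EV demand raises electricity price, which damps adoption
        elec_price *= 1 + 0.5 * ev_share * 0.02
        adoption_rate = 0.15 * (1 - elec_price / 0.60)  # assumed price sensitivity
        ev_share = min(1.0, ev_share * (1 + max(adoption_rate, 0.0)))
    return history

for year, share, co2 in simulate()[::10]:
    print(f"year {year:2d}: EV share {share:5.1%}, CO2 {co2:6.1f} Mt")
```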

Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy

Procedia PDF Downloads 153
174 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method

Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan

Abstract:

The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of in a crude way at dumping sites. The addition of e-waste, which often contains toxic heavy metals, into its waste stream has made the situation more difficult and challenging. Assessment of the generation of e-waste is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying the best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, and thus facilitate sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (compact fluorescent lamp) bulbs, and air conditioner) have been collected from different sources. Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with the stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products’ annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 is 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amount of e-waste generated from seven products is increasing, whereas only one product, the CFL bulb, showed a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enters the recycling business in Bangladesh, which may increase in the near future.
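The end-of-life (EOL) step of such a material flow analysis can be sketched as follows: units sold in year (t − average lifespan) are assumed to enter the waste stream in year t and are converted to mass by an average unit weight. The sales figures, lifespans and unit weights below are placeholder assumptions, not the study's data.

```python
# Minimal EOL/MFA sketch with hypothetical inputs.

sales = {  # units sold per year (hypothetical)
    2008: {"TV": 1.2e6, "Refrigerator": 0.8e6},
    2010: {"TV": 1.6e6, "Refrigerator": 1.0e6},
}
lifespan_years = {"TV": 10, "Refrigerator": 8}         # assumed average service life
unit_weight_t = {"TV": 0.025, "Refrigerator": 0.060}   # tonnes per unit, assumed

def ewaste_tonnes(year):
    """EOL estimate: products sold `lifespan` years earlier become waste now."""
    total = 0.0
    for product, life in lifespan_years.items():
        units = sales.get(year - life, {}).get(product, 0.0)
        total += units * unit_weight_t[product]
    return total

print(f"Estimated e-waste in 2018: {ewaste_tonnes(2018):,.0f} t")
```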

Keywords: Bangladesh, end of life, e-waste, material flow analysis

Procedia PDF Downloads 198
173 Smart Services for Easy and Retrofittable Machine Data Collection

Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum

Abstract:

This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities on a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users: (1) Wear-part and product material condition monitoring and predictive maintenance for sawing processes. The non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and the quality of consumables and materials. Service providers and machine operators can use this data to optimize maintenance and reduce downtime and material waste. (2) Optimization of Overall Equipment Effectiveness (OEE) by monitoring machine activity. The non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and reduce unplanned machine downtime. (3) Estimation of CO2 emissions for connected machines. CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power consumption data. This information supports energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs. Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through retrofitting of existing machines.
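Two of the smart-service calculations named above can be sketched directly from captured machine data: OEE as the product of availability, performance and quality, and CO2 from measured energy consumption. This is a hedged sketch; the timings, part counts and grid emission factor are illustrative assumptions, not project data.

```python
# OEE from captured machine times and CO2 from measured energy (assumed values).

def oee(planned_time_h, downtime_h, ideal_cycle_s, parts_total, parts_good):
    run_time_h = planned_time_h - downtime_h
    availability = run_time_h / planned_time_h
    performance = (ideal_cycle_s * parts_total) / (run_time_h * 3600)
    quality = parts_good / parts_total
    return availability * performance * quality

def co2_kg(energy_kwh, grid_factor_kg_per_kwh=0.4):  # emission factor assumed
    return energy_kwh * grid_factor_kg_per_kwh

print(f"OEE: {oee(8, 1.5, 30, 600, 580):.1%}")   # e.g. one 8 h shift
print(f"CO2: {co2_kg(42.0):.1f} kg")             # e.g. 42 kWh measured
```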

Keywords: smart services, IIoT, IIoT-platform, industrie 4.0, big data

Procedia PDF Downloads 73
172 Negative Perceptions of Ageing Predict Greater Dysfunctional Sleep-Related Cognition Among Adults Aged 60+

Authors: Serena Salvi

Abstract:

Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors’ health and functioning. This set of studies has shown that a negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often shown to be unreliable when compared to objective sleep measures. Investigations focused on self-reported sleep quality among older adults have suggested that this portion of the population tends to accept disrupted sleep if it is believed to be standard for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study aims to investigate a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep among this portion of the population. Data were statistically analysed using SPSS software. Participants were recruited through the online participant recruitment system Prolific. Attention-check questions were included throughout the questionnaire, and consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396). Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. Pearson's correlation coefficient was used for interval variables, independent t-tests for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved particularly relevant for the remit of this study. Regression analyses show that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes towards sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of variance in SLOC scores, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
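The hierarchical (blockwise) regression described here can be sketched as follows: covariates enter in a first block, the ageing-perception score is added in a second block, and the change in R² indicates its unique contribution after the controls. This is a minimal sketch; the data file and column names are hypothetical.

```python
# Hierarchical regression sketch with statsmodels (hypothetical data frame).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sleep_ageing.csv")  # hypothetical file, one row per participant

y = df["DBAS16"]
block1 = sm.add_constant(df[["sleep_quality", "depression", "age"]])
block2 = sm.add_constant(df[["sleep_quality", "depression", "age", "APQB_ER"]])

step1 = sm.OLS(y, block1, missing="drop").fit()
step2 = sm.OLS(y, block2, missing="drop").fit()

print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f} (delta = {step2.rsquared - step1.rsquared:.3f})")
print(step2.params["APQB_ER"])  # sign and size of the ER effect after controls
```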

Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality

Procedia PDF Downloads 103
171 Liposome-Loaded Polysaccharide-Based Hydrogels: Promising Delayed Release Biomaterials

Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita

Abstract:

Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In these formulations, the polysaccharide-based materials are able to provide local delivery of the loaded therapeutic agents, but their delivery can be rapid and not easily time-controllable due, in particular, to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent in their structure. The simplicity of production, their biocompatibility, their size and composition similar to those of cells, the possibility of size adjustment for specific applications, and the ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were doubly crosslinked (ionically using sodium tripolyphosphate or sodium sulphate, and covalently using glutaraldehyde). It has been proven that the liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as a model active substance for the delivery experiments. Multilamellar vesicles (MLV) and small unilamellar vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at long time scales. Liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein compared to systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with more difficulty. This difference comes from the higher quantity of calcein present within the MLVs in relation to their size. Modeling of the release kinetics curves was performed; the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them characterized by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very useful tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
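As an illustration of the kinetics models mentioned above, the Korsmeyer-Peppas equation Mt/M∞ = k·t^n can be fitted to cumulative release data (conventionally only the first ~60% of release). The data points below are made-up placeholders, not the study's measurements.

```python
# Korsmeyer-Peppas fit to hypothetical calcein release data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 4, 8, 12])                        # hours (hypothetical)
release = np.array([0.08, 0.13, 0.20, 0.30, 0.44, 0.55])   # fraction released

def korsmeyer_peppas(t, k, n):
    return k * t**n

(k, n), _ = curve_fit(korsmeyer_peppas, t, release, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")  # for thin films, n <= 0.5 suggests Fickian diffusion
```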

Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides

Procedia PDF Downloads 226
170 Poly(Methyl Methacrylate) Degradation Products and Their in vitro Cytotoxicity Evaluation in NIH3T3 Cells

Authors: Lesly Y Carmona-Sarabia, Luisa Barraza-Vergara, Vilmalí López-Mejías, Wandaliz Torres-García, Maribella Domenech-Garcia, Madeline Torres-Lugo

Abstract:

Biosensors are used in many applications, providing real-time monitoring to treat long-term conditions. Thus, understanding the physicochemical properties and biological side effects on the skin of polymers (e.g., poly(methyl methacrylate), PMMA) employed in the fabrication of wearable biosensors is crucial for the selection of manufacturing materials within this field. PMMA (a hydrophobic, thermoplastic polymer) is commonly employed as a coating material or substrate in the fabrication of wearable devices. The cytotoxicity of PMMA (including residual monomers or degradation products) on the skin, in terms of cells and tissue, must be assessed to prevent possible adverse effects (cell death, skin reactions, sensitization) on human health. Within this work, accelerated aging of PMMA (Mw ~ 15000) through thermal and photochemical degradation was undertaken. The accelerated aging of PMMA was carried out by thermal (200°C, 1 h) and photochemical degradation (UV-Vis, 8-15 d), adapting ISO protocols (ISO-10993-12, ISO-4892-1:2016, ISO-877-1:2009, ISO-188:2011). In addition, in vitro cytotoxicity evaluation of the PMMA degradation products was performed using NIH3T3 fibroblast cells to assess the response of skin tissues (in terms of cell viability) exposed to polymers utilized to manufacture wearable biosensors, such as PMMA. The PMMA (Mw ~ 15000), before and after the accelerated aging experiments, was characterized by thermal gravimetric analysis (TGA), differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS) to determine and verify the successful degradation of this polymer under the specific conditions previously mentioned. The degradation products were characterized through nuclear magnetic resonance (NMR) to identify possible byproducts generated after the accelerated aging. Results demonstrated a weight loss between 1.5-2.2% (TGA thermograms) for PMMA after accelerated aging. The EDS elemental analysis revealed a 1.32 wt.% loss of carbon for PMMA after thermal degradation. These results might be associated with the amount (%) of PMMA degraded after the accelerated aging experiments. Furthermore, NMR detected, among the thermal degradation products, the monomer and methyl formate at low concentrations and a low-molecular-weight radical (·COOCH₃) at higher concentrations. In the photodegradation products, methyl formate was detected at higher concentrations. These results agree with the proposed thermal and photochemical degradation mechanisms found in the literature [1, 2]. Finally, significant cytotoxicity in the NIH3T3 cells was obtained for the thermal and photochemical degradation products, with a decrease in cell viability by > 90% (stock solutions). It is proposed that the presence of byproducts (e.g., methyl formate or radicals such as ·COOCH₃) from the PMMA degradation might be responsible for the cytotoxicity observed in the NIH3T3 fibroblast cells. Additionally, experiments using skin models will be employed for comparison with the NIH3T3 fibroblast cell model.

Keywords: biosensors, polymer, skin irritation, degradation products, cell viability

Procedia PDF Downloads 139
169 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable options to reduce the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not clearly address photovoltaic systems, especially when the systems are arranged in an arrayed format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted on the ground of the wind tunnel and then mounted on the building rooftop. The system consists of 60 panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of each panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressure, and at least 20 samples are recorded for good ensemble-average stability. Each sample is equivalent to a 10-minute time length at full scale. All the scale factors, including time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for design pressure coefficients under a Gumbel distribution. The best linear unbiased estimator (BLUE) method is utilized for the Gumbel parameter identification. A careful moving-average method is also applied in the data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels reveals stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also have different pressure patterns, which corresponds well to the regulations in ASCE 7-16 describing the area division for design values. Several minor observations are found from the parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposal of a regulation revision in the Taiwanese code.
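The extreme-value step described here can be sketched as follows: a Gumbel distribution is fitted to the per-sample peak pressure coefficients, and the design coefficient is taken at the Cook-and-Mayne 78% non-exceedance level. The peak values below are invented, and scipy's maximum-likelihood fit stands in for the BLUE parameter estimation used in the study.

```python
# Gumbel fit of per-sample suction peaks and 78% non-exceedance design value.
import numpy as np
from scipy import stats

peak_cp = np.array([-1.8, -2.1, -1.9, -2.4, -2.0, -2.3, -1.7, -2.2,
                    -2.5, -1.9, -2.0, -2.2, -2.1, -1.8, -2.3, -2.0,
                    -2.4, -1.9, -2.1, -2.2])   # per-sample minima (hypothetical)

loc, scale = stats.gumbel_r.fit(np.abs(peak_cp))                # fit to suction magnitudes
design_cp = -stats.gumbel_r.ppf(0.78, loc=loc, scale=scale)     # Cook-and-Mayne level
print(f"Design net pressure coefficient: {design_cp:.2f}")
```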

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 138
168 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a large amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which type IV pressure vessels are manufactured, the design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process of different tank designs regarding various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically using an Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, and is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of the tank structure. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis; subsequently, machine learning can, for example, be used to identify the optimum directly from the data pool without running additional simulations.
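The analytical pre-processing step mentioned above, predicting winding angle and thickness on the dome, is commonly handled with Clairaut's relation for geodesic winding, sin(α) = r_polar/r, combined with a fibre-continuity approximation for the local layer thickness. This is a hedged sketch under those assumptions, not the authors' implementation; all radii and cylinder-section values are illustrative.

```python
# Geodesic winding angle and approximate layer thickness on a dome (assumed geometry).
import numpy as np

r_polar = 0.05    # polar opening radius [m], assumed
r_cyl = 0.20      # cylinder radius [m], assumed
t_cyl = 0.6e-3    # layer thickness on the cylinder [m], assumed

def winding_angle(r):
    """Clairaut's relation for a geodesic path: sin(alpha) = r_polar / r."""
    return np.arcsin(np.clip(r_polar / r, -1.0, 1.0))

def layer_thickness(r):
    """Fibre-continuity approximation: t(r) * r * cos(alpha) is constant."""
    a_cyl, a_r = winding_angle(r_cyl), winding_angle(r)
    return t_cyl * (r_cyl * np.cos(a_cyl)) / (r * np.cos(a_r))

for r in np.linspace(r_cyl, r_polar * 1.2, 5):
    print(f"r = {r:.3f} m: angle = {np.degrees(winding_angle(r)):5.1f} deg, "
          f"thickness = {layer_thickness(r)*1e3:.2f} mm")
```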

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
167 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method

Authors: Natalie Hoyer

Abstract:

Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend of using large plate thicknesses, especially for long-span bridges. However, these plate thicknesses surpass the application limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires some further modifications. Therefore, these standards cannot be directly applied to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, which must be considered in structural design, material selection, and calculations. To address this issue, the German Centre for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through large-scale component testing. During these large-scale tests, in which a brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. Subsequently, a dynamic load is imposed until crack initiation occurs, replicating realistic conditions akin to a sharp notch resembling a fatigue crack. To stop the dynamic load in time, it is important to precisely determine the point at which the crack transitions from stable to unstable growth. To achieve this, the potential drop measurement method is employed. The proposed paper discusses the choice of measurement method (alternating current potential drop, ACPD, or direct current potential drop, DCPD), presents results from correlations with the FE models created, and proposes a new approach to introducing beach marks into the fracture surface within the framework of potential drop measurement.
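For orientation, the conversion from a DCPD signal to crack size is often done with a closed-form calibration such as Johnson's equation for a centre-cracked plate; the actual bearing geometry would need its own calibration, e.g. from the FE models mentioned above. This is a minimal sketch under that assumption, with all dimensions and voltages invented.

```python
# Johnson's equation sketch: crack half-length from the normalised potential V/V0.
import numpy as np

W = 0.10      # half-width of the plate [m], assumed
y = 0.005     # half the potential-probe spacing [m], assumed
a0 = 0.010    # reference (initial notch) half-length [m], assumed
V0 = 1.00     # potential measured at the reference crack size [arb. units]

def crack_length(V):
    c = np.cosh(np.pi * y / (2 * W))
    k = np.arccosh(c / np.cos(np.pi * a0 / (2 * W)))
    return (2 * W / np.pi) * np.arccos(c / np.cosh((V / V0) * k))

for V in (1.00, 1.05, 1.15, 1.30):
    print(f"V/V0 = {V:.2f}: a = {crack_length(V)*1e3:.2f} mm")
```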

Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop

Procedia PDF Downloads 42
166 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines it can seem like there are enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings, or only described method briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers, with interdisciplinary dialogue facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater ‘distance’ and emphasis on application that computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle cross-disciplinary research problems, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 206
165 Towards Sustainable Evolution of Bioeconomy: The Role of Technology and Innovation Management

Authors: Ronald Orth, Johanna Haunschild, Sara Tsog

Abstract:

The bioeconomy is an inter- and cross-disciplinary field covering a large number and wide scope of existing and emerging technologies. It has great potential to contribute to the transformation of the industrial landscape and ultimately drive the economy towards sustainability. However, the bioeconomy per se is not necessarily sustainable, and technology should be seen as an enabler rather than a panacea for all our ecological, social and economic issues. Therefore, to draw and maximize benefits from the bioeconomy in terms of sustainability, we propose that innovative activities should encompass not only novel technologies and new bio-based materials but also multifocal innovations. For multifocal innovation endeavors, innovation management plays a substantial role, as any innovation emerges in a complex iterative process in which communication and knowledge exchange among relevant stakeholders play a pivotal role. Although knowledge generation and innovation are at the core of the transition towards a more sustainable bio-based economy, to date there is a significant lack of concepts and models that approach the bioeconomy from an innovation management perspective. The aim of this paper is therefore two-fold. First, it inspects the role of a transformative approach in the adoption of a bioeconomy that contributes to environmental, ecological, social and economic sustainability. Second, it elaborates the importance of technology and innovation management as a tool for a smooth, prompt and effective transition of firms to the bioeconomy. We conduct a qualitative literature study on the sustainability challenges that the bioeconomy entails thus far, using the Science Citation Index and grey literature, as major economies, e.g. the EU, USA, China and Brazil, have pledged to adopt the bioeconomy and have released extensive publications on the topic. We draw an example from the forest-based business sector, which is transforming towards the new green economy more rapidly than expected, although this sector has a long-established conventional business culture with a consolidated and fully fledged industry. Based on our analysis, we found that a successful transition to a sustainable bioeconomy is conditioned on heterogeneous and contested factors in terms of stakeholders, activities and modes of innovation. In addition, multifocal innovations occur when actors from interdisciplinary fields engage in intensive and continuous interaction in which the focus of innovation is allocated to a field of mutually evolving socio-technical practices that correspond to the aims of the novel paradigm of transformative innovation policy. By adopting an integrated, systems approach as well as tapping into various innovation networks and joining global innovation clusters, firms have a better chance of creating an entirely new chain of value-added products and services. This requires professionals with certain capabilities and skills, such as foresight for future markets, the ability to deal with complex issues, the ability to guide responsible R&D, the ability to make strategic decisions, and the ability to manage in-depth innovation systems analysis, including value chain analysis. Policy makers, on the other hand, need to acknowledge the essential role of firms in the transformative innovation policy paradigm.

Keywords: bioeconomy, innovation and technology management, multifocal innovation, sustainability, transformative innovation policy

Procedia PDF Downloads 125
164 Psychological Functioning of Youth Experiencing Community and Collective Violence in Post-conflict Northern Ireland

Authors: Teresa Rushe, Nicole Devlin, Tara O Neill

Abstract:

In this study, we sought to examine associations between childhood experiences of community and collective violence and psychological functioning in young people who grew up in post-conflict Northern Ireland. We hypothesized that those who grew up with such experiences would demonstrate internalizing and externalizing difficulties in early adulthood and, furthermore, that these difficulties would be mediated by adverse childhood experiences occurring within the home environment. As part of the Northern Ireland Childhood Adversity Study, we recruited 213 young people aged 18-25 years (108 males) who grew up in the post-conflict society of Northern Ireland, using purposive sampling. Participants completed a digital questionnaire to measure adverse childhood experiences as well as aspects of psychological functioning. We employed the Adverse Childhood Experiences-International Questionnaire (ACE-IQ) adaptation of the original Adverse Childhood Experiences Questionnaire (ACE), as it additionally measured witnessing community violence (e.g., seeing someone being beaten or killed, fights) and experiences of collective violence (e.g., exposure to war, terrorism, or battles involving police or gangs) during the first 18 years of life. 51% of our sample reported experiences of community and/or collective violence (N=108). Compared to young people with no such experiences (N=105), they also reported significantly more adverse experiences indicative of household dysfunction (e.g., family substance misuse, mental illness or domestic violence in the family, incarceration of a family member) but not more experiences of abuse or neglect. As expected, young people who grew up with community and/or collective violence reported significantly higher anxiety and depression scores and were more likely to engage in acts of deliberate self-harm (internalizing symptoms). They also started drinking and taking drugs at a younger age and were significantly more likely to have been in trouble with the police (externalizing symptoms). When the type of violence exposure was separated by whether the violence was witnessed (community violence) or more directly experienced (collective violence), we found community and collective violence to have similar effects on externalizing symptoms, but for internalizing symptoms, we found evidence of a differential effect. Collective violence was associated with depressive symptoms, whereas witnessing community violence was associated with anxiety-type symptoms and deliberate self-harm. However, when experiences of household dysfunction were entered into the models predicting anxiety, depression, and deliberate self-harm, none of the main effects remained significant. This suggests that internalizing-type symptoms are mediated by immediate family-level experiences. By contrast, significant community and collective violence effects on externalizing behaviours (younger initiation of alcohol use, younger initiation of drug use, and getting into trouble with the police) persisted after controlling for family-level factors and thus are directly associated with growing up with community and collective violence. Given the cross-sectional nature of our study, we cannot comment on the direction of the effect. However, post-hoc correlational analyses revealed associations between externalizing behaviours and personal factors, including greater risk-taking and younger age at puberty. The implications of the findings will be discussed in relation to interventions for young people and families living with community and collective violence.

Keywords: community and collective violence, adverse childhood experiences, youth, psychological wellbeing

Procedia PDF Downloads 83
163 An Engaged Approach to Developing Tools for Measuring Caregiver Knowledge and Caregiver Engagement in Juvenile Type 1 Diabetes

Authors: V. Howard, R. Maguire, S. Corrigan

Abstract:

Background: Type 1 Diabetes (T1D) is a chronic autoimmune disease, typically diagnosed in childhood. T1D puts an enormous strain on families; controlling blood glucose in children is difficult, and the consequences of poor control for patient health are significant. Successful illness management and better health outcomes can depend on the quality of caregiving. On diagnosis, parent-caregivers face a steep learning curve, as T1D care requires a significant level of knowledge to inform complex decision-making throughout the day. The majority of illness management is carried out in the home setting, independent of clinical health providers. Parent-caregivers vary in their level of knowledge and their level of engagement in applying this knowledge in the practice of illness management. Enabling researchers to quantify these aspects of the caregiver experience is key to identifying targets for psychosocial support interventions, which are desirable for reducing stress and anxiety in this highly burdened cohort and supporting better health outcomes in children. Currently, there are limited tools available that are designed to capture this information; where tools do exist, they are not comprehensive and do not adequately capture the lived experience. Objectives: Development of quantitative tools, informed by lived experience, to enable researchers to gather data on parent-caregiver knowledge and engagement that accurately represents the experience and cohort and enables exploration of questions that are of real-world value to the cohort themselves. Methods: This research employed an engaged approach to address the problem of quantifying two key aspects of caregiver diabetes management: knowledge and engagement. The research process was multi-staged and iterative. Stage 1: Working from a constructivist standpoint, the literature was reviewed to identify relevant questionnaires, scales and single-item measures of T1D caregiver knowledge and engagement, and to harvest candidate questionnaire items. Stage 2: Aggregated findings from the review were circulated among a PPI (patient and public involvement) expert panel of caregivers (n=6) for discussion and feedback. Stage 3: In collaboration with the expert panel, data were interpreted through the lens of lived experience to create a long-list of candidate items for the novel questionnaires. Items were categorized as either ‘knowledge’ or ‘engagement’. Stage 4: A Delphi-method process (iterative surveys) was used to prioritize question items and generate novel questions that further captured the lived experience. Stage 5: Both questionnaires were piloted to refine the wording of the text, to increase accessibility and limit socially desirable responding. Stage 6: The tools were piloted using an online survey that was deployed through an online peer-support group for caregivers of juveniles with T1D. Ongoing Research: 123 parent-caregivers completed the survey. Data analysis is ongoing to establish face and content validity, qualitatively and through exploratory factor analysis. Reliability will be established using an alternative-form method, and Cronbach’s alpha will assess internal consistency. Work will be completed by early 2024. Conclusion: These tools will enable researchers to gain deeper insights into caregiving practices among parents of juveniles with T1D. Development was driven by lived experience, illustrating the value of engaged research at all levels of the research process.
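The internal-consistency check mentioned above can be sketched as Cronbach's alpha computed over an item-response matrix (rows = caregivers, columns = questionnaire items). The data file and column prefix below are hypothetical.

```python
# Cronbach's alpha sketch for a hypothetical item-response matrix.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

responses = pd.read_csv("caregiver_knowledge.csv")   # hypothetical data file
knowledge_items = responses.filter(like="know_")      # hypothetical item prefix
print(f"Cronbach's alpha: {cronbach_alpha(knowledge_items):.2f}")
```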

Keywords: caregiving, engaged research, juvenile type 1 diabetes, quantified engagement and knowledge

Procedia PDF Downloads 55
162 On-Farm Biopurification Systems: Fungal Bioaugmentation of Biomixtures for Carbofuran Removal

Authors: Carlos E. Rodríguez-Rodríguez, Karla Ruiz-Hidalgo, Kattia Madrigal-Zúñiga, Juan Salvador Chin-Pampillo, Mario Masís-Mora, Elizabeth Carazo-Rojas

Abstract:

One of the main causes of contamination linked to agricultural activities is the spillage and disposal of pesticides, especially during the loading, mixing or cleaning of agricultural spraying equipment. One improvement in the handling of pesticides is the use of biopurification systems (BPS), simple and cheap degradation devices in which the pesticides are biologically degraded at accelerated rates. The biologically active core of a BPS is the biomixture, which is constituted by soil pre-exposed to the target pesticide, a lignocellulosic substrate to promote the activity of ligninolytic fungi, and a humic component (peat or compost), mixed at a volumetric proportion of 50:25:25. Considering the known ability of ligninolytic fungi to degrade a wide range of organic pollutants, and the high amount of lignocellulosic waste used in biomixture preparation, the bioaugmentation of biomixtures with these fungi represents an interesting approach for improving biomixtures. The present work aimed at evaluating the effect of the bioaugmentation of rice husk-based biomixtures with the fungus Trametes versicolor on the removal of the insecticide/nematicide carbofuran (CFN), and at optimizing the composition of the biomixture to obtain the best performance in terms of CFN removal and mineralization, reduction in the formation of transformation products, and decrease in the residual toxicity of the matrix. The evaluation of several lignocellulosic residues (rice husk, wood chips, coconut fiber, sugarcane bagasse or newspaper print) revealed the best colonization by T. versicolor in rice husk. Pre-colonized rice husk was then used in the bioaugmentation of biomixtures also containing soil pre-exposed to CFN and either peat (GTS biomixture) or compost (GCS biomixture). After spiking with 10 mg/kg CFN, the efficiency of the biomixture was evaluated through a multi-component approach that included: monitoring of CFN removal and production of CFN transformation products, mineralization of radioisotopically labeled carbofuran (¹⁴C-CFN), and changes in the toxicity of the matrix after the treatment (Daphnia magna acute immobilization test). Estimated half-lives of CFN in the biomixtures were 3.4 d and 8.1 d in GTS and GCS, respectively. The transformation products 3-hydroxycarbofuran and 3-ketocarbofuran were detected at the moment of CFN application; however, their concentrations continuously decreased thereafter. Mineralization of ¹⁴C-CFN was also faster in GTS than in GCS. The toxicological evaluation showed a complete removal of toxicity in the biomixtures after 48 d of treatment. The composition of the GCS biomixture was optimized using a central composite design and response surface methodology. The design variables were the volumetric content of fungally pre-colonized rice husk and the volumetric ratio compost/soil. According to the response models, maximization of CFN removal and mineralization rate, and minimization of the accumulation of transformation products, were obtained with an optimized biomixture of composition 30:43:27 (pre-colonized rice husk:compost:soil), which differs from the 50:25:25 composition commonly employed in BPS. Results suggest that fungal bioaugmentation may enhance the performance of biomixtures in CFN removal. The optimization reveals the importance of assessing new biomixture formulations in order to maximize their performance.
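Half-lives such as those reported above are typically obtained by assuming first-order decay of the pesticide residues: ln(C) is regressed on time and t1/2 = ln(2)/k. The residue values below are invented placeholders, not the study's measurements.

```python
# First-order half-life sketch from hypothetical carbofuran residue data.
import numpy as np

days = np.array([0, 2, 4, 8, 16, 32])
residue_mg_kg = np.array([10.0, 6.6, 4.4, 2.0, 0.4, 0.02])  # hypothetical values

slope, intercept = np.polyfit(days, np.log(residue_mg_kg), 1)
k = -slope
print(f"First-order rate constant k = {k:.3f} 1/d, half-life = {np.log(2)/k:.1f} d")
```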

Keywords: bioaugmentation, biopurification systems, degradation, fungi, pesticides, toxicity

Procedia PDF Downloads 311
161 Structural Characterization and Hot Deformation Behaviour of Al3Ni2/Al3Ni in-situ Core-shell intermetallic in Al-4Cu-Ni Composite

Authors: Ganesh V., Asit Kumar Khanra

Abstract:

An in-situ powder metallurgy technique was employed to create Ni-Al3Ni/Al3Ni2 core-shell-shaped aluminum-based intermetallic reinforced composites. The impact of Ni addition on the phase composition, microstructure, and mechanical characteristics of Al-4Cu-xNi (x = 0, 2, 4, 6, 8, 10 wt.%) composites at various sintering temperatures was investigated. Microstructure evolution was extensively examined using X-ray diffraction (XRD), scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), and transmission electron microscopy (TEM). Under the initial sintering conditions, the formation of "Single Core-Shell" structures was observed, consisting of a Ni core with Al3Ni2 intermetallic, whereas samples sintered at 620°C exhibited both "Single Core-Shell" and "Double Core-Shell" structures containing Al3Ni2 and Al3Ni intermetallics formed between the Al matrix and the Ni reinforcements. The composite achieved a high compressive yield strength of 198.13 MPa and an ultimate strength of 410.68 MPa, with 24% total elongation, for the sample containing 10 wt.% Ni. Additionally, there was a substantial increase in hardness, reaching 124.21 HV, which is 2.4 times higher than that of the base aluminum. Nanoindentation studies showed hardness values of 1.54, 4.65, 21.01, 13.16, 5.52, 6.27, and 8.39 GPa corresponding to the α-Al matrix, Ni, Al3Ni2, the Ni/Al3Ni2 interface, Al3Ni, and their respective interfaces. Even at 200°C, the composite retained 54% of its room-temperature strength (90.51 MPa). To investigate the deformation behavior of the composite material, experiments were conducted at deformation temperatures ranging from 300°C to 500°C, with strain rates varying from 0.0001 s-1 to 0.1 s-1. A sine-hyperbolic constitutive equation was developed to characterize the flow stress of the composite, which exhibited a significantly higher hot deformation activation energy of 231.44 kJ/mol than that for self-diffusion in pure aluminum. The formation of Al2Cu intermetallics at grain boundaries and Al3Ni2/Al3Ni within the matrix hindered dislocation movement, leading to an increase in activation energy, which might have an adverse effect on high-temperature applications. Two models, a strain-compensated Arrhenius model and an artificial neural network (ANN) model, were developed to predict the composite's flow behavior. The ANN model outperformed the strain-compensated Arrhenius model, with a lower average absolute relative error of 2.266%, a smaller root mean square error of 1.2488 MPa, and a higher correlation coefficient of 0.9997. Processing maps revealed that the optimal hot working conditions for the composite were in the temperature range of 420-500°C and at strain rates between 0.0001 s-1 and 0.001 s-1. The changes in the composite microstructure were successfully correlated with the theory of processing maps, considering temperature and strain rate conditions. The uneven distribution of the shape and size of the core-shell/Al3Ni intermetallic compounds influenced the flow stress curves, leading to dynamic recrystallization (DRX), followed by partial dynamic recovery (DRV), and ultimately strain hardening. This composite material shows promise for applications in the automobile and aerospace industries.
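As a rough illustration of the constitutive modelling described above, the sketch below evaluates a generic sine-hyperbolic Arrhenius (Zener-Hollomon) flow-stress relation and the two error metrics (AARE and RMSE) commonly used to compare such a model against an ANN predictor. The material constants and the operating point are illustrative assumptions, not the fitted values reported by the authors.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T, A, alpha, n, Q):
    """Sine-hyperbolic Arrhenius model: strain_rate = A*[sinh(alpha*sigma)]^n * exp(-Q/(R*T)).
    Solved for sigma via the Zener-Hollomon parameter Z = strain_rate * exp(Q/(R*T))."""
    Z = strain_rate * np.exp(Q / (R * T))
    return (1.0 / alpha) * np.arcsinh((Z / A) ** (1.0 / n))

def aare(measured, predicted):
    """Average absolute relative error, in percent."""
    return np.mean(np.abs((measured - predicted) / measured)) * 100.0

def rmse(measured, predicted):
    """Root mean square error."""
    return np.sqrt(np.mean((measured - predicted) ** 2))

# Illustrative constants (alpha in 1/MPa, Q in J/mol); not the study's fitted values
sigma = flow_stress(strain_rate=0.001, T=723.0, A=1e16, alpha=0.012, n=5.0, Q=231_440.0)
print(f"predicted flow stress ~ {sigma:.1f} MPa")
```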

Keywords: core-shell structure, hot deformation, intermetallic compounds, powder metallurgy

Procedia PDF Downloads 20
160 The Study of Adsorption of RuP onto TiO₂ (110) Surface Using Photoemission Deposited by Electrospray

Authors: Tahani Mashikhi

Abstract:

Countries worldwide rely on electric power as a critical factor in economic growth and progress. Renewable energy sources, often referred to as alternative energy sources, such as wind, solar energy, geothermal energy, biomass, and hydropower, have garnered significant interest in response to the rising consumption of fossil fuels. Dye-sensitized solar cells (DSSCs) are a highly promising alternative for energy production, as they possess numerous advantages compared to traditional silicon solar cells and thin-film solar cells. These include their low cost, high flexibility, straightforward preparation methodology, ease of production, low toxicity, range of colors, semi-transparency, and high power conversion efficiency. A solar cell, also known as a photovoltaic cell, is a device that converts the energy of sunlight into electrical energy through the photovoltaic effect. The Grätzel cell was the first dye-sensitized solar cell, made from colloidal titanium dioxide. The operation of DSSCs relies on several key elements: a layer of wide-band-gap semiconducting oxide material (e.g., titanium dioxide [TiO₂]); a photosensitizer, or dye, that absorbs sunlight and injects electrons into the conduction band; an electrolyte based on the iodide/triiodide redox couple (I−/I₃−) that regenerates the dye molecules; and a counter electrode made of carbon or platinum that facilitates the movement of electrons across the circuit. Electrospray deposition permits the deposition of fragile, non-volatile molecules in a vacuum environment, including dye sensitizers, complex molecules, nanoparticles, and biomolecules. Surface science techniques, particularly X-ray photoelectron spectroscopy, are employed to examine dye-sensitized solar cells. This study investigates the possible application of electrospray deposition to build high-quality layers in situ in a vacuum. Two distinct categories of dyes can be employed as sensitizers in DSSCs: organometallic semiconductor sensitizers and purely organic dyes. Most organometallic dyes, including Ru533, RuC, and RuP, contain a ruthenium atom, which is a rare element. This ruthenium atom enhances the efficiency of dye-sensitized solar cells (DSSCs). These dyes are characterized by their high cost and typically appear as dark purple powders. On the other hand, organic dyes, such as SQ2, RK1, D5, SC4, and R6, exhibit reduced efficacy due to the lack of a ruthenium atom. These dyes appear as green, red, orange, and blue powders. This study specifically concentrates on metal-organic dyes. Dye molecules were adsorbed onto the rutile TiO₂ (110) surface by depositing them in situ under ultra-high vacuum conditions, combining an electrospray deposition method with X-ray photoelectron spectroscopy. The X-ray photoelectron spectroscopy (XPS) technique examines the chemical bonds and interactions between the molecules and the TiO₂ surface. The dyes were deposited for varying times, from 5 minutes to 40 minutes, to achieve distinct coverages categorized as sub-monolayer, monolayer, few layers, or multilayer. Based on the O 1s photoelectron spectra, the monolayer forms a strong chemical bond with the Ti atoms of the oxide substrate by deprotonation of the carboxylic acid groups through 2M-bidentate bridging anchors. The C 1s and N 1s photoelectron spectra indicate that the molecule remains intact at the surface. This can be attributed to the persistence of all functional groups and of the ruthenium atom, whose Ru 3d binding energy is consistent with Ru2+.

Keywords: deposit, dye, electrospray, TiO₂, XPS

Procedia PDF Downloads 45
159 Pathomorphological Markers of the Explosive Wave Action on Human Brain

Authors: Sergey Kozlov, Juliya Kozlova

Abstract:

Introduction: The increased attention of researchers worldwide to explosive trauma is associated with the constant renewal of military weapons and a significant increase in terrorist activity using explosive devices. The explosive (blast) wave is a well-known damaging factor of an explosion. The parts of the human body most sensitive to the blast wave are the brain, lungs, intestines, and urinary bladder. The severity of damage to these organs depends on the distance from the epicenter of the explosion, the power of the explosion, the presence of barriers, the position of the body, and the presence of protective clothing. One of the sites at which a shock wave acts in human tissues and organs is the vascular endothelial barrier, which suffers the greatest damage in the brain and lungs. The objective of the study was to determine the pathomorphological changes of the brain following the action of a blast wave. Materials and methods: Six male corpses delivered to the morgue of the Municipal Institution "Dnipropetrovsk Regional Forensic Bureau" during 2014-2016 were studied. The cause of death in all cases was military explosive injury. After visual external assessment of the brain, samples of 1 x 1 x 1 cm were taken for histological study from different parts of the brain, i.e., the frontal, parietal, temporal, and occipital regions, as well as from the cerebellum, pons, medulla oblongata, thalamus, walls of the lateral ventricles, and the floor of the fourth ventricle. The brain samples were immersed in 10% formalin solution for 24 hours. After fixation, paraffin blocks were made from the material using the standard method. Sections of 4-6 micron thickness were then cut from the paraffin blocks with a microtome and stained with hematoxylin and eosin. Microscopic analysis was performed using a light microscope with x4, x10, and x40 lenses. Results: Based on the results of our study, brain injuries were divided into macroscopic and microscopic. Macroscopic injuries were recorded by visual assessment of haemorrhages under the membranes and within the substance, their nature and localisation, and areas of softening. In the microscopic study, attention was paid both to vascular changes and to changes in neurons and glial cells. Microscopic qualitative analysis of histological sections from different parts of the brain revealed a number of structural changes at both the cellular and tissue levels. Typical changes in most of the studied areas of the brain included damage to the vascular system. The most characteristic microscopic sign was the separation of vascular walls from the neuroglia with the formation of perivascular spaces. Along with this sign, fragmentation of the walls of these vessels, haemolysis of erythrocytes, and the formation of haemorrhages in the newly formed perivascular spaces were found. In addition to damage to the cerebrovascular system, destruction of neurons and oedema of the brain tissue were observed in the histological sections. In some sections, the brain tissue had a heterogeneous step-like or wave-like appearance. Conclusions: The pathomorphological microscopic changes in the brain identified in this study of those who died of explosive trauma can be used for diagnostic purposes, in conjunction with other characteristic signs of explosive trauma, in forensic and pathological studies.
The complex of microscopic signs in the brain, i.e., separation of blood vessel walls from the neuroglia with perivascular space formation, fragmentation of the walls of these blood vessels, erythrocyte haemolysis, and formation of haemorrhages in the newly formed perivascular spaces, is a direct indication of blast wave action.

Keywords: blast wave, neurotrauma, human, brain

Procedia PDF Downloads 192
158 Preparation and Characterization of Anti-Acne Dermal Products Based on Erythromycin β-Cyclodextrin Lactide Complex

Authors: Lacramioara Ochiuz, Manuela Hortolomei, Aurelia Vasile, Iulian Stoleriu, Marcel Popa, Cristian Peptu

Abstract:

Local antibiotic therapy is one of the most effective acne therapies. Erythromycin (ER) is a macrolide antibiotic that has been administered topically for over 30 years in the form of gels, ointments or hydroalcoholic solutions for acne therapy. The use of ER as the basis for topical dosage forms raises some technological challenges due to the physicochemical properties of this substance. The main disadvantage of ER is its poor water solubility (2 mg/mL), which limits both formulation with hydrophilic bases and skin permeability. Cyclodextrins (CDs) are biocompatible cyclic oligomers of glucose, with a hydrophobic core and a hydrophilic exterior. CDs are used to improve the bioavailability of drugs by including poorly water-soluble substances (such as ER) in their hydrophobic cavity, thereby increasing solubility and/or dissolution rate. Adding CDs leads to increased solubility and improved stability of the drug substance, increased permeability of substances of low water solubility, decreased toxicity, and even reduction of the active dose as a result of increased bioavailability. CDs also increase skin tolerability by reducing the irritant effect of certain substances. We complexed ER with lactide-modified β-cyclodextrin in order to improve the therapeutic effect of topically administered ER. The aims of the present study were to synthesise and characterise a new prolonged-release complex of ER with lactide-modified β-cyclodextrin (CD-LA_E), to investigate the CD-LA_E complex by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR), and to analyse the effect of the semisolid base on the in vitro and ex vivo release characteristics of ER in the CD-LA_E complex by assessing the permeability coefficient and the release kinetics through fitting to mathematical models. SEM showed that, on complexation, ER changes its crystal structure and enters the amorphous phase. FTIR analysis showed that certain bands specific to groups in the ER structure shift during the encapsulation process. The CD-LA_E complex has a molar ratio of 2.12 to 1 between lactide-modified β-cyclodextrin and ER. The three semisolid bases (2% Carbopol, 13% Lutrol 127, and an organogel based on Lutrol and isopropyl myristate) showed good capacity for incorporating the CD-LA_E complex, with an active ingredient content ranging from 98.3% to 101.5% of the declared value of 2% ER. The results of the in vitro dissolution test showed that ER solubility was significantly increased by CD encapsulation. The amount of ER released from the CD-LA_E gels was in the range of 76.23% to 89.01%, whereas gels based on plain ER released at most 26.01% of the ER. The ex vivo dissolution test confirmed the increased ER solubility achieved by complexation and supports the assumption that this process might increase ER permeability. The highest permeability coefficients were obtained for ER released from the gel based on 2% Carbopol: 33.33 μg/cm2/h in vitro and 26.82 μg/cm2/h ex vivo. The release kinetics of complexed ER follows Fickian diffusion, according to the results obtained by fitting the data to the Korsmeyer-Peppas model.
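A minimal sketch of the release-kinetics fitting mentioned above: the Korsmeyer-Peppas power law M_t/M_inf = k·t^n is fitted to cumulative-release data, with n ≤ 0.5 (for thin films) read as Fickian diffusion. The time points and release fractions are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative cumulative release data (not the study's measurements)
t = np.array([0.5, 1, 2, 4, 6, 8], dtype=float)             # hours
released = np.array([0.18, 0.26, 0.37, 0.52, 0.64, 0.74])   # fraction Mt/Minf

def korsmeyer_peppas(t, k, n):
    """Mt/Minf = k * t^n; n <= 0.5 suggests Fickian diffusion for thin films."""
    return k * t ** n

(k_hat, n_hat), _ = curve_fit(korsmeyer_peppas, t, released, p0=(0.2, 0.5))
regime = "Fickian" if n_hat <= 0.5 else "anomalous"
print(f"k = {k_hat:.3f}, n = {n_hat:.2f} -> {regime} transport")
```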

Keywords: erythromycin, acne, lactide, cyclodextrin

Procedia PDF Downloads 266
157 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

The research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural, and recreational aviation. Technically, most of them remain at a pre-war level of knowledge in terms of theory of operation, design, and manufacturing technology, especially when compared with the high level of development of automotive engines. Typically, these engines are fueled by carburetors of quite primitive construction. At present, due to environmental requirements and climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control, and biofuels. The paper describes simulation research on innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. Consequently, the required level of safety and better functionality compared with today’s plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion, and the engine’s capability for efficient combustion of ecological fuels. New, redundant elements of the control system can also improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities measured by the measurement systems) for determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (every 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (with different discharge energies) were studied. The analysis aimed at the optimization of the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies examined the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption; thus, including spark management in the optimization is necessary to significantly decrease fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 386
156 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more heavily on simulation for pre-clinical students, yet students’ confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students’ learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students’ input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium-fidelity simulation as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach’s alpha in Excel. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman’s rank correlation was used to analyze the correlation between students’ satisfaction and self-confidence in learning. The significance level was set at a p value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback highlighted the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
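A small sketch of the two statistics named above: Cronbach's alpha for internal consistency and Spearman's rank correlation between satisfaction and self-confidence scores. The Likert responses and subscale totals are invented for illustration, and pandas/SciPy stand in for the Excel and SPSS workflow actually used.

```python
import pandas as pd
from scipy.stats import spearmanr

# Illustrative 5-point Likert responses (rows = students, columns = survey items)
items = pd.DataFrame({
    "item1": [4, 5, 3, 4, 5, 4],
    "item2": [4, 4, 3, 5, 5, 4],
    "item3": [5, 5, 2, 4, 4, 3],
})

def cronbach_alpha(df):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

satisfaction = items.sum(axis=1)                      # stand-in for the satisfaction subscale total
confidence = pd.Series([12, 14, 8, 13, 14, 11])       # stand-in for the self-confidence subscale total
rho, p = spearmanr(satisfaction, confidence)
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}, Spearman rho = {rho:.2f} (p = {p:.3f})")
```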

Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence

Procedia PDF Downloads 78
155 Drivers of Global Great Power Assertiveness: Russia and Its Involvement in the Global South

Authors: Elina Vroblevska, Toms Ratfelders

Abstract:

This paper examines the impact of international status-seeking aspirations on great power behavior within the international system. In particular, we test the assumption, advanced by proponents of Social Identity Theory (SIT), that the inability to achieve social mobility through joining perceived higher-status social groups (of states) leads great powers to adopt a strategy of social competition, in which they aim to equal or outdo the dominant group in the area on which its claim to superior status rests. Since the dissolution of the Soviet Union, Russia has struggled to be accepted as a great power by the group of Western states that created the dominant international order while the Soviet states were isolated. Whereas the 1990s and the beginning of the 21st century were characterized by striving to integrate into the existing order, the second decade has seen a rather sharp turn towards creating a new power center around Russia through the realization of ideas of multipolarity, rivalry, and the uniqueness of the state itself. Increasingly, we have seen the Kremlin striving to collaborate with and mobilize groups of states that fall outside the categories of democracy, multiculturalism, and international order as these are understood by the dominant group, which can be described as the West. Instead, Russia builds its own narrative, in which it creates an alternative understanding of these values, differentiating itself from the higher-status social group. The Global South, from the Russian perspective, is the group of states that can still be swayed to create an alternative power center in the international system - one where Russia can assert its status as a great power. This is based on a number of reasons, the most important being that the Global North is already highly institutionalized in terms of economy (the EU) and defense (NATO), leaving Russia no option but to integrate within the existing framework. Second, there is a difference in values and their interpretation: Russia has been adamant for the last twenty years on basing its moral code on traditional values such as religion, the heterosexual family model, and moral superiority, which contradict the overall secularism of the Global North. And last, there is a striking difference in the understanding of state governance models: with Russia becoming more autocratic over the course of the last 20 years, it has deliberately created distance between itself and democratic states, entering a “gray area” of an alternative understanding of democracy that is more relatable to Global South countries. Using computational text analysis of excerpts of Vladimir Putin’s speeches delivered from 2000 to 2022 regarding areas that fall outside Russia’s immediate area of interest (the Global South), we identify 80 topics that relate to a particular component of great power status - the interest in using force globally. These topics are compared across four temporal frames that capture periods of more and less permissive Western social boundaries. We find a negative association between such permissiveness and Putin’s emphasis on the “use of force” topics. This lends further support to Social Identity Theory and contributes to broadening its applicability to explaining questions related to great power assertiveness in areas outside their primary focus regions.
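The computational text analysis described above can be approximated with a standard topic-modelling pipeline. The sketch below fits a small latent Dirichlet allocation model to a handful of invented speech-like excerpts and prints the top terms per topic; the study itself extracted 80 topics from the 2000-2022 speech excerpts, so the documents, topic count, and vocabulary here are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented speech-like excerpts standing in for the 2000-2022 corpus
docs = [
    "cooperation with partners in africa and latin america on trade and energy",
    "military operation to protect our interests and security in the region",
    "joint infrastructure projects and economic development with friendly states",
    "armed forces readiness and the use of force to defend sovereignty",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # the study used 80 topics
doc_topics = lda.fit_transform(X)        # per-document topic proportions, comparable across time frames

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")
```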

Keywords: Russia, Global South, great power, identity

Procedia PDF Downloads 54
154 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new generation of sodium-cooled fast reactors (SFRs), require a risk-informed approach to model severe accidents (SAs) and their consequences in the case of an outside release. In SFRs, aerosols are produced during a core disruptive accident when primary system sodium is ejected into the containment and burns in contact with air, producing sodium aerosols. One of the key aspects of safety evaluation is the in-containment behavior of sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as the fire can both thermally damage the containment vessel and cause an overpressurization risk. Besides, during the fire, airborne fission products first dissolved in the primary sodium can be aerosolized or, as can be the case for some fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, toxic and volatile, being the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), these aerosols would then become radioactive. If such aerosols leak into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms for fission products, the radiological consequences of an SA leading to a loss of containment leak tightness will also be affected. This work is split into two phases. Firstly, a theoretical method is proposed to understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission products I2 and HI. Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity towards iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, providing adsorption energies and a description of the optimized configuration of the adsorbate on the stable surface. Secondly, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing the gas through a reaction chamber containing a Na2CO3 aerosol deposit. The concentration of iodine was then measured at the exit of the reaction chamber. Preliminary observations indicate that there is an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is the first step in addressing the gaps in knowledge of the in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in integrated computer simulation tools to analyze and evaluate SFR safety designs.
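For orientation, the adsorption energies mentioned above are typically obtained from DFT total energies as E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate). The snippet below shows only that arithmetic; the energy values are illustrative placeholders, not results from the VASP calculations reported here.

```python
# Adsorption energy from DFT total energies (eV); values are illustrative placeholders,
# not results from the study: E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate)
E_slab_plus_I2 = -512.74   # Na2CO3 slab with I2 adsorbed
E_slab = -508.13           # clean Na2CO3 slab
E_I2 = -3.42               # isolated I2 molecule in a large box

E_ads = E_slab_plus_I2 - E_slab - E_I2
label = "exothermic adsorption" if E_ads < 0 else "endothermic"
print(f"E_ads = {E_ads:.2f} eV ({label})")
```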

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 215
153 [Keynote Talk]: Bioactive Cyclic Dipeptides of Microbial Origin in Discovery of Cytokine Inhibitors

Authors: Sajeli A. Begum, Ameer Basha, Kirti Hira, Rukaiyya Khan

Abstract:

Cyclic dipeptides are simple diketopiperazine derivatives investigated by several scientists for their biological effects, which include anticancer, antimicrobial, haematological, anticonvulsant, and immunomodulatory activities. They are potentially active microbial metabolites that have also been synthesized for development into drug candidates. Cultures of Pseudomonas species have earlier been reported to produce cyclic dipeptides, which act as quorum sensing signals in bacterial-host colonization during infections and cause cell anti-proliferation and immunosuppression. Fluorescent Pseudomonas species have been identified to secrete lipid derivatives, peptides, pyrroles, phenazines, indoles, amino acids, pterines, pseudomonic acids, and some antibiotics. In the present work, the results of an investigation of the cyclic dipeptide metabolites secreted into the culture broth of a Pseudomonas species as potent pro-inflammatory cytokine inhibitors are discussed. The bacterial strain was isolated from the rhizospheric soil of a groundnut crop and identified as Pseudomonas aeruginosa by its 16S rDNA sequence (GenBank Accession No. KT625586). The culture broth of this strain was prepared by inoculation into King’s B broth and incubation at 30 ºC for 7 days. The ethyl acetate extract of the culture broth was prepared and lyophilized to give a dry residue (EEPA). A lipopolysaccharide (LPS)-induced ELISA assay demonstrated the inhibition by EEPA of tumor necrosis factor-alpha (TNF-α) secretion in the culture supernatant of RAW 264.7 cells (IC50 38.8 μg/mL). The effect of oral administration of EEPA on plasma TNF-α levels in rats was tested with an ELISA kit. The LPS-mediated plasma TNF-α level was reduced to 45% with a 125 mg/kg dose of EEPA. Isolation of the chemical constituents of EEPA through column chromatography yielded ten cyclic dipeptides, which were characterized using nuclear magnetic resonance and mass spectroscopic techniques. These cyclic dipeptides are biosynthesized in microorganisms by the multifunctional assembly of non-ribosomal peptide synthases and cyclic dipeptide synthases. Cyclo (Gly-L-Pro) was found to inhibit TNF-α production most potently (IC50 4.5 μg/mL), followed by cyclo (trans-4-hydroxy-L-Pro-L-Phe) (IC50 14.2 μg/mL), and the effect was equal to that of the standard immunosuppressant drug prednisolone. Further, the effect was analyzed by determining the mRNA expression of TNF-α in LPS-stimulated RAW 264.7 macrophages using quantitative real-time reverse transcription polymerase chain reaction. EEPA and the isolated cyclic dipeptides reduced TNF-α mRNA expression levels in a dose-dependent manner under the tested conditions. They were also found to control the expression of other pro-inflammatory cytokines, such as IL-1β and IL-6, when tested through their mRNA expression levels in LPS-stimulated RAW 264.7 macrophages. In addition, a significant inhibitory effect on nitric oxide production was found. Further, all the compounds exhibited only weak toxicity towards LPS-induced RAW 264.7 cells. Thus, the outcome of the study shows the effectiveness of EEPA and the isolated cyclic dipeptides in down-regulating key cytokines involved in the pathophysiology of autoimmune diseases. In another study led by the investigators, microbial cyclic dipeptides were found to exhibit an excellent antimicrobial effect against Fusarium moniliforme, an important causative agent of sorghum grain mold disease.
Thus, cyclic dipeptides are emerging small-molecule drug candidates for various autoimmune diseases.
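The IC50 values quoted above are usually estimated by fitting a four-parameter logistic dose-response curve to per-concentration inhibition data. The sketch below shows such a fit on invented data; the concentrations and responses are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative TNF-alpha secretion data (% of LPS-only control) vs. test concentration (ug/mL)
conc = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)
response = np.array([95, 88, 72, 48, 30, 18, 8], dtype=float)

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

(bot, top, ic50, hill), _ = curve_fit(four_pl, conc, response, p0=(0, 100, 5, 1))
print(f"estimated IC50 = {ic50:.1f} ug/mL")
```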

Keywords: cyclic dipeptides, cytokines, Fusarium moniliforme, Pseudomonas, TNF-alpha

Procedia PDF Downloads 211
152 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution: a lightweight separable-convolution-kernel-based GAN architecture (LC-GAN) that is very useful for designing filters for mobile phone cameras and edge devices, converting any image into the 2D animated comic style of movies like He-Man, Superman, and The Jungle Book. This will help 2D animation artists create new characters from images of real people without endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time for making 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots using mobile phone cameras or edge-device cameras such as the Raspberry Pi 4 and NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices such as mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a larger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and with just 25 extra epochs trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a pointwise convolution with a 1x1 kernel to bring the network back to the required number of channels. This reduces the number of parameters substantially and makes the model extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network exploit the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
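A minimal PyTorch sketch of the depthwise separable convolution block described above, a per-channel depthwise convolution followed by a 1x1 pointwise convolution with batch normalization, together with a quick parameter-count comparison against a standard convolution. The layer sizes are illustrative assumptions, not the LC-GAN's actual configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_ch)   # batch norm with learnable scale/shift parameters
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 3, 256, 256)              # e.g. an RGB camera frame
block = DepthwiseSeparableConv(3, 64)
print(block(x).shape)                        # torch.Size([1, 64, 256, 256])

# Parameter count: separable block uses far fewer weights than a standard 3x3 conv
standard = nn.Conv2d(3, 64, 3, padding=1)
n_std = sum(p.numel() for p in standard.parameters())
n_sep = sum(p.numel() for p in block.depthwise.parameters()) + \
        sum(p.numel() for p in block.pointwise.parameters())
print(n_std, n_sep)
```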

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 132
151 Applying Concept Mapping to Explore Temperature Abuse Factors in the Processes of Cold Chain Logistics Centers

Authors: Marco F. Benaglia, Mei H. Chen, Kune M. Tsai, Chia H. Hung

Abstract:

As societal and family structures, consumer dietary habits, and awareness about food safety and quality continue to evolve in most developed countries, the demand for refrigerated and frozen foods has been growing, and the issues related to their preservation have gained increasing attention. A well-established cold chain logistics system is essential to avoid any temperature abuse; therefore, assessing potential disruptions in the operational processes of cold chain logistics centers becomes pivotal. This study preliminarily employs HACCP to find disruption factors in cold chain logistics centers that may cause temperature abuse. Then, concept mapping is applied: selected experts engage in brainstorming sessions to identify any further factors. The panel consists of ten experts, including four from logistics and home delivery, two from retail distribution, one from the food industry, two from low-temperature logistics centers, and one from the freight industry. Disruptions include equipment-related aspects, human factors, management aspects, and process-related considerations. The areas of observation encompass freezer rooms, refrigerated storage areas, loading docks, sorting areas, and vehicle parking zones. The experts also categorize the disruption factors based on perceived similarities and build a similarity matrix. Each factor is evaluated for its impact, frequency, and investment importance. Next, multidimensional scaling, cluster analysis, and other methods are used to analyze these factors. Simultaneously, key disruption factors are identified based on their impact and frequency, and, subsequently, the factors that companies prioritize and are willing to invest in are determined by assessing investors’ risk aversion behavior. Finally, Cumulative Prospect Theory (CPT) is applied to verify the risk patterns. In total, 66 disruption factors are found and categorized into six clusters: (1) "Inappropriate Use and Maintenance of Hardware and Software Facilities", (2) "Inadequate Management and Operational Negligence", (3) "Product Characteristics Affecting Quality and Inappropriate Packaging", (4) "Poor Control of Operation Timing and Missing Distribution Processing", (5) "Inadequate Planning for Peak Periods and Poor Process Planning", and (6) "Insufficient Cold Chain Awareness and Inadequate Training of Personnel". This study also identifies five critical factors in the operational processes of cold chain logistics centers: "Lack of Personnel’s Awareness Regarding Cold Chain Quality", "Personnel Not Following Standard Operating Procedures", "Personnel’s Operational Negligence", "Management’s Inadequacy", and "Lack of Personnel’s Knowledge About Cold Chain". The findings show that cold chain operators prioritize prevention and improvement efforts in the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster, particularly focusing on the factors of "Temperature Setting Errors" and "Management’s Inadequacy". However, through the application of CPT, this study reveals that companies are not usually willing to invest in the improvement of factors related to the "Inappropriate Use and Maintenance of Hardware and Software Facilities" cluster due to its low likelihood of occurrence, although they acknowledge the severity of the consequences if it does occur.
Hence, the main implication is that the key disruption factors in cold chain logistics centers’ processes are associated with personnel issues; therefore, comprehensive training, periodic audits, and the establishment of reasonable incentives and penalties for both new employees and managers may significantly reduce disruption issues.
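The concept-mapping analysis step described above can be sketched as follows: the experts' similarity matrix is converted to distances, embedded with multidimensional scaling, and grouped by hierarchical clustering. The 5x5 similarity matrix below is a toy example standing in for the experts' sorting of the 66 factors, so the coordinates and cluster labels are purely illustrative.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative expert similarity matrix for 5 disruption factors (1 = always sorted together)
S = np.array([
    [1.0, 0.8, 0.2, 0.1, 0.3],
    [0.8, 1.0, 0.3, 0.2, 0.2],
    [0.2, 0.3, 1.0, 0.7, 0.6],
    [0.1, 0.2, 0.7, 1.0, 0.5],
    [0.3, 0.2, 0.6, 0.5, 1.0],
])
D = 1.0 - S  # convert similarities to dissimilarities

# Multidimensional scaling to place factors on a 2D concept map
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)

# Hierarchical clustering of the same distances into a fixed number of clusters
clusters = fcluster(linkage(squareform(D, checks=False), method="ward"), t=2, criterion="maxclust")
print(coords.round(2))
print(clusters)  # e.g. factors grouped into 2 clusters
```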

Keywords: concept mapping, cold chain, HACCP, cumulative prospect theory

Procedia PDF Downloads 69
150 Geographic Mapping of Tourism in Rural Areas: A Case Study of Cumbria, United Kingdom

Authors: Emma Pope, Demos Parapanos

Abstract:

Rural tourism has become more visible and prevalent, with tourists increasingly seeking authentic experiences. This movement accelerated post-Covid, putting destinations in danger of reaching levels of saturation called ‘overtourism’. Whereas the phenomenon of overtourism has been frequently discussed in the urban context by academics and practitioners over recent years, it has hardly been referred to in the context of rural tourism, where it is perhaps even more difficult to manage. Rural tourism was historically considered small-scale, marked by its traditional character and by having little impact on nature and rural society. The increasing number of rural areas experiencing overtourism, however, demonstrates the need for new approaches, especially as the impacts and enablers of overtourism are context-specific. Cumbria, with approximately 47 million visitors each year and 23,000 operational enterprises, is one of the rural areas experiencing overtourism in the UK. Using the county of Cumbria as an example, this paper aims to explore better planning and management in rural destinations by clustering the area into rural and ‘urban-rural’ tourism zones. To achieve this aim, the study uses secondary data from a variety of sources to identify variables relating to visitor economy development and demand. These data include census data relating to population and employment; tourism industry-specific data including tourism revenue, visitor activities, and accommodation stock; and big data sources such as TripAdvisor and AllTrails. The combination of these data sources provides a breadth of tourism-related variables. The subsequent analysis of these data draws upon various validated models, for example, tourism and hospitality employment density, territorial tourism pressure, and accommodation density. In addition to these statistical calculations, other data are utilized to further understand the context of these zones, for example, tourist services, attractions, and activities. The data were imported into ArcGIS, where the density of the different variables is visualized on maps. This study aims to provide an understanding of the geographical context of visitor economy development and tourist behavior in rural areas. The findings contribute to an understanding of the spatial dynamics of tourism within the region of Cumbria through the creation of thematized maps. Different zones of tourism industry clusters are identified, which include elements relating to attractions, enterprises, infrastructure, tourism employment, and economic impact. These maps visualize hot and cold spots relating to a variety of tourism contexts. It is believed that the strategy used to provide a visual overview of tourism development and demand in Cumbria could serve as a strategic tool for rural areas to better plan marketing opportunities and avoid overtourism. These findings can inform future sustainability policy and destination management strategies within these areas through an understanding of the processes behind the emergence of both hot and cold spots. It may mean that ‘attract and disperse’ needs to be reviewed as a strategic option. In other words, sector or zonal policies could be used for individual hot or cold areas, with transitional zones dependent upon local economic, social, and environmental factors.
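As an illustration of the density indicators named above, the sketch below computes territorial tourism pressure (visitors per km²), visitors per resident, and tourism employment density per zone, the sort of values that would then be joined to zone geometries in ArcGIS. All figures are invented placeholders, not Cumbria statistics.

```python
import pandas as pd

# Illustrative zone-level figures (not actual Cumbria statistics)
zones = pd.DataFrame({
    "zone": ["Zone A", "Zone B", "Zone C"],
    "visitors_per_year": [12_000_000, 3_500_000, 600_000],
    "area_km2": [540, 910, 1200],
    "residents": [42_000, 60_000, 18_000],
    "tourism_jobs": [9_500, 4_200, 900],
    "total_jobs": [21_000, 17_000, 5_500],
})

zones["territorial_pressure"] = zones["visitors_per_year"] / zones["area_km2"]   # visitors per km2
zones["visitors_per_resident"] = zones["visitors_per_year"] / zones["residents"]
zones["tourism_employment_density"] = zones["tourism_jobs"] / zones["total_jobs"]
print(zones[["zone", "territorial_pressure", "visitors_per_resident",
             "tourism_employment_density"]].round(2))
```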

Keywords: overtourism, rural tourism, sustainable tourism, tourism planning, tourism zones

Procedia PDF Downloads 74
149 Religious Government Interaction in Urban Settings

Authors: Rebecca Sager, Gary Adler, Damon Mayrl, Jonathan Cooley

Abstract:

The United States’ unique constitutional structure and religious roots have fostered the flourishing of local communities through the close interaction of church and state. Today, these local relationships play out in new circumstances, including increased religious diversity and jurisprudence that is more accommodating of church-state interaction. This project seeks to understand the meanings of church-state interaction among diverse religious leaders in a variety of local settings. Using data from interviews with over 200 religious leaders in six states in the US, we examine how religious groups interact with various non-elected and elected government officials. We interviewed local religious actors in eight communities that differ in location and religious homogeneity. These include a small city within a major metropolitan area, several religiously diverse cities in various areas across the country, a small college town with religious diversity set in a religiously homogenous rural area, and a small farming community with minimal religious diversity. We identified three types of religious actors in each of our geographic areas: congregations, religious non-profit organizations, and clergy coalitions. Given the well-known difficulties in identifying religious organizations, we used the following sources to construct a local population list from which to sample: the Association of Religion Data Archives, ProPublica’s Nonprofit Explorer, GuideStar, and the Internal Revenue Service Exempt Business Master File. Our sample of interviewees was stratified by three criteria: religious tradition (Christian vs. non-Christian), sectarian orientation (Mainline/Catholic vs. Evangelical Protestant), and organizational form (congregation vs. other). Each interview included the elicitation of local church-state interactions experienced by the organization and organizational members, the enumeration of information sources for navigating church-state interactions, and the personal and community background of the interviewee. We coded interviews to identify the cognitive schemas of “church” and “state”, the models of legitimate relations between the two, and the discretion rules for managing interaction and avoiding conflict. We also enumerate the arenas in which, and the issues for which, local state officials are engaged. In this paper, we focus on Korean religious groups and examine how their interactions differ from those of other congregations, including other immigrant congregations. These churches were particularly common in one large metropolitan area. We find that Korean churches are much more likely to be concerned about any governmental interactions and have fewer connections than non-Korean churches, leading to more disconnection from their communities. We argue that, due to their status as new immigrant churches without many community ties for many members and their location in a large city, Korean churches were particularly concerned about too much interaction with any type of government official, even ones that could be potentially helpful. While other immigrant churches were somewhat willing to work with government groups, such as Latino-based Catholic groups, Korean churches were the least likely to want to create these connections. Understanding these churches, and how immigrant church identity varies and creates different types of interaction, is crucial to understanding how church-state interaction can be more meaningful across space and place.

Keywords: religion, congregations, government, politics

Procedia PDF Downloads 88
148 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for ‘hydrogen homes’ (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, few studies to date have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type, and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from five distinct groups: (1) fuel-poor householders, (2) technology-engaged householders, (3) environmentally engaged householders, (4) technology- and environmentally engaged householders, and (5) a baseline group (n=~700) which excludes each of the smaller targeted groups (n=~350 each). This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and with demographic groups affected by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is tested for fit to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology- and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations of community benefits, appliance performance, and potential cost savings.
Based on these preliminary findings, policymakers should act with urgency to raise public awareness of hydrogen in alignment with energy security, fuel poverty, and net-zero agendas.
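The study reports a second-order multigroup CFA estimated in Mplus with a PLS approach; as a loose, simplified illustration of how a measurement-plus-structural model of acceptance dimensions can be specified in lavaan-style syntax, the sketch below fits a first-order model with the Python package semopy on simulated data. The indicator names, the package choice, and the simulated responses are all assumptions for illustration and do not reproduce the authors' model.

```python
import numpy as np
import pandas as pd
import semopy

# Simulated illustrative indicator data (the study uses survey responses, n ~ 2100)
rng = np.random.default_rng(0)
n = 500
att = rng.normal(size=n)
com = rng.normal(size=n)
df = pd.DataFrame({
    "att1": att + rng.normal(scale=0.5, size=n),
    "att2": att + rng.normal(scale=0.5, size=n),
    "com1": com + rng.normal(scale=0.5, size=n),
    "com2": com + rng.normal(scale=0.5, size=n),
    "support": 0.6 * att + 0.3 * com + rng.normal(scale=0.5, size=n),
})

# Lavaan-style measurement and structural model (hypothetical indicator names)
desc = """
attitudinal =~ att1 + att2
community   =~ com1 + com2
support ~ attitudinal + community
"""

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # factor loadings, path coefficients, p-values
```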

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114