Search results for: conventional pressure sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8374

214 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, avalanche simulation tools have gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies are drawn to the fluid-dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow-model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications, and with them further uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice. 
To address these questions, a survey was conducted among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations results from a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on how the simulations are used in practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 303
213 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)

Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara

Abstract:

Clinical outcomes collected over the past three decades suggest that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcome, the best possible description of radiation quality is of paramount importance, linking the physical dose to biological effects. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation and is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are well suited to particle tracking because they can be thinned down to tens of micrometres and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations. 
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information is joined for each event: the energy deposition in the TEPC and the corresponding track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out with 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
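The "Mean Chord Length" approximation mentioned above is, for a convex sensitive volume under uniform isotropic irradiation, commonly evaluated with Cauchy's formula MCL = 4V/S. A minimal sketch (the 2 um sphere is an illustrative geometry, not necessarily the paper's exact TEPC cavity):

```python
import math

def mean_chord_length(volume: float, surface: float) -> float:
    """Cauchy's formula for the mean chord length of a convex body
    under uniform isotropic irradiation: MCL = 4 V / S."""
    return 4.0 * volume / surface

# Sphere simulating a site of tissue with diameter d: MCL = 2 d / 3.
d = 2.0  # micrometres
r = d / 2.0
mcl = mean_chord_length((4.0 / 3.0) * math.pi * r**3, 4.0 * math.pi * r**2)
print(mcl)  # 2/3 * d ≈ 1.333 um
```

For a 2 um spherical site this gives roughly 1.33 um, which is the divisor conventionally used to turn measured energy deposition into lineal energy y.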

Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry

Procedia PDF Downloads 141
212 Information and Communication Technology Skills of Finnish Students in Particular by Gender

Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen

Abstract:

Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, with people facing an unprecedented amount of information daily. The tools to manage this information flow are ICT skills, which comprise both technical skills and the reflective skills needed to manage incoming information. Schools are therefore under constant pressure to revise their teaching. In the latest Programme for International Student Assessment (PISA), girls have been outperforming boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is part of the Finnish Government's Analysis, Assessment and Research Activities. This paper first examines gender differences in the ICT skills of Finnish upper comprehensive school students, and then explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n~3500), and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for ICT study programs. Students also filled in a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test. 
Statistical differences between genders were examined using a two-tailed independent samples t-test. Results of the first data collection from upper comprehensive schools show no statistically significant difference between genders in total ICT skill test scores (boys 10.24 and girls 10.64, the maximum being 36). Although there was no gender difference in total test scores, there are differences in the six categories mentioned above. Girls score better on school-related and social networking subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. This can perhaps partly be explained by the fact that the test was taken on computers, while the majority of students' ICT usage involves smartphones and tablets. Against this background, it is important to analyse the reasons for these differences further. In the context of the ongoing digitalization of everyday life, and especially of working life, the significant purpose of this analysis is to find out how to guarantee adequate ICT skills for all students.
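The gender comparison described above can be sketched as a pooled-variance, two-tailed independent-samples t-test; the scores below are invented for illustration, not the project's data:

```python
import math

def ttest_ind(a, b):
    """Two-tailed independent-samples t-test with pooled variance.
    Returns the t statistic and the degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

boys = [8, 12, 10, 9, 11, 10]    # hypothetical test scores, max 36
girls = [11, 12, 9, 10, 13, 11]
t, df = ttest_ind(boys, girls)
print(t, df)
```

In practice one would compare |t| against the critical value of the t distribution with the returned degrees of freedom to decide significance at the chosen level.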

Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education

Procedia PDF Downloads 111
211 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level and Prioritizing the Study Area of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. Most result from climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. From the moment that human lives are at risk and significant economic impact is recorded, however, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. Most floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on the streams have worsened them: stream beds have been built over with houses and hotels or converted into roads, causing flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flooding. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The Analytic Hierarchy Process, commonly called AHP, is a powerful yet simple method for making decisions. 
It is commonly used for project prioritization and selection. AHP captures strategic goals as a set of weighted criteria that are then used to score alternatives. Here, the method is used to provide a weight for each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, slope), these led to the final flood risk map. Finally, according to this map, the priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
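The AHP weighting step described above can be sketched as the principal-eigenvector computation on a pairwise comparison matrix, followed by Saaty's consistency check; the criteria subset and the judgments below are hypothetical, not the study's actual matrix:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for three
# of the criteria listed above: slope, distance from river, land use.
A = np.array([[1.0,     3.0,     5.0],
              [1 / 3.0, 1.0,     3.0],
              [1 / 5.0, 1 / 3.0, 1.0]])

# Criterion weights = principal eigenvector of A, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
idx = np.argmax(vals.real)
lambda_max = vals.real[idx]
w = vecs[:, idx].real
w = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)
ri = 0.58                 # Saaty's random index for n = 3
cr = ci / ri              # CR < 0.1 means the judgments are acceptably consistent
print(w, cr)
```

The resulting weights (here roughly 0.64, 0.26, 0.10) would then multiply the reclassified raster layers before summation into the susceptibility map.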

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 92
210 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is built on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity while accounting for fluid-solid mass transfer, and an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by the LSM: the solid-liquid interface is updated by solving the level set equation and re-initializing the level set to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. A Gauss linear upwind scheme is used to solve the level set equation, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. A sensitivity analysis is conducted over various Damköhler (DaII) and Peclet (Pe) numbers. We categorize the Fe(III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. 
Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII>1, precipitation occurs mainly on the solid surface in the upstream direction. When Pe>1, Fe(II) is transported deep into the pores and precipitates inside them. When Pe<1, Fe(III) precipitation occurs mainly on the solid surface in the upstream direction, and it readily precipitates inside small pore structures. The porosity-permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe(II) dissolution, transport, and Fe(III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs while avoiding clogging and wellbore pollution. Understanding Fe(III) precipitation and Fe(II) release and transport behaviour supports highly efficient hydraulic fracturing projects.
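The smoothed Heaviside function mentioned above is commonly taken as the sine-smoothed form over a narrow band of half-width eps around the zero level set; a minimal sketch (this standard form is an assumption here, the paper's exact expression may differ):

```python
import math

def smoothed_heaviside(phi: float, eps: float) -> float:
    """Sine-smoothed Heaviside often used with level-set methods to blur
    the solid-liquid interface over a band of half-width eps.
    phi is the signed distance to the interface (negative inside the solid)."""
    if phi < -eps:
        return 0.0
    if phi > eps:
        return 1.0
    return 0.5 * (1.0 + phi / eps + math.sin(math.pi * phi / eps) / math.pi)

print(smoothed_heaviside(-2.0, 1.0))  # 0.0: fully solid
print(smoothed_heaviside(0.0, 1.0))   # 0.5: on the interface
```

Fields such as permeability or reactive surface area are then blended between their solid and fluid values using this indicator, which is what lets the DBS equations run on a single fixed grid.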

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 146
209 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique

Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich

Abstract:

In annular gas-liquid flow, liquid flows as a film along the pipe walls, sheared by a high-velocity gas stream. The film surface is covered by large-scale disturbance waves, which affect pressure drop and heat transfer in the system and are necessary for the entrainment of liquid droplets from the film surface into the core of the gas stream. Disturbance waves are highly complex, and their properties are affected by numerous parameters. One such aspect is flow development, i.e., the change of flow properties with distance from the inlet. In the present work, this question is studied using the brightness-based laser-induced fluorescence (BBLIF) technique. This method enables simultaneous measurements of local film thickness at a large number of points with high sampling frequency. In the present experiments, the first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with a temporal resolution of 10 kHz and a spatial resolution of 0.5 mm. Thus, the spatiotemporal evolution of the film surface can be investigated, including scenarios of formation, acceleration, and coalescence of disturbance waves. The behaviour of the disturbance waves' velocity depending on the phase flow rates and downstream distance was investigated. Besides measuring the wave properties, the goal of the work was to investigate the interrelation between disturbance wave properties and integral characteristics of the flow, such as interfacial shear stress and the flow rate of the dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of shear stress, decays linearly with downstream distance. This lack of acceleration, which may even turn into deceleration, is related to liquid entrainment. The flow rate of the dispersed phase grows linearly with downstream distance. During entrainment events, liquid is extracted directly from the disturbance waves, reducing their mass, their area of interaction with the gas shear and, hence, their velocity. 
The passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm that identifies the characteristic lines of individual disturbance waves. Scenarios of coalescence of individual disturbance waves were identified. The transition from the initial high-frequency Kelvin-Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves of lower frequency was studied near the inlet using a 3D realisation of the BBLIF method in the same cylindrical channel and in a rectangular duct with a cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets. Coalescence of these wavelets leads to the formation of quasi-two-dimensional disturbance waves. Using cross-correlation analysis, the loss and restoration of two-dimensionality of the film surface with downstream distance were studied quantitatively. It was shown that all these processes occur closer to the inlet at higher gas velocities.
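The cross-correlation analysis mentioned above can be illustrated by estimating a wave velocity from two film-thickness records taken a known distance apart: the lag at which their cross-correlation peaks gives the transit time. The signals, probe spacing, and wave shape below are synthetic stand-ins:

```python
import numpy as np

def wave_velocity(h_up, h_down, dx, fs):
    """Estimate wave velocity from two film-thickness time series
    recorded dx metres apart at sampling frequency fs (Hz), using the
    lag of the cross-correlation maximum."""
    a = h_up - np.mean(h_up)
    b = h_down - np.mean(h_down)
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)  # samples by which b trails a
    return dx * fs / lag

fs, dx = 10_000.0, 0.05                 # 10 kHz sampling, probes 5 cm apart
t = np.arange(0.0, 0.1, 1.0 / fs)
wave = np.exp(-((t - 0.03) / 0.002) ** 2)  # one disturbance wave passing
h_up = wave
h_down = np.roll(wave, 25)              # arrives 25 samples later downstream
print(wave_velocity(h_up, h_down, dx, fs))  # 0.05 * 10000 / 25 = 20 m/s
```

The same idea, applied to transverse rather than streamwise signal pairs, is how the degree of two-dimensionality of the film surface can be quantified.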

Keywords: annular flow, disturbance waves, entrainment, flow development

Procedia PDF Downloads 230
208 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions with conventional top-down lithography, molecular electronics promises to help maintain the exponential increase in component densities by using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. SAMs are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, in which a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well defined, and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations attributed to molecular vibrations. Molecules are more isolated than semiconductor dots and so have a discrete phonon spectrum. 
When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum alignment of the molecular orbitals to the Fermi energy in the leads is essential. To explore this, a drop of ionic liquid is placed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of the Fermi energy to the HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1-10 nm) of the molecules and the good flexibility of the SAM allow the scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors.
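Behind the diamonds and staircases described above is the condition that the single-electron charging energy E_C = e²/2C must exceed the thermal energy k_BT, which for molecule-sized dots holds even at room temperature. A back-of-the-envelope sketch with an illustrative capacitance (not a value from the paper):

```python
# Coulomb-blockade observability check: E_C = e^2 / (2 C) >> k_B T.
E = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23     # Boltzmann constant, J/K

C = 1e-19             # hypothetical dot self-capacitance, ~0.1 aF
E_c = E**2 / (2.0 * C)  # charging energy in joules (~0.8 eV here)

for T in (4.2, 77.0, 300.0):
    # A ratio well above 1 means thermal smearing does not wash out
    # the blockade at that temperature.
    print(T, E_c / (KB * T))
```

The same tiny capacitance is why molecular junctions can show clean diamonds where larger lithographic dots would need cryogenic temperatures.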

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 35
207 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media

Authors: Sharmila Parmanand

Abstract:

This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes in prostitution: first, raid-and-rescue operations conducted by law enforcement; and second, prostitution around military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources addressing these themes are explored through content analysis. Conclusions are drawn from the specific terms commonly used to refer to sex workers; how sex workers are seen as performing their cultural roles as mothers and wives; how sex work is depicted; the associations made between sex work and public health; the representations of clients, managers, and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups; and which actors are presumed to be issue experts. Images of prostitution used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing-world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality and as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. 
Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.

Keywords: discourse analysis, news media, sex work, trafficking

Procedia PDF Downloads 359
206 Prosthetically Oriented Approach for Determination of Fixture Position for Facial Prosthesis Retention in Cases with Atypical and Combined Facial Defects

Authors: K. A. Veselova, N. V. Gromova, I. N. Antonova, I. N. Kalakutskii

Abstract:

There are many diseases and incidents that may result in facial defects and deformities: cancer, trauma, burns, congenital anomalies, and autoimmune diseases. In some cases, a patient may acquire an atypically extensive facial defect involving more than one anatomical region or, by contrast, an atypically small defect (e.g. a partial auricular defect). Anaplastology gives us the opportunity to help patients with facial disfigurement in cases where plastic surgery is contraindicated. The use of implant retention for a facial prosthesis is strongly recommended because it improves both the aesthetic and functional results and makes wearing the prosthesis more comfortable. A prosthetically oriented fixture position is extremely important for the aesthetic and functional long-term result; however, the optimal site for fixture placement is not clear in cases with an atypical configuration of the facial defect. The objective of this report is to demonstrate the challenges in fixture position determination we have faced and to offer a solution. In this report, four cases of implant-supported facial prostheses are described. Extra-oral implants four millimetres in length were used in all cases. The decision regarding the number of surgical stages was based on the anamnesis of the disease. The facial prostheses were manufactured according to the conventional technique. Clinical and technological difficulties and mistakes are described, and a prosthetically oriented approach for determination of the fixture position is demonstrated. In a case with an atypically large combined orbital and nasal defect resulting from an arteriovenous malformation, the correct positioning of the artificial eye was impossible due to the wrong position of the fixture (with suprastructure) located in the medial aspect of the supraorbital rim. The suprastructure was unfixed, and this fixture was not used for retention, in order to achieve appropriate artificial eye placement and a better aesthetic result. 
In another case, with a small partial auricular defect (only the helix and antihelix were absent) caused by squamous cell carcinoma T1N0M0, a surgical template was used to avoid these difficulties. To achieve a prosthetically oriented fixture position in a case of an extremely small defect, the template was made on a preliminary cast using the vacuum thermoforming method. Two radiopaque markers were incorporated into the template at the positions preferable for fixture placement, taking into account the future prosthesis configuration. The template was placed on the remaining ear, and cone-beam CT was performed to ensure that the amount of bone was sufficient for implant insertion in the preferred position. Before the surgery, the radiopaque markers were removed, and the template was perforated for the guide drill. The fabrication of implant-retained facial prostheses gives us the opportunity to improve aesthetics, retention, and patients’ quality of life. But every inaccuracy in planning leads to challenges at the surgical and prosthetic stages. Moreover, in cases with atypically small or extended facial defects, a prosthetically oriented approach for determination of the fixture position is strongly required. The approach, including surgical template fabrication, is an effective, easy, and cheap way to avoid mistakes and unpredictable results.

Keywords: anaplastology, facial prosthesis, implant-retained facial prosthesis, maxillofacial prosthesis

Procedia PDF Downloads 79
205 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” derived from other available spatial products. The model is then generalized over grids where the soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increasing availability of spatial covariates, national and continental DSM initiatives are continuously multiplying. 
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising, providing tools to improve and monitor soil quality in individual countries, the EU, and the world.
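As an aside for readers unfamiliar with the DSM workflow sketched above, the calibrate-then-generalize loop can be illustrated in a few lines of Python. Everything here is synthetic and hypothetical: the co-variates, the linear model, and the bootstrap uncertainty estimate are illustrative stand-ins for the ML models and uncertainty products of actual national initiatives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point observations: a soil property (e.g. organic carbon) paired
# with co-variates sampled from spatial products (elevation, NDVI, rainfall).
X_obs = rng.uniform(0.0, 1.0, size=(300, 3))
y_obs = 2.0 * X_obs[:, 0] - X_obs[:, 1] + rng.normal(0.0, 0.1, 300)

def fit_linear(X, y):
    """Calibrate a linear model (with intercept) by least squares."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Generalize the calibrated model onto a grid where soil is unobserved.
X_grid = rng.uniform(0.0, 1.0, size=(1000, 3))
pred = predict(fit_linear(X_obs, y_obs), X_grid)

# Bootstrap the calibration data to attach an uncertainty map to the grid.
boot = np.stack([
    predict(fit_linear(X_obs[i], y_obs[i]), X_grid)
    for i in (rng.integers(0, 300, 300) for _ in range(100))
])
uncertainty = boot.std(axis=0)
```

Each grid cell thus carries both a prediction and a spread, which is exactly the pairing of property grids with uncertainty estimates that the review calls for.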

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 164
204 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector

Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Global warming mitigation is one of the main challenges of this century, requiring the net balance of greenhouse gas (GHG) emissions to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions, but also to its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production of two ceramic pigments by microwave heating at high temperatures (above 1200 °C). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability. 
The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulation appears to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
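The automatic impedance-matching step lends itself to a short sketch. The controller below searches for the plunger position that maximizes electromagnetic efficiency, written here in Python rather than MATLAB and driven by a purely hypothetical Gaussian efficiency model (the peak position, width, and search bounds are invented for illustration, not taken from the COMSOL cavity).

```python
import math

# Hypothetical cavity response: electromagnetic efficiency peaks when the
# plunger position tunes the cavity to resonance (values illustrative only).
def efficiency(pos_mm):
    return 0.95 * math.exp(-((pos_mm - 42.0) / 6.0) ** 2)

def tune_plunger(lo=0.0, hi=100.0, tol=0.05):
    """Golden-section search for the plunger position maximizing efficiency."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if efficiency(c) < efficiency(d):
            a = c          # maximum lies in [c, b]
        else:
            b = d          # maximum lies in [a, d]
    return (a + b) / 2

best = tune_plunger()
```

In the actual workflow each efficiency evaluation would be a COMSOL solve, so a derivative-free search of this kind keeps the number of expensive simulations small.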

Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation

Procedia PDF Downloads 116
203 Genetically Engineered Crops: Solution for Biotic and Abiotic Stresses in Crop Production

Authors: Deepak Loura

Abstract:

Production and productivity of several crops in the country continue to be adversely affected by biotic (e.g., insect pests and diseases) and abiotic (e.g., water, temperature, and salinity) stresses. Over-dependence on pesticides and other chemicals is economically non-viable for the resource-poor farmers of our country. Further, pesticides can potentially affect human and environmental safety. While traditional breeding techniques and proper management strategies continue to play a vital role in crop improvement, we need to judiciously use biotechnology approaches for the development of genetically modified crops addressing critical problems in the improvement of crop plants for sustainable agriculture. Modern biotechnology can help to increase crop production, reduce farming costs, and improve food quality and the safety of the environment. Genetic engineering is a technology that allows plant breeders to produce plants with new gene combinations by genetic transformation of crop plants for the improvement of agronomic traits. Advances in recombinant DNA technology have made it possible to move genes between widely divergent species to develop genetically modified or genetically engineered plants. Plant genetic engineering provides the means to harness useful genes and alleles from indigenous microorganisms to enrich the gene pool for developing genetically modified (GM) crops that have inbuilt (inherent) resistance to insect pests, diseases, and abiotic stresses. Plant biotechnology has made significant contributions in the past 20 years to the development of genetically engineered or genetically modified crops with multiple benefits. A variety of traits have been introduced into genetically engineered crops, including (i) herbicide resistance, (ii) pest resistance, (iii) viral resistance, (iv) slow ripening of fruits and vegetables, (v) fungal and bacterial resistance, (vi) abiotic stress tolerance (drought, salinity, temperature, flooding, etc.), 
(vii) quality improvement (starch, protein, and oil), (viii) value addition (vitamins, micro- and macro-elements), (ix) pharmaceutical and therapeutic proteins, and (x) edible vaccines. Multiple genes in transgenic crops can be useful in developing durable disease resistance and a broad insect-control spectrum, and could lead to potential cost-saving advantages for farmers. The development of transgenics to produce high-value pharmaceuticals and edible vaccines is also in progress, though it requires much more research and development work before commercially viable products become available. In addition, marker-assisted selection (MAS) is now routinely used to enhance the speed and precision of plant breeding. Newer technologies need to be developed and deployed for enhancing and sustaining agricultural productivity. There is a need to optimize the use of biotechnology in conjunction with conventional technologies to achieve higher productivity with fewer resources. Therefore, genetic modification/engineering of crop plants assumes greater importance, which demands the development and adoption of newer technology for the genetic improvement of crops and for increasing crop productivity.

Keywords: biotechnology, plant genetic engineering, genetically modified, biotic, abiotic, disease resistance

Procedia PDF Downloads 48
202 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique

Authors: Bhupendra G. Prajapati, Alpesh R. Patel

Abstract:

The aim of this research work is to develop a sustained-release multiparticulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. With the objective of utilizing the active enantiomer at a minimal dose and administration frequency, an extended-release multiparticulate dosage form was explored for better patient compliance. Drug-loaded and sustained-release-coated pellets were prepared on the fluidized-bed coating principle in a Wurster coater. Microcrystalline cellulose was selected as the core pellet, povidone as binder, and talc as anti-tacking agent during drug loading, while Kollicoat SR 30D as sustained-release polymer, triethyl citrate as plasticizer, and micronized talc as anti-adherent were used in the sustained-release coating. A binder optimization trial in drug loading showed that process efficiency increased with binder concentration. Povidone K30 concentrations of 5 and 7.5% w/w with respect to the drug amount gave more than 90% process efficiency, while a higher amount of rejects (agglomerates) was observed for the drug-layering trial batch taken with 7.5% binder. For drug loading, the optimum povidone concentration was therefore selected as 5% of the drug substance quantity, since this trial had good process feasibility and good adhesion of the drug onto the MCC pellets. A talc concentration of 2% w/w with respect to the total drug-layering solid mass showed better anti-tacking properties, reducing static charge and agglomerate formation during the spraying process. Optimized drug-loaded pellets were coated for sustained release at 16 to 28% w/w coating weight gain, and the results suggested that a 22% w/w coating weight gain is necessary to obtain the desired drug release profile. 
Three critical process parameters of Wurster coating for sustained release were further statistically optimized for the desired quality target product profile attributes, such as agglomerate formation, process efficiency, and drug release profile, using a central composite design (CCD) in Minitab software. The results show that the derived design space, consisting of 1.0 to 1.2 bar atomization air pressure, 7.8 to 10.0 g/min spray rate, and 29-34°C product bed temperature, gave the pre-defined drug product quality attributes. Scanning electron microscopy results also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, which are ideal properties for a reproducible drug release profile. The study also confirmed that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with a common vehicle, a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multiparticulates were successfully developed for dexketoprofen trometamol, which may improve the acceptability and palatability of the dosage form for better patient compliance.
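For readers unfamiliar with central composite designs, the run layout behind such an optimization can be sketched as follows. The factor names and ranges are taken from the design space quoted above, but the face-centered variant (alpha = 1) and the single center point are assumptions; the actual Minitab design used in the study may differ.

```python
import itertools

# Face-centered central composite design (alpha = 1) for the three Wurster
# parameters named in the abstract; ranges follow the derived design space.
factors = {
    "atomization_air_bar": (1.0, 1.2),
    "spray_rate_g_min": (7.8, 10.0),
    "bed_temp_C": (29.0, 34.0),
}

def ccd_coded(k):
    corners = list(itertools.product([-1, 1], repeat=k))   # 2^k factorial points
    axial = [tuple(a if i == j else 0 for j in range(k))   # 2k star points
             for i in range(k) for a in (-1, 1)]
    return corners + axial + [(0,) * k]                    # plus a center point

def decode(point):
    """Map coded levels (-1, 0, +1) onto the real factor ranges."""
    return {name: lo + (c + 1) / 2 * (hi - lo)
            for c, (name, (lo, hi)) in zip(point, factors.items())}

runs = [decode(p) for p in ccd_coded(3)]   # 8 factorial + 6 axial + 1 center
```

Fitting a quadratic response-surface model to the responses measured at these 15 runs is what lets a design space, rather than a single optimum, be reported.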

Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design

Procedia PDF Downloads 112
201 qPCR Method for Detection of Halal Food Adulteration

Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik

Abstract:

Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has been practiced on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of the meat products - color, aroma, flavor, consistency, and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly due to their wide availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis. 
qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses a problem related to the fact that most molecular biological work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsifying meat products by adding components derived from pigs is all the greater in that it is not just an economic issue but, above all, a religious and social one. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
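The calibration-curve quantification mentioned above follows a standard relationship: the quantification cycle (Cq) is linear in log10 of the starting copy number, and a slope near -3.32 indicates roughly 100% amplification efficiency. The sketch below uses invented standard-curve values, not data from this study.

```python
import numpy as np

# Hypothetical standard curve: Cq measured for known copy numbers of the
# target (e.g. a pork-specific genomic sequence) in a dilution series.
copies_std = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cq_std = np.array([33.1, 29.8, 26.5, 23.2, 19.9])

# Fit Cq = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies_std), cq_std, 1)

# Amplification efficiency: 1.0 means perfect doubling each cycle.
efficiency = 10 ** (-1 / slope) - 1

def quantify(cq):
    """Absolute starting copy number of an unknown sample from its Cq."""
    return 10 ** ((cq - intercept) / slope)

copies = quantify(27.0)
```

Comparing the quantified pork copy number against the total-meat signal is what turns detection into an adulteration percentage; the abstract's point is that a short, well-chosen genomic target keeps this curve valid even in processed, fragmented samples.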

Keywords: food fraud, halal food, pork, qPCR

Procedia PDF Downloads 226
200 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details, to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which leads the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduced the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer, with a specified soft margin. In effect, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We penalize those predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge false prediction tensors, meaning we assign more weight to those classes that lie close to each other (namely, “hard labels to learn”). 
By doing so, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight-decay method to drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement over the previous first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset, providing top performance compared to other networks, which require much larger training datasets.
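A fixed-margin simplification of the soft-margin idea described above can be written out directly: reducing the target-class logit by a margin before the SoftMax makes correct predictions cost more, delaying saturation at the gold labels. This sketch omits the paper's dynamic tensor-shape mechanism, and the margin value is an arbitrary placeholder.

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """Additive-margin cross-entropy: the target logit is reduced by `margin`,
    so the true class must win by at least that margin to reach low loss."""
    z = logits.astype(float)                      # copy; keep inputs intact
    z[np.arange(len(labels)), labels] -= margin   # penalize the target logit
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[4.0, 1.0, 0.5],
                   [0.2, 3.0, 0.1]])
labels = np.array([0, 1])

plain = soft_margin_softmax_loss(logits, labels, margin=0.0)
margined = soft_margin_softmax_loss(logits, labels, margin=0.35)
# The margined loss is strictly larger on the same predictions, which keeps
# gradients alive longer and pushes embeddings of different classes apart.
```

The "dynamic" part of the proposed method would then vary this margin per batch or per class difficulty rather than fixing it, as sketched here.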

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 53
199 Prevalence and Molecular Characterization of Extended-Spectrum–β Lactamase and Carbapenemase-Producing Enterobacterales from Tunisian Seafood

Authors: Mehdi Soula, Yosra Mani, Estelle Saras, Antoine Drapeau, Raoudha Grami, Mahjoub Aouni, Jean-Yves Madec, Marisa Haenni, Wejdene Mansour

Abstract:

Multi-resistance to antibiotics in gram-negative bacilli, and particularly in Enterobacteriaceae, has become frequent in hospitals in Tunisia. However, data on antibiotic-resistant bacteria in aquatic products are scarce. The aims of this study are to estimate the proportion of ESBL- and carbapenemase-producing Enterobacterales in seafood (clams and fish) in Tunisia and to molecularly characterize the collected isolates. Two types of seafood were sampled in unrelated markets in four different regions of Tunisia (641 pieces of farmed fish and 1075 Mediterranean clams divided into 215 pools, each pool containing 5 pieces). Once purchased, all samples were incubated in tubes containing peptone salt broth for 24 to 48 h at 37°C. After incubation, overnight cultures were isolated on selective MacConkey agar plates supplemented with either imipenem or cefotaxime, identified using API20E test strips (bioMérieux, Marcy-l’Étoile, France), and confirmed by MALDI-TOF MS. Antimicrobial susceptibility was determined by the disk diffusion method on Mueller-Hinton agar plates, and the results were interpreted according to CA-SFM 2021. ESBL-producing Enterobacterales were detected using the Double Disc Synergy Test (DDST). Carbapenem resistance was detected using an ertapenem disk and confirmed using the ROSCO KPC/MBL and OXA-48 Confirm Kit (ROSCO Diagnostica, Taastrup, Denmark). DNA was extracted using a NucleoSpin Microbial DNA extraction kit (Macherey-Nagel, Hoerdt, France), according to the manufacturer’s instructions. Resistance genes were determined using the CGE online tools. The replicon content and plasmid formula were identified from the WGS data using PlasmidFinder 2.0.1 and pMLST 2.0. From farmed fish, nine ESBL-producing strains (9/641, 1.4%) were isolated, identified as E. coli (n=6) and K. pneumoniae (n=3). Among the 215 pools of 5 clams analyzed, 18 ESBL-producing isolates were identified, including 14 E. coli and 4 K. pneumoniae. 
This corresponds to a low isolation rate of ESBL-producing Enterobacterales of 1.6% (18/1075) in clam pools. In fish, the ESBL phenotype was due to the presence of the blaCTX-M-15 gene in all nine isolates, but no carbapenemase gene was identified. In clams, the predominant ESBL gene was blaCTX-M-1 (n=6/18). Carbapenemase genes (blaNDM-1, blaOXA-48) were detected in only 3 K. pneumoniae isolates. Replicon typing of the strains carrying the ESBL and carbapenemase genes revealed that the major plasmid type carrying ESBL genes was IncF (42.3%, n=11/26). In all, our results suggest that seafood can be a reservoir of multi-drug-resistant bacteria, most probably of human origin but also shaped by antibiotic selection pressure. Our findings raise concerns that seafood bought for consumption may serve as a potential reservoir of AMR genes and pose a serious threat to public health.
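Since the abstract reports raw isolation rates (9/641 in fish, 18/1075 in clams), interval estimates convey how precise those proportions are. The sketch below computes 95% Wilson score intervals; the intervals themselves are our illustration, not figures reported by the authors.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for an observed proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

fish = wilson_ci(9, 641)      # ESBL producers among farmed fish
clams = wilson_ci(18, 1075)   # ESBL producers among individual clams
```

The Wilson interval behaves better than the naive normal approximation at the small proportions seen here, which is why it is a common choice for prevalence data.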

Keywords: ESBL, carbapenemase, Enterobacterales, Tunisian seafood

Procedia PDF Downloads 80
198 International Trade, Manufacturing and Employment: The First Two Decades of South African Democracy

Authors: Phillip F. Blaauw, Anna M. Pretorius

Abstract:

South Africa re-entered the international economy in the early 1990s, after Apartheid, at a time when globalisation was gathering momentum. Globalisation led to a more open economy, increased export volumes, and a changed export mix. Manufactured goods gained ground relative to mining products. After 21 years of democracy, South African researchers and policymakers need to evaluate the impact of international trade on the level of employment and compensation of employees in the South African manufacturing industry. This is important given the consistently high levels of unemployment in South Africa. This paper has this evaluation as its aim. Two complementary approaches are utilised. The 27 subdivisions of the South African manufacturing industry are classified according to capital/labour ratios. Trends in employment levels and employee compensation for these categories are then identified by comparing levels in 1995 to those in 2014. The supplementary empirical approach uses cross-sectional and panel data regressions for the same period. The aim of the regression analysis is to explain the observed changes in employment and employee compensation levels between 1995 and 2014. The first part of the empirical approach revealed that over the 20-year period the intermediate capital intensive, labour intensive, and ultra-labour intensive manufacturing industries all showed massive declines in overall employment. Only three of the 19 industries in these classifications showed marginal overall employment gains. The only meaningful gains were recorded in three of the eight capital intensive manufacturing industries. The overall performance of the South African manufacturing industry is therefore dismal at best. This scenario plays itself out for the skilled section of the intermediate capital intensive, labour intensive, and ultra-labour intensive manufacturing industries as well. 
Eighteen of the 19 industries displayed declines even in the skilled section of the labour force. The formal regression analysis supplements the above results. Real production growth is a statistically significant (95 per cent confidence level) explanatory variable of the overall employment level for the period under consideration, albeit with a small positive coefficient. The variables with the most significant negative relationship with changes in overall employment were the dummy variables for intermediate capital intensive and labour intensive manufacturing goods. Disaggregating overall changes in employment further in terms of skill levels revealed that skilled employment in particular responded negatively to increases in the ratio between imported and local inputs for manufacturing. The dummy variable for the labour intensive sectors remained negative and statistically significant, indicating that the labour intensive sectors of South African manufacturing remain vulnerable to the loss of employment opportunities. Whereas the first period (1995 to 2001) after the opening of the South African economy brought positive changes for skilled employment, continued increases in imported inputs displaced some of the skilled labour as well, putting further pressure on a South African economy with already high and persistent unemployment levels. Given the downturn in the world commodity cycle and a stagnant local manufacturing sector, the challenge for policymakers has become even more pronounced after South Africa’s political coming of age.
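The sign pattern reported above (a small positive production-growth coefficient, with negative import-ratio and labour-intensity effects) can be illustrated with an ordinary-least-squares fit on synthetic data. All numbers in this sketch (sample size, effect sizes, noise) are invented; it mirrors only the structure of the regressions, not their data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 27 * 2  # e.g. 27 manufacturing subdivisions observed in 1995 and 2014

# Synthetic stand-ins for the regressors discussed in the abstract.
prod_growth = rng.normal(2.0, 1.0, n)     # real production growth (%)
import_ratio = rng.normal(0.4, 0.1, n)    # imported/local input ratio
labour_dummy = rng.integers(0, 2, n)      # labour-intensive sector dummy

# Data-generating process mirroring the reported signs: small positive
# growth effect, negative import-ratio and labour-intensity effects.
emp_change = (0.3 * prod_growth - 5.0 * import_ratio
              - 2.0 * labour_dummy + rng.normal(0.0, 0.5, n))

X = np.column_stack([np.ones(n), prod_growth, import_ratio, labour_dummy])
beta, *_ = np.linalg.lstsq(X, emp_change, rcond=None)
# beta recovers [constant, +growth, -import ratio, -labour dummy]
```

In the actual paper the panel structure and significance tests matter as much as the point estimates, but the coefficient signs are the substance of the argument.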

Keywords: capital/labour ratios, employment, employee compensation, manufacturing

Procedia PDF Downloads 192
197 Colocalization Analysis to Understand Yttrium Uptake in Saxifraga paniculata Using Complementary Imaging Technics

Authors: Till Fehlauer, Blanche Collin, Bernard Angeletti, Andrea Somogyi, Claire Lallemand, Perrine Chaurand, Cédric Dentant, Clement Levard, Jerome Rose

Abstract:

Over the last decades, yttrium (Y) has gained importance in high-tech applications. It is an essential part of alloys and compounds used for lasers, displays, or cell phones, for example. Due to its chemical similarities with the lanthanides, Y is often considered a rare earth element (REE). Despite their increased usage, the environmental behavior of REEs remains poorly understood. Especially regarding their interactions with plants, many uncertainties exist. On the one hand, Y is known to have a negative effect on root development and germination, but on the other hand, it appears to promote plant growth at low concentrations. In order to understand these phenomena, precise knowledge is necessary of how Y is absorbed by the plant and how it is handled once inside the organism. Contradictory studies exist, some stating that due to a similar ionic radius, Y and the other REEs might be absorbed through Ca²⁺ channels, while others suspect that Y shares a pathway with Al³⁺. In this study, laser ablation coupled ICP-MS and synchrotron-based micro-X-ray fluorescence (µXRF, beamline Nanoscopium, SOLEIL, France) have been used to localize Y within the plant tissue and identify associated elements. The plant used in this study is Saxifraga paniculata, a rugged alpine plant that has shown an affinity for Y in previous studies (in prep.). Furthermore, Saxifraga paniculata performs guttation, which means that it possesses sap-secreting openings on the leaf surface that serve to regulate root pressure. These so-called hydathodes could provide special insights into elemental transport in plants. The plants were grown on Y-doped soil (500 mg/kg DW) for four months. The results showed that Y was mainly concentrated in the roots of Saxifraga paniculata (260 ± 85 mg/kg), and only a small amount was translocated to the leaves (10 ± 7.8 mg/kg). 
µXRF analysis indicated that within the root transects, the majority of Y remained in the epidermis and hardly penetrated the stele. Laser ablation coupled ICP-MS confirmed this finding and showed a positive correlation in the roots between Y, Fe, Al, and, to a lesser extent, Ca. In the stem transect, Y was mainly detected in a hotspot of approximately 40 µm in diameter situated in the endodermis area. Within the stem, and especially in the hotspot, Y was highly colocalized with Al and Fe. Similar-sized Y hotspots have been detected in/on the leaves. All of them were strongly colocalized with Al and Fe, except for those situated within the hydathodes, which showed no colocalization with any of the measured elements. Accordingly, a relation between Y and Ca during root uptake remains possible, whereas a correlation with Fe and Al appears to be dominant in the aerial parts, suggesting common storage compartments, the formation of complexes, or a shared pathway during translocation.
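Pixel-wise Pearson correlation is one simple way to quantify the colocalization described above. The sketch below builds hypothetical 64x64 intensity maps in which Y and Al share a hotspot while Ca varies independently; the maps are synthetic, not µXRF data from this study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical elemental intensity maps: a shared Gaussian "hotspot" makes
# the Y and Al channels colocalize, while the Ca channel is independent.
yy, xx = np.mgrid[0:64, 0:64]
hotspot = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0))
y_map = hotspot + 0.05 * rng.random((64, 64))
al_map = 0.8 * hotspot + 0.05 * rng.random((64, 64))
ca_map = rng.random((64, 64))

def pearson_coloc(a, b):
    """Pixel-wise Pearson coefficient between two elemental maps."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

r_y_al = pearson_coloc(y_map, al_map)   # strong colocalization expected
r_y_ca = pearson_coloc(y_map, ca_map)   # near zero expected
```

In practice, colocalization of µXRF or LA-ICP-MS maps also requires registering the maps to a common grid and masking the tissue area, but the correlation step itself is this simple.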

Keywords: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), phytoaccumulation, rare earth elements, Saxifraga paniculata, synchrotron-based micro-X-ray fluorescence, yttrium

Procedia PDF Downloads 127
196 Exploring 3-D Virtual Art Spaces: Engaging Student Communities Through Feedback and Exhibitions

Authors: Zena Tredinnick-Kirby, Anna Divinsky, Brendan Berthold, Nicole Cingolani

Abstract:

Faculty members from The Pennsylvania State University, Zena Tredinnick-Kirby, Ph.D., and Anna Divinsky are at the forefront of an innovative educational approach to improving access in asynchronous online art courses. Their pioneering work weaves virtual reality (VR) technologies into course design to construct a more equitable educational experience for students by transforming their learning and engagement. The significance of their study lies in the need to bridge the digital divide in online art courses, making them more inclusive and interactive for all distance learners. In an era where conventional classroom settings are no longer the sole means of instruction, Tredinnick-Kirby and Divinsky harness the power of instructional technologies to break down geographical barriers by incorporating an interactive VR experience that facilitates community building within an online environment, transcending physical constraints. The methodology adopted by Tredinnick-Kirby and Divinsky is centered around integrating 3D virtual spaces into their art courses. Spatial.io, a virtual world platform, enables students to develop digital avatars and engage in virtual art museums through a free browser-based program or an Oculus headset, where they can interact with other visitors and critique each other’s artwork. The goal is not only to provide students with an engaging and immersive learning experience but also to equip them with a more profound understanding of the language of art criticism and technology. Furthermore, the study aims to cultivate critical thinking skills among students and foster a collaborative spirit. By leveraging cutting-edge VR technology, students are encouraged to explore the possibilities of their field, experimenting with innovative tools and techniques. This approach not only enriches their learning experience but also prepares them for a dynamic and ever-evolving art landscape in technology and education. 
One of the fundamental objectives of Tredinnick-Kirby and Divinsky is to remodel how feedback is derived through peer-to-peer art critique. Through the inclusion of 3D virtual spaces in the curriculum, students now have the opportunity to install their final artwork in a virtual gallery space and incorporate peer feedback, opening the door to a collaborative and interactive exhibition process. Students can provide constructive suggestions, engage in discussions, and integrate peer commentary into the development of their ideas and praxis. This approach not only accelerates the learning process but also promotes a sense of community and growth. In summary, the study conducted by Penn State faculty members Zena Tredinnick-Kirby and Anna Divinsky represents an innovative use of technology in their courses. By incorporating 3D virtual spaces, they are enriching the learners' experience. Through this inventive pedagogical technique, they nurture critical thinking, collaboration, and the practical application of cutting-edge technology in art. This research holds great promise for the future of online art education, transforming it into a dynamic, inclusive, and interactive experience that transcends the confines of distance learning.

Keywords: Art, community building, distance learning, virtual reality

Procedia PDF Downloads 46
195 Investigation of Linezolid, 127I-Linezolid and 131I-Linezolid Effects on Slime Layer of Staphylococcus with Nuclear Methods

Authors: Hasan Demiroğlu, Uğur Avcıbaşı, Serhan Sakarya, Perihan Ünak

Abstract:

Implanted devices are increasingly used in modern medicine to relieve pain or improve a compromised function. Implant-associated infections represent an emerging complication, caused by organisms which adhere to the implant surface and grow embedded in a protective extracellular polymeric matrix, known as a biofilm. In addition, the microorganisms within biofilms enter a stationary growth phase and become phenotypically resistant to most antimicrobials, frequently causing treatment failure. In such cases, surgical removal of the implant is often required, causing high morbidity and substantial healthcare costs. Staphylococcus aureus is the most common pathogen causing implant-associated infections. Successful treatment of these infections includes early surgical intervention and antimicrobial treatment with bactericidal drugs that also act on the surface-adhering microorganisms. Linezolid is a promising antimicrobial with anti-staphylococcal activity, used for the treatment of MRSA infections. Linezolid is a synthetic antimicrobial and a member of the oxazolidinone group, with a dose-dependent bacteriostatic or bactericidal mechanism against gram-positive bacteria. Intensive use of antibiotics has led to the emergence of multi-resistant organisms over the years, and major problems have arisen in the treatment of the infections they cause. While new drugs have been developed worldwide, infections caused by microorganisms that have gained resistance to these drugs continue to be reported, and the scale of the problem is growing. Scientific studies on the production of bacterial biofilm have increased in recent years. For this purpose, we investigated the activity of Lin, Lin radiolabeled with 131I (131I-Lin) and cold iodinated Lin (127I-Lin) against clinical strains of Staphylococcus aureus DSM 4910 in biofilm. In the first stage, radio and cold labeling studies were performed. 
Quality-control studies of Lin and iodo (radio and cold) Lin derivatives were carried out using TLC (Thin Layer Chromatography) and HPLC (High Pressure Liquid Chromatography). In this context, the binding yield was found to be about 86±2% for 131I-Lin. The minimal inhibitory concentration (MIC) of Lin, 127I-Lin and 131I-Lin for the Staphylococcus aureus DSM 4910 strain was found to be 1 µg/mL. In time-kill studies, Lin, 127I-Lin and 131I-Lin produced ≥3 log10 decreases in viable counts (cfu/mL) within 6 h at 2- and 4-fold MIC. No viable bacteria were observed within 24 h of the experiments. Biofilm eradication of S. aureus started at 64 µg/mL of Lin, 127I-Lin and 131I-Lin, and OD630 was 0.507±0.092, 0.589±0.058 and 0.266±0.047, respectively. The media control of biofilm-producing Staphylococcus was 1.675±0.01 (OD630). 131I and 127I alone did not have any effect on biofilms. Lin and 127I-Lin were found to be less effective than 131I-Lin at killing cells in biofilm and at biofilm eradication. Our results demonstrate that 131I-Lin has potent anti-biofilm activity against S. aureus compared to Lin, 127I-Lin and the media control. This suggests that 131I may have a harmful effect on the biofilm structure.

Keywords: iodine-131, linezolid, radiolabeling, slime layer, Staphylococcus

Procedia PDF Downloads 539
194 Keeping under the Hat or Taking off the Lid: Determinants of Social Enterprise Transparency

Authors: Echo Wang, Andrew Li

Abstract:

Transparency could be defined as the voluntary release of information by institutions that is relevant to their own evaluation. Transparency based on information disclosure is recognised to be vital for the Third Sector, as civil society organisations are under pressure to become more transparent to answer the call for accountability. The growing importance of social enterprises as hybrid organisations emerging from the nexus of the public, the private and the Third Sector makes their transparency a topic worth exploring. However, transparency for social enterprises has not yet been studied: as a new form of organisation that combines non-profit missions with commercial means, it is unclear to both the practical and the academic world if the shift in operational logics from non-profit motives to for-profit pursuits has significantly altered their transparency. This is especially so in China, where informational governance and practices of information disclosure by local governments, industries and civil society are notably different from other countries. This study investigates the transparency-seeking behaviour of social enterprises in Greater China to understand what factors at the organisational level may affect their transparency, measured by their willingness to disclose financial information. We make use of the Survey on the Models and Development Status of Social Enterprises in the Greater China Region (MDSSGCR) conducted in 2015-2016. The sample consists of more than 300 social enterprises from the Mainland, Hong Kong and Taiwan. While most respondents have provided complete answers to most of the questions, there is tremendous variation in the respondents’ demonstrated level of transparency in answering those questions related to the financial aspects of their organisations, such as total revenue, net profit, source of revenue and expense. This has led to a lot of missing data on such variables. In this study, we take missing data as data. 
Specifically, we use missing values as a proxy for an organisation’s level of transparency. Our dependent variables are constructed from missing data on total revenue, net profit, source of revenue and cost breakdown. In addition, we take into consideration the quality of answers in coding the dependent variables. For example, to be coded as transparent, an organisation must report the sources of at least 50% of its revenue. We have four groups of predictors of transparency, namely nature of organisation, decision-making body, funding channel and field of concentration. Furthermore, we control for an organisation’s stage of development, self-identity and region. The results show that social enterprises that are at later stages of organisational development and are funded by financial means are significantly more transparent than others. There is also some evidence that social enterprises located in the Northeast region of China are less transparent than those located in other regions, probably because of local political economy features. On the other hand, the nature of the organisation, the decision-making body and the field of concentration do not systematically affect the level of transparency. This study provides in-depth empirical insights into the information disclosure behaviour of social enterprises in a specific social context. It not only reveals important characteristics of Third Sector development in China, but also contributes to the general understanding of hybrid institutions.
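The coding rule described above can be sketched in a few lines. The field names, records, and exact treatment of partial answers below are hypothetical illustrations, not taken from the MDSSGCR instrument:

```python
# Hypothetical illustration of coding "missing data as data": each survey
# record holds answers (None = question left unanswered), and an
# organisation counts as transparent only if it reported total revenue,
# net profit, and the sources of at least 50% of its revenue.
records = [
    {"total_revenue": 120000, "net_profit": 8000, "pct_revenue_sourced": 80},
    {"total_revenue": None,   "net_profit": None, "pct_revenue_sourced": 30},
    {"total_revenue": 45000,  "net_profit": None, "pct_revenue_sourced": None},
]

def is_transparent(rec):
    # A missing answer (None) is treated as non-disclosure, not as an error.
    return (
        rec["total_revenue"] is not None
        and rec["net_profit"] is not None
        and (rec["pct_revenue_sourced"] or 0) >= 50
    )

flags = [is_transparent(r) for r in records]
print(flags)  # [True, False, False]
```

The point of the sketch is that non-response itself becomes the measured variable, so no records are dropped for missingness.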

Keywords: China, information transparency, organisational behaviour, social enterprise

Procedia PDF Downloads 156
193 Designing Short-Term Study Abroad Programs for Graduate Students: The Case of Morocco

Authors: Elaine Crable, Amit Sen

Abstract:

Short-term study abroad programs have become a mainstay of MBA programs. The benefits of international business experience, along with its exposure to global cultures, are well documented. However, developing a rewarding study abroad program at the graduate level can be challenging for faculty, especially when devising such a program for a group of part-time MBA students who come with a wide range of experiences and demographic characteristics. Each student has individual expectations for the study abroad experience. This study provides suggestions and considerations for faculty who are planning to design a short-term study abroad program, especially for part-time MBA students. Insights are based on a recent experience leading a group of twenty-one students on a ten-day program to Morocco. The trip was designed and facilitated by two faculty members and a local Moroccan facilitator. This experience led to a number of insights and recommendations. First, the choice of location is critical. The choice of Morocco was very deliberate, owing to its multi-faceted cultural landscape and international business interest. It is an Islamic state with close ties to Europe, both culturally and geographically, and Morocco is a multi-lingual country with some combination of three languages spoken by most people – English, Arabic, and French. Second, collaboration with a local ‘academic’ partner allowed the level of instruction to be both rigorous and significantly more engaging. Third, allowing students to participate in the planning of the trip enabled the trip participants to collaborate, negotiate, and share their own experiences and strengths. The pre-trip engagement was structured by creating four sub-groups, each responsible for an assigned city. 
Each student sub-group had to provide a historical background of its assigned city and plan the itinerary, including sites to visit, cuisine to experience, industries to explore, and markets to visit, plus provide a budget for that city’s expenses. The pre-planning segment of the course was critical for the success of the program, as students were able to contribute to the design of the program through collaboration and negotiation with their peers. Fourth, each student sub-group was assigned an industry to study within Morocco. Each sub-group prepared a presentation and a group paper with its analysis of the chosen industry. The pre-planning activities created strong bonds among the trip participants, which became evident when the group faced on-ground challenges, especially when it was necessary to evacuate quickly due to a surprise USA COVID evacuation notice. The entire group supported each other while quickly making their way back to the United States. Unfortunately, the trip was cut short by two days due to this emergency exit, but the feedback regarding the program was very positive all around. While the program design put pressure on the faculty leads regarding upfront planning and coordination, the outcomes in terms of student engagement, student learning, collaboration and negotiation were all favorable and worth the effort. Finally, as an added value, the cost of the program for the students was significantly lower compared to running a program with a professional provider.

Keywords: business education, experiential learning, international education, study abroad

Procedia PDF Downloads 149
192 Performance of Pilot Test of Geotextile Tube Filled with Lightly Cemented Clay

Authors: S. H. Chew, Z. X. Eng, K. E. Chuah, T. Y. Lim, H. M. A. Yim

Abstract:

In recent years, geotextile tubes have been widely used in the hydraulic engineering and dewatering industries. To construct a stable containment bund with geotextile tubes, sand slurry is always the preferred infilling material. However, the shortage of sand supply poses a problem for adopting this construction method in the actual construction of a long containment bund in Singapore. Hence, utilizing soft dredged clay or excavated soft clay as the infilling material of geotextile tubes has a great economic benefit. There are, however, technical issues with using this soft clayey material as infilling material, especially excessive settlement and stability concerns. To minimize the shape deformation and settlement of the geotextile tube associated with the use of this soft clay infilling material, a modified innovative infilling material is proposed – lightly cemented soft clay. Preliminary laboratory studies have shown that the dewatering mechanism via the geotextile material of the tube skin, together with the cementitious chemical action of the lightly cemented soft clay, will accelerate consolidation and improve the shear strength of the infill material. This study aims to extend that work by conducting a pilot test of a geotextile tube filled with lightly cemented clay. The study consists of tests on a series of miniature geo-tubes and two full-size geotextile tubes. In the miniature geo-tube tests, a number of small scaled-down geotextile tubes were filled with cemented clay (at a water content of 150%) with cement contents of 0% to 8% (by weight). The shear strength development of the lightly cemented clay under the dewatering mechanism was evaluated using a modified in-situ Cone Penetration Test (CPT) at 0, 3, 7 and 28 days after infilling. Undisturbed soil samples of the lightly cemented infilled clay were also extracted at 3 days and 7 days for triaxial tests and evaluation of the final water content. 
The results suggested that the geotextile tubes filled with un-cemented soft clay (the control test) experienced very significant shape change over the days. However, the geotextile mini-tubes filled with lightly cemented clay experienced only marginal shape change, even though the strength development of this lightly cemented clay inside the tube did not show significant gain at the early stage. The shape stability is believed to be due to the confinement effect of the geotextile tube with clay in a non-slurry state. Subsequently, a full-scale test on an instrumented geotextile tube filled with lightly cemented clay was performed. The extensive results from strain gauges and pressure transducers installed on this full-size geotextile tube demonstrated a substantial mobilization of tensile forces on the geotextile skin corresponding to the filling activity and the subsequent dewatering stage. Shape change and in-fill material strength development were also monitored. In summary, the construction of a containment bund with geotextile tubes filled with lightly cemented clay is found to be technically feasible and stable with the use of a sufficiently strong (i.e. adequate tensile strength) geotextile tube, adequate control of the cement dosage, and a suitable water content of the infilling soft clay material.

Keywords: cemented clay, containment bund, dewatering, geotextile tube

Procedia PDF Downloads 248
191 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, due to which they are highly prone to wind forces. Wind exerts pressure on the wall of the chimney, which produces unwanted forces. Vortex-induced oscillation is one such excitation, which can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys produced by the vortex shedding phenomenon. Although several works of literature exist on the vortex-induced oscillation of chimneys, including code provisions, the reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. 
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
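As a rough sketch of how such a fragility curve is assembled (not the paper's actual model), the following combines a Gumbel type-I density for the annual mean wind speed with an assumed conditional crossing probability. The Gumbel parameters, thresholds, and the placeholder crossing function are all invented for illustration; the true conditional probability would come from the Vanmarcke double-barrier formula applied to the response PSDF:

```python
import numpy as np

mu, beta = 20.0, 3.0            # assumed Gumbel location/scale for annual mean wind speed (m/s)
v = np.linspace(0.0, 60.0, 2001)
dv = v[1] - v[0]
z = (v - mu) / beta
pdf = np.exp(-z - np.exp(-z)) / beta   # Gumbel type-I probability density

def p_cross_given_v(threshold, v):
    # Placeholder for the conditional probability of the tip displacement
    # crossing `threshold` at mean wind speed v (standing in for the
    # Vanmarcke double-barrier result): rises with wind speed, falls
    # with threshold. Purely illustrative.
    return 1.0 / (1.0 + np.exp(-(v - 25.0 - 10.0 * threshold) / 2.0))

thresholds = [0.1, 0.2, 0.3]    # tip-displacement thresholds (m), illustrative
# Total-probability integration of the conditional crossing probability
# over the annual wind-speed density gives one fragility-curve point each.
annual_p = [float(np.sum(p_cross_given_v(d, v) * pdf) * dv) for d in thresholds]
for d, p in zip(thresholds, annual_p):
    print(f"threshold {d:.1f} m -> annual crossing probability {p:.3f}")
```

Plotting `annual_p` against `thresholds` traces the fragility curve; the reliability estimate is its complement at the design threshold.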

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 137
190 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, have exhibited an alarming surge in frequency in recent times, adding to a perennial global concern. Forest fires often lead to devastating consequences, ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. 
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 41
189 Ultrasonic Atomizer for Turbojet Engines

Authors: Aman Johri, Sidhant Sood, Pooja Suresh

Abstract:

This paper suggests a new and more efficient method of atomizing fuel in the combustor nozzle of a high-bypass turbofan engine, using ultrasonic vibrations. Since atomization of fuel just before the fuel spray is injected into the combustion chamber is a crucial aspect of the functioning of a propulsion system, the technology suggested by this paper, together with the experimental analysis of the system components, serves to assist in complete and rapid combustion of the fuel in the combustor module of the engine. Current propulsion systems use carburetors, atomization nozzles and apertures in air intake pipes for atomization. The idea of this paper is to deploy a new hybrid technology, namely the Ultrasound Field Effect (UFE), to effectively atomize fuel before it enters the combustion chamber, as a viable and effective method to increase efficiency and improve upon existing designs. The Ultrasound Field Effect is applied axially, on diametrically opposite ends of an atomizer tube that sleeves over the combustor nozzle, where the fuel enters and exits under a pre-defined pressure. The ultrasound energy vibrates the fuel particles at a breakup frequency. Upon reaching this frequency, the fuel particles start disintegrating into smaller-diameter particles, perpendicular to the axis of application of the field, from the parent boundary layer of fuel flow over the baseplate. These broken-up fuel droplets then undergo the swirling effect as per the original nozzle design, with a higher breakup ratio than before. A significant reduction in the size of fuel particles eventually results in an increase in the propulsive efficiency of the engine. Moreover, the ultrasound atomizer operates within a controlled frequency range such that the effects of overheating and induced vibrations are least felt on the overall performance of the engine. 
The design of an electrical manifold for the multiple-nozzle system over a typical can-annular combustor is developed alongside this study, such that the product can be installed and removed easily for maintenance and repair, allows easy access for inspections, and transmits the least amount of vibrational energy to the surface of the combustor. Since near-field ultrasound is used, the vibrations are easily controlled, thereby successfully reducing vibrations on the outer shell of the combustor. Experimental analysis of the effect of ultrasonic vibrations on flowing jet turbine fuel is carried out using an ultrasound generator probe, and an effective decrease in droplet size across a constant diameter, away from the boundary layer of flow, is noted visually by observation under ultraviolet light. The choice of material for the ultrasound inducer tube and crystal, along with the operating range of temperatures, pressures, and frequencies of the Ultrasound Field Effect, are also studied in this paper, taking into account the losses incurred due to constant vibrations and thermal loads on the tube surface.
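As a point of reference for the droplet sizes involved, the classical Lang correlation for ultrasonic atomization predicts the median droplet diameter from the excitation frequency and fluid properties. The kerosene-type property values and frequency below are assumptions for illustration, not measurements from this study:

```python
import math

# Classical Lang (1962) correlation for the median droplet diameter of a
# liquid film atomized by ultrasonic surface vibration:
#   D = 0.34 * (8 * pi * sigma / (rho * f**2)) ** (1/3)
# Fluid properties are rough assumptions for a kerosene-type jet fuel.
sigma = 0.025   # surface tension (N/m), assumed
rho = 800.0     # liquid density (kg/m^3), assumed
f = 40e3        # ultrasonic excitation frequency (Hz), assumed

d_median = 0.34 * (8.0 * math.pi * sigma / (rho * f**2)) ** (1.0 / 3.0)
print(f"predicted median droplet diameter: {d_median * 1e6:.1f} um")
```

With these assumed values the correlation lands in the tens of micrometres, and since the diameter scales as f^(-2/3), raising the excitation frequency is the main lever for producing finer droplets.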

Keywords: atomization, ultrasound field effect, titanium mesh, breakup frequency, parent boundary layer, baseplate, propulsive efficiency, jet turbine fuel, induced vibrations

Procedia PDF Downloads 216
188 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs

Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel

Abstract:

Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are big obstacles to the commercial utilization of these batteries. With proper thermal management, most of the mentioned limitations could be eliminated. The temperature profile of Li-ion cells has a significant role in the performance, safety, and cycle life of the battery, which is why even a small temperature gradient can lead to a great loss in the performance of battery packs. In recent years, numerous researchers have been working on new techniques to achieve better thermal management of Li-ion batteries. Keeping the battery cells within an optimum temperature range is the main objective of battery thermal management. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. However, many researchers have adopted a single-layer cell model to save computing time. Their hypothesis is that the thermal conductivity of the layer elements is so high that the heat transfer rate is very fast; therefore, instead of several thin layers, they model the cell as one thick layer. In previous work, we showed that the single-layer model is insufficient to simulate the thermal behavior and temperature non-uniformity of high-capacity Li-ion cells. We also studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated with experimental measurements at different current rates and ambient temperatures. 
The real-time heat generation rate is also studied at different discharge rates. The results showed a non-uniform temperature distribution along the cell, which calls for a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control temperature uniformity. Design parameters such as the number and width of the channels, the inlet flow rate, and the cooling fluid are optimized. As cooling fluids, water and air are compared. Pressure drops and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. By increasing the inlet flow rate, a cooling efficiency of up to 60% could be reached.
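For a sense of the quantities involved, a back-of-the-envelope laminar-flow estimate of the Reynolds number and pressure drop in a single water-cooled mini-channel can be sketched as follows. The channel dimensions, length, and velocity are assumed values for illustration, not the optimized design parameters of the study:

```python
import math

# Illustrative single-channel check: flow regime and Darcy-Weisbach
# pressure drop for water in an assumed rectangular mini-channel.
rho, mu = 998.0, 1.0e-3     # water near 20 C: density (kg/m^3), viscosity (Pa*s)
a, b = 1.0e-3, 2.0e-3       # assumed channel cross-section (m): 1 mm x 2 mm
L = 0.2                     # assumed channel length (m)
v = 0.5                     # assumed mean flow velocity (m/s)

d_h = 4.0 * (a * b) / (2.0 * (a + b))   # hydraulic diameter = 4*area/perimeter
re = rho * v * d_h / mu                 # Reynolds number
f = 64.0 / re                           # circular-duct laminar friction factor
                                        # (a rectangular duct uses a slightly
                                        # different constant; 64 is a rough proxy)
dp = f * (L / d_h) * 0.5 * rho * v**2   # Darcy-Weisbach pressure drop (Pa)
print(f"Re = {re:.0f} (laminar), pressure drop = {dp:.0f} Pa")
```

At these assumed values the flow is comfortably laminar (Re well below 2300) with a pressure drop on the order of a few kilopascals, which is why raising the inlet flow rate improves cooling at a modest pumping cost.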

Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management

Procedia PDF Downloads 136
187 The Relevance of Community Involvement in Flood Risk Governance Towards Resilience to Groundwater Flooding. A Case Study of Project Groundwater Buckinghamshire, UK

Authors: Claude Nsobya, Alice Moncaster, Karen Potter, Jed Ramsay

Abstract:

Flood Risk Governance (FRG) has shifted away from traditional approaches that relied solely on centralized decision-making and structural flood defenses, towards integrated flood risk management measures that involve a variety of actors and stakeholders. This new approach emphasizes people-centered measures, including adaptation and learning. The shift to a diversity of FRG approaches has been identified as a significant factor in enhancing resilience. Resilience here refers to a community's ability to withstand, absorb, recover, adapt, and potentially transform in the face of flood events. It is argued that if FRG merely focused on the conventional 'fighting the water' approach of flood defense, communities would not be resilient. The move to these people-centered approaches also implies that communities will be more involved in FRG. It is suggested that effective flood risk governance influences resilience through meaningful community involvement, and effective community engagement is vital in shaping community resilience to floods. Successful community participation not only draws on context-specific indigenous knowledge but also develops a sense of ownership and responsibility. Through capacity development initiatives, it can also raise awareness, and all of these help in building resilience. Recent Flood Risk Management (FRM) projects have thus involved increasing community participation, with varied conceptualizations of such community engagement in the academic literature on FRM. There is a substantial body of literature on Flood Risk Governance and Management in the context of overland floods. Yet groundwater flooding has received little attention despite its unique qualities, such as its persistence for weeks or months, slow onset, and near-invisibility. In particular, there has been little study of how successful community involvement in Flood Risk Governance may improve community resilience to groundwater flooding. 
This paper focuses on a case study of a flood risk management project in the United Kingdom. Buckinghamshire Council is leading Project Groundwater, one of 25 significant initiatives sponsored by England's Department for Environment, Food and Rural Affairs (DEFRA) Flood and Coastal Resilience Innovation Programme. DEFRA awarded Buckinghamshire Council and the other councils £150 million to collaborate with communities and implement innovative methods to increase resilience to groundwater flooding. Based on a literature review, this paper proposes a new paradigm for effective community engagement in Flood Risk Governance (FRG). This study contends that effective community participation can have an impact on the various resilience capacities identified in the literature, including social capital, institutional capital, physical capital, natural capital, human capital, and economic capital. In the case of social capital, for example, successful community engagement can influence social capital through the process of social learning as well as through developing social networks and trust values, which are vital in shaping communities' capacity to resist, absorb, recover, and adapt. To test this notion, the study examines community engagement in Project Groundwater using surveys with local communities and documentary analysis. The outcomes of the study will inform community involvement activities in Project Groundwater and may shape DEFRA policies and guidelines for community engagement in FRM.

Keywords: flood risk governance, community, resilience, groundwater flooding

Procedia PDF Downloads 44
186 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla

Authors: Roxana D. Maiorescu-Murphy

Abstract:

In the US, corporations have recently embraced political stances in an attempt to respond to external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on how stakeholder groups respond to corporate political advocacy in general, and in the immediate aftermath of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk’s endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the two data sets consisted of N=1,374 user comments written in response to Musk’s initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, which is used when the phenomena to be studied have not been researched before. In line with the case study methodology, which answers the questions of how and why a phenomenon occurs, this study addressed the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using the grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was later defined and labeled. Additional exposures to the text ensured that these were exhaustive. 
The results revealed that the CEO’s political endorsements triggered moral outrage, leading Tesla to face a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders’ predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk’s political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, an immediate response becomes crucial while a scandal is flourishing online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter’s political views.

Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk

Procedia PDF Downloads 64
185 Altered Proteostasis Contributes to Skeletal Muscle Atrophy during Chronic Hypobaric Hypoxia: An Insight into Signaling Mechanisms

Authors: Akanksha Agrawal, Richa Rathor, Geetha Suryakumar

Abstract:

Muscle represents about ¾ of the body mass, and a healthy muscular system is required for human performance. A healthy muscular system is dynamically balanced via catabolic and anabolic processes. High-altitude-associated hypoxia alters this redox balance by producing reactive oxygen and nitrogen species that modulate protein structure and function and hence disrupt proteostasis, or protein homeostasis. The mechanisms by which proteostasis is maintained include regulated protein translation, protein folding, and protein degradation machinery. Perturbation of any of these mechanisms could increase proteome imbalance in cellular processes. Altered proteostasis in skeletal muscle is likely responsible for contributing to muscular atrophy in response to hypoxia. Therefore, we planned to elucidate the mechanism by which altered proteostasis leads to skeletal muscle atrophy under chronic hypobaric hypoxia. Material and Methods: Male Sprague Dawley rats weighing about 200-220 g were divided into five groups: Control (normoxic animals) and 1d, 3d, 7d, and 14d hypobaric-hypoxia-exposed animals. The animals were exposed to simulated hypoxia equivalent to 282 torr pressure (equivalent to an altitude of 7620 m, 8% oxygen) at 25°C. On completion of chronic hypobaric hypoxia (CHH) exposure, rats were sacrificed, muscle was excised, and biochemical, histopathological, and protein synthesis signaling analyses were performed. Results: A number of changes were observed with CHH exposure time. ROS increased significantly on days 7 and 14, which was attributed to protein oxidation damaging muscle protein structure through oxidation of amino acid moieties. This oxidative damage to proteins further enhanced the various protein degradation pathways. Calcium-activated cysteine proteases and other intracellular proteases participate in protein turnover in muscles. 
Therefore, we analysed calpain and 20S proteasome activity, both of which were noticeably increased with CHH exposure compared to the control group, indicating enhanced muscle protein catabolism. Since inflammatory markers (myokines) affect protein synthesis and trigger the degradation machinery, we also determined the inflammatory pathways regulated under the hypoxic environment. Another striking finding of the study was the upregulation of the Akt/PKB translational machinery with CHH exposure: Akt, p-Akt, p70 S6 kinase, and GSK-3β expression were upregulated until 7d of CHH exposure. Apoptosis-related markers caspase-3, caspase-9, and annexin V were also increased on CHH exposure. Conclusion: The present study provides evidence of disrupted proteostasis under chronic hypobaric hypoxia. A profound loss of muscle mass is accompanied by muscle damage leading to apoptosis and cell death under CHH. These cellular stress response pathways may play a pivotal role in hypobaric hypoxia-induced skeletal muscle atrophy. Further research into these signaling pathways may lead to the development of therapeutic interventions for the amelioration of hypoxia-induced muscle atrophy.

Keywords: Akt/PKB translational machinery, chronic hypobaric hypoxia, muscle atrophy, protein degradation

Procedia PDF Downloads 242