Search results for: Quaternion offset linear canonical transform
555 Sharing and Developing Cultural Heritage Values through a Co-Creative Approach
Authors: Anna Marie Fisker, Daniele Sepe, Mette Bøgh Jensen, Daniela Rimei
Abstract:
In the space of just a few years, the European policy framework on cultural heritage has been completely overhauled, moving towards a people-centred and holistic approach, and eliminating the divisions between the tangible, intangible and digital dimensions. The European Union regards cultural heritage as a potential shared resource, highlighting that all stakeholders share responsibility for its transmission to future generations. This new framework will potentially change the way in which cultural institutions manage, protect and provide access to their heritage. It will change the way in which citizens and communities engage with their cultural heritage and naturally influence the way that professionals deal with it. Participating in the creation of cultural heritage awareness can lead to an increased perception of its value, be it economic, social, environmental or cultural. It can also strengthen our personal identity, sense of belonging and community citizenship. Open Atelier, a Creative Europe project, is based on this foundation, with the goal of developing, through co-creation, the use, understanding and engagement with our cultural heritage. The project aims to transform selected parts of the heritage into an “experience lab” – an interactive, co-creative, dynamic and participatory space, where cultural heritage is the point of departure for new interactions and experiences between the audience and the museum and its professionals. Through a workshop-based approach built on interdisciplinary collaboration and co-creative processes, Open Atelier has started to design, develop, test, and evaluate a set of Experiences. The first collaborative initiative was set out in the discourse and knowledge of a highly creative period in Denmark, when a specific group of Scandinavian artists, the Skagen Painters, gathered in the village of Skagen, the northernmost part of Denmark, from the late 1870s until the turn of the century. The Art Museums of Skagen have a large collection of photographs from the period that has never been the subject of thorough research. The photos display a variety of subjects: community, family photos, reproductions of artworks, costume parties, family gatherings, etc., and they carry with them the energies of those people’s work and life, evoking instances of communication with the past. This paper is about how we in Open Atelier connect these special stories, this legacy, with another place, in another time, in another context and with another audience. The first Open Atelier Experience – the performance “Around the Lighthouse” – was an initiative that resulted from the collaboration between AMAT, an Italian creative organisation, and the Art Museums of Skagen. A group of Italian artists developed a co-creative investigation and reinterpretation of a selection of these historical photos: a poetic journey through videos and voices aimed at exploring new perspectives on the museum and its heritage. It was an experiment in creating new ways to actively engage audiences in how cultural heritage is explored, interpreted, mediated, presented, and used to examine contemporary issues. This article is about this experiment and its findings, and how different views and methodologies can be adopted to discuss cultural heritage in museums around Europe and their connection to the community.
Keywords: cultural heritage, community, innovation, museums
Procedia PDF Downloads 81
554 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators
Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang
Abstract:
High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a conical-shape ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space charge forces within the electron bunch, ultra-high vacuum conditions to preserve the photocathode quantum efficiency, no field emission to prevent gas load when field-emitted electrons impact the vacuum chamber, and finally no voltage breakdown for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200 kV dc with a 10 cm long insulator, and a larger version at 300 kV dc with a 20 cm long insulator. Plans to develop a third photogun operating at 400 kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point junction screening electrodes, mechanical polishing to achieve a mirror-like surface finish, and high voltage conditioning procedures with Kr gas to extinguish field emission.
Keywords: electron guns, high voltage techniques, insulators, vacuum insulation
Procedia PDF Downloads 113
553 Elevated Reductive Defluorination of Branched Per and Polyfluoroalkyl Substances by Soluble Metal-Porphyrins and New Mechanistic Insights on the Degradation
Authors: Jun Sun, Tsz Tin Yu, Maryam Mirabediny, Matthew Lee, Adele Jones, Denis M. O’Carroll, Michael J. Manefield, Björn Åkermark, Biswanath Das, Naresh Kumar
Abstract:
Reductive defluorination has emerged as a sustainable approach to clean water of per- and polyfluoroalkyl substances (PFASs), also known as forever organic contaminants. For the last few decades, nano zero valent metals (nZVMs) have been intensively applied in the reductive remediation of groundwater contaminated with chlorinated organic compounds due to their low redox potential, easy application, and low production cost. However, there is inadequate information on the effective reductive defluorination of linear or branched PFAS using nZVMs as reductants because of the lack of suitable catalysts. CoII-5,10,15,20-tetraphenyl-21H,23H-porphyrin (CoTPP) has recently been reported to effectively catalyze the reductive defluorination of branched (br-) perfluorooctane sulfonate (PFOS) using TiIII citrate as the reductant. However, the low water solubility of CoTPP limited its applicability. Here, we explored a series of structurally related soluble cobalt porphyrin catalysts based on our previously reported best-performing CoTPP. All soluble porphyrins, [[meso-tetra(4-carboxyphenyl)porphyrinato]cobalt(III)]Cl·7H₂O (CoTCPP), [[meso-tetra(4-sulfonatophenyl)porphyrinato]cobalt(III)]·9H₂O (CoTPPS), and [[meso-tetra(4-N-methylpyridyl)porphyrinato]cobalt(II)](I)₄·4H₂O (CoTMpyP), displayed better defluorination efficiencies than CoTPP. In particular, CoTMpyP presented the best defluorination efficiency for br-PFOS (94%), branched perfluorooctanoic acid (PFOA) (89%), and 3,7-perfluorodecanoic acid (PFDA) (60%) after 1 day at 70 °C. The CoTMpyP-nZn⁰ system showed an 88-164 times higher defluorination rate than the VB12-nZn⁰ system for all investigated br-PFASs. CoTMpyP-nZn⁰ also performed effectively at room temperature, demonstrating its potential for in-situ reductive systems. Based on the analysis of the intermediate products, the calculated bond dissociation energies (BDEs), and the possible first interaction between CoTMpyP and PFAS, degradation pathways of 3,7-PFDA and 6-PFOS are proposed.
Keywords: cationic, soluble porphyrin, cobalt, vitamin B12, PFAS, reductive defluorination
Procedia PDF Downloads 78
552 Housing Price Dynamics: Comparative Study of 1980-1999 and the New Millennium
Authors: Janne Engblom, Elias Oikarinen
Abstract:
The understanding of housing price dynamics is of importance to a great number of agents: to portfolio investors, banks, real estate brokers and construction companies, as well as to policy makers and households. A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Common Correlated Effects (CCE) estimator for dynamic panel data, which also accounts for cross-sectional dependence caused by common structures of the economy. In the presence of cross-sectional dependence, standard OLS gives biased estimates. In this study, U.S. housing price dynamics were examined empirically using the dynamic CCE estimator, with the first difference of housing price as the dependent variable and the first differences of per capita income, interest rate, housing stock and lagged price, together with the deviation of housing prices from their long-run equilibrium level, as independent variables. These deviations were also estimated from the data. The aim of the analysis was to provide estimates and compare them between 1980-1999 and 2000-2012. Based on data for 50 U.S. cities over 1980-2012, the differences in short-run housing price dynamics estimates were mostly significant between the two time periods. Significance tests of the differences were provided by the model containing interaction terms between the independent variables and a time dummy variable. Residual analysis showed very low cross-sectional correlation of the model residuals compared with the standard OLS approach, which indicates a good fit of the CCE estimator model. Estimates of the dynamic panel data model were in line with the theory of housing price dynamics. Results also suggest that housing market dynamics evolve over time.
Keywords: dynamic model, panel data, cross-sectional dependence, interaction model
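As an illustration of the core CCE idea described in this abstract (augmenting each regression with cross-sectional averages of the dependent and independent variables to absorb common factors), the following minimal Python sketch uses synthetic panel data; the variable names, pooled OLS setup and data-generating process are illustrative assumptions, not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 33                                  # 50 cities, 33 yearly observations (illustrative)

# Synthetic panel: an unobserved common factor drives cross-sectional dependence
f = rng.normal(size=T)
loadings = rng.uniform(0.5, 1.5, size=N)
d_income = rng.normal(size=(N, T)) + 0.5 * f   # first-differenced regressor
d_price = 0.6 * d_income + loadings[:, None] * f + rng.normal(scale=0.3, size=(N, T))

# Cross-sectional averages proxy the common factor (the core CCE device)
avg_price = d_price.mean(axis=0)
avg_income = d_income.mean(axis=0)

rows = []
for i in range(N):
    for t in range(T):
        rows.append([1.0, d_income[i, t], avg_price[t], avg_income[t], d_price[i, t]])
data = np.array(rows)
X, y = data[:, :4], data[:, 4]

beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("CCE-augmented slope on d_income (true value 0.6):", round(beta[1], 3))

# Naive pooled OLS without the averages is biased by the omitted common factor
beta_naive = np.linalg.lstsq(X[:, :2], y, rcond=None)[0]
print("naive pooled OLS slope:", round(beta_naive[1], 3))
```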
Procedia PDF Downloads 252
551 A Simplified Method to Assess the Damage of an Immersed Cylinder Subjected to Underwater Explosion
Authors: Kevin Brochard, Herve Le Sourne, Guillaume Barras
Abstract:
The design of a submarine’s hull is crucial for its operability and crew’s safety, but it is also complex. Indeed, engineers need to balance lightness, acoustic discretion and resistance to both immersion pressure and environmental attacks. Underwater explosions represent a first-rate threat to the integrity of the hull, whose behavior needs to be properly analyzed. The presented work is focused on the development of a simplified analytical method to study the structural response of a deeply immersed cylinder subjected to an underwater explosion. This method aims to provide engineers with a quick estimation of the resulting damage, allowing them to simulate a large number of explosion scenarios. The present research relies on the so-called plastic string on plastic foundation model. A two-dimensional boundary value problem for a cylindrical shell is converted into an equivalent one-dimensional problem of a plastic string resting on a non-linear plastic foundation. For this purpose, equivalence parameters are defined and evaluated by making assumptions on the shape of the displacement and velocity field in the cross-sectional plane of the cylinder. Closed-form solutions for the deformation and velocity profile of the shell are obtained for explosive loading, and they compare well with numerical and experimental results. However, the plastic-string model has not yet been adapted for a cylinder in immersion subjected to an explosive loading. In fact, the effects of fluid-structure interaction have to be taken into account. Moreover, when an underwater explosion occurs, several pressure waves, called secondary waves, are emitted by the gas bubble pulsations. The corresponding loads, which may produce significant damage to the cylinder, must also be accounted for. The analytical developments carried out to solve the above problem of a shock wave impacting a cylinder, considering fluid-structure interaction, will be presented for an unstiffened cylinder. The resulting deformations are compared to experimental and numerical results for different shock factors and different standoff distances.
Keywords: immersed cylinder, rigid plastic material, shock loading, underwater explosion
Procedia PDF Downloads 340
550 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System
Authors: Hassan Qandil
Abstract:
Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped and semi-cylindrical-shaped. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver’s aperture and therefore achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated via AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens’ and the receiver’s apertures while setting conditions as per the Dallas, TX weather data. Once the lenses’ characterization is finalized, receivers are designed based on the optimized aperture size. Several cavity shapes, including triangular, arc-shaped and trapezoidal, are tested while coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated via COMSOL while coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including that of a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, some future research work is pointed out, including the coupling of the collector-receiver system with an end-user power generator, and the use of a multi-layered genetic algorithm for comparative studies.
Keywords: COMSOL, concentrator, energy, Fresnel, optics, renewable, solar
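As a rough illustration of one step in such a design, the sketch below computes per-groove facet angles for a flat, equal-groove-width Fresnel lens that focuses collimated light to a point, using a thin-prism form of Snell's law; it ignores chromatic aberration and the statistical ray tracing described above, and the refractive index, focal length and groove width are assumed values, not the paper's design parameters.

```python
import numpy as np

n_pmma = 1.49          # nominal PMMA refractive index (assumed)
focal_length = 0.50    # m, assumed
groove_width = 0.002   # m, equal-groove-width assumption
n_grooves = 100

# Radial position of each groove centre
r = (np.arange(n_grooves) + 0.5) * groove_width

# Required deviation of an axis-parallel ray so it crosses the axis at the focus
delta = np.arctan2(r, focal_length)

# Facet angle from Snell's law at the exit facet:
# n*sin(alpha) = sin(alpha + delta)  ->  tan(alpha) = sin(delta) / (n - cos(delta))
alpha = np.arctan2(np.sin(delta), n_pmma - np.cos(delta))

for i in (0, 49, 99):
    print(f"groove {i + 1:3d}: r = {r[i] * 1e3:6.1f} mm, facet angle = {np.degrees(alpha[i]):5.2f} deg")
```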
Procedia PDF Downloads 155
549 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious
Authors: R. Dimitri Halley
Abstract:
The research within Jungian psychology presented here is on the wave state of Self. What has been discovered via shared dreaming, by independently correlating dreams across dreamers, lies beyond the Self stage, in the deepest layer, the wave state of Self: the very quantum ocean the Self archetype is embedded in. A quantum wave or rhyming of meaning constituting synergy across several dreamers was discovered in dreams and in extensively shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of Individuation and delivered the baton to Von Franz to work out the next synchronistic stage, here proposed as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in a group format of shared dreaming of individuals approximating individuation, and the unfolding of it is carried by belief and faith. The reason for this format and operating system is that beyond therapy, in living reality, we find no science – no thinking or even awareness in the therapeutic sense – but rather a state of mental processing resembling more that of a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being. It is self-reflection which is the container for the process at the wave state of Self. Observation locks us into an outside-in reactive flow from a first-person perspective, and hence toward the surface we see to believe, whereas here the direction of focus shifts to inside-out/intrinsic. The operating system or language at the wave level of Self is thus belief and synchronicity. Belief has up to now been almost the sole province of organized religions but was viewed by Jung as an inherent property in the process of Individuation. The shared dreaming stage of the synchronistic patterns forms a larger story constituting a deep connectivity unfolding around individual Selves. Dreams of independent dreamers form larger patterns that come together as puzzles forming a larger story, and in this sense, this group work level builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity.
Keywords: belief, shared dreaming, synchronistic patterns, wave state of self
Procedia PDF Downloads 197
548 Using Motives of Sports Consumption to Explain Team Identity: A Comparison between Football Fans across the Pond
Authors: G. Scremin, I. Y. Suh, S. Doukas
Abstract:
Spectators follow their favorite sports teams for different reasons. While some attend a sporting event simply for its entertainment value, others do so because of the personal sense of achievement and accomplishment their connection with a sports team creates. Moreover, the level of identity spectators feel toward their favorite sports team falls along a broad continuum. Some are mere spectators; for them, their association with a sports team has little impact on their self-image. Others are die-hard fans who are proud of their association with their team and whose connection with that team is an important reflection of who they are. Several motives for sports consumption can be used to explain the level of spectator support in a variety of sports. Those motives can also be used to explain the variance in the identification, attachment, and loyalty spectators feel toward their favorite sports team. Motives for sports consumption can be used to discriminate the degree of identification spectators have with their favorite sports team. In this study, motives for sports consumption were used to discriminate the level of identity spectators feel toward their sports team. It was hypothesized that spectators with a strong level of team identity would report higher ratings for interest in player, interest in sports, and interest in team than spectators with a low level of team identity, while spectators with a low level of team identity would report higher ratings for entertainment value, bonding with friends or family, and wholesome environment. Football spectators in the United States and England were surveyed about their motives for football consumption and their level of identification with their favorite football team. To assess whether the motives of sports fans differed by level of team identity and allegiance to an American or English football team, a Multivariate Analysis of Variance (MANOVA) under the General Linear Model (GLM) procedure found in SPSS was performed. The independent variables were level of team identity and allegiance to an American or English football team, and the dependent variables were the sport fan motives. A tripartite split (low, moderate, high) was used on a composite measure of team identity. Preliminary results show that the effect of team identity is statistically significant (p < .001) for at least nine of the 17 motives for sports consumption assessed in this investigation. These results indicate that the motives of spectators with a strong level of team identity differ significantly from those of spectators with a low level of team identity. Those differences can be used to discriminate the degree of identification spectators have with their favorite sports team. Sports marketers can use these methods and results to develop identity profiles of spectators and create marketing strategies specifically designed to attract those spectators based on their unique motives for consumption and their level of team identification.
Keywords: fan identification, market segmentation of sports fans, motives for sports consumption, team identity
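A minimal sketch of the analysis design is shown below: a tripartite split on a composite identity score followed by a MANOVA with the motive scales as dependent variables. The data frame, column names and number of motives are placeholders rather than the study's survey data, and statsmodels is used here in place of the SPSS GLM procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 300

# Placeholder survey data: composite team-identity score and three motive scales
df = pd.DataFrame({
    "identity": rng.normal(4.0, 1.0, n),
    "interest_players": rng.normal(4.0, 1.0, n),
    "entertainment": rng.normal(5.0, 1.0, n),
    "bonding": rng.normal(4.5, 1.0, n),
})

# Tripartite split (low / moderate / high) on the composite identity measure
df["identity_level"] = pd.qcut(df["identity"], 3, labels=["low", "moderate", "high"])

# MANOVA: motive scales as dependent variables, identity level as the factor
mv = MANOVA.from_formula(
    "interest_players + entertainment + bonding ~ identity_level", data=df
)
print(mv.mv_test())
```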
Procedia PDF Downloads 170
547 LTE Modelling of a DC Arc Ignition on Cold Electrodes
Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov
Abstract:
The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magneto-hydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used, based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach considering secondary electron emission. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts. Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also presents an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts and both the electrode and plasma temperatures reach the steady solution. The results after some seconds are similar to those presented for thermionic cathodes.
Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes
Procedia PDF Downloads 125
546 Non-Linear Finite Element Investigation on the Behavior of CFRP Strengthened Steel Square HSS Columns under Eccentric Loading
Authors: Tasnuba Binte Jamal, Khan Mahmud Amanat
Abstract:
Carbon Fiber-Reinforced Polymer (CFRP) composite materials have proven to have valuable properties and to be suitable for use in the construction of new buildings and in upgrading existing ones due to their effectiveness, ease of implementation, and other advantages. In the present study, a numerical finite element investigation has been conducted using ANSYS 18.1 to study the behavior of square AISC HSS sections strengthened with CFRP materials under eccentric compressive loading. A three-dimensional finite element model for the square HSS section using shell elements was developed. The application of CFRP strengthening was incorporated in the finite element model by adding an additional layer of shell elements. Both material and geometric nonlinearities were incorporated in the model. The developed finite element model was applied to simulate experimental studies done by past researchers, and it was found that good agreement exists between the current analysis and past experimental results, which established the acceptability and validity of the developed finite element model for carrying out further investigation. The study then focused on selected non-compact AISC square HSS columns, and the effects of the number of CFRP layers, the amount of eccentricity and the cross-sectional geometry on the strength gain of those columns were observed. Load was applied at a distance equal to the column dimension and at twice the column dimension. It was observed that CFRP strengthening is comparatively effective for smaller eccentricities. For medium-sized sections, strengthening tends to be effective at smaller eccentricities as well. For relatively large AISC square HSS columns, with an increasing number of CFRP layers (from 1 to 3 layers), the gain in strength is approximately 1 to 38% relative to the unstrengthened section for smaller eccentricities and slenderness ratios ranging from 27 to 54. For medium-sized square HSS sections, the effectiveness of CFRP strengthening increases by approximately 12 to 162%. The findings of the present study provide a better understanding of the behavior of HSS sections strengthened with CFRP and subjected to eccentric compressive load.
Keywords: CFRP strengthening, eccentricity, finite element model, square hollow section
Procedia PDF Downloads 144
545 Seismic Active Earth Pressure on Retaining Walls with Reinforced Backfill
Authors: Jagdish Prasad Sahoo
Abstract:
The increase in active earth pressure during an earthquake results in sliding, overturning and tilting of earth retaining structures. In order to improve the stability of such structures, the soil mass is often reinforced with various types of reinforcements such as metal strips, geotextiles, and geogrids. The stresses generated in the soil mass are transferred to the reinforcements through the interface friction between the earth and the reinforcement, which in turn reduces the lateral earth pressure on the retaining walls. Hence, the evaluation of earth pressure in the presence of seismic forces, with the inclusion of reinforcements, is important for the design of retaining walls in seismically active zones. In the present analysis, the effect of horizontal layers of sheet reinforcements (geotextiles and geogrids) in sand used as backfill on reducing the active earth pressure due to earthquake body forces has been studied. For carrying out the analysis, a pseudo-static approach has been adopted by employing the upper bound theorem of limit analysis in combination with finite elements and linear optimization. The computations have been performed with and without reinforcements for internal friction angles of sand varying from 30° to 45°. The effectiveness of the reinforcement in reducing the active earth pressure on the retaining walls is examined in terms of the active earth pressure coefficient, so as to present the solutions in a non-dimensional form. The active earth pressure coefficient is expressed as a function of the internal friction angle of sand, the interface friction angle between sand and reinforcement, the soil-wall interface roughness conditions, and the coefficient of horizontal seismic acceleration. It has been found that (i) there always exists a certain optimum depth of the reinforcement layers at which the active earth pressure coefficient becomes minimum, and (ii) the active earth pressure coefficient decreases significantly with an increase in the length of the reinforcements only up to a certain length, beyond which a further increase in length hardly causes any reduction in the active earth pressure. The optimum depth of the reinforcement layers and the required length of reinforcements corresponding to that optimum depth have been established. The numerical results developed in this analysis are expected to be useful for the design of retaining walls.
Keywords: active, finite elements, limit analysis, pseudo-static, reinforcement
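For orientation only, the sketch below evaluates the classical Mononobe-Okabe closed-form pseudo-static active earth pressure coefficient for a vertical wall and horizontal, unreinforced backfill; this is a conventional benchmark, not the limit-analysis/finite-element formulation with reinforcement used in the paper, and the wall friction and seismic coefficient values are illustrative assumptions.

```python
import numpy as np

def mononobe_okabe_kae(phi_deg, delta_deg, kh, kv=0.0):
    """Classical Mononobe-Okabe seismic active earth pressure coefficient
    for a vertical wall and horizontal, unreinforced backfill."""
    phi, delta = np.radians(phi_deg), np.radians(delta_deg)
    psi = np.arctan(kh / (1.0 - kv))          # seismic inertia angle
    num = np.cos(phi - psi) ** 2
    root = np.sqrt(np.sin(phi + delta) * np.sin(phi - psi) / np.cos(delta + psi))
    den = np.cos(psi) * np.cos(delta + psi) * (1.0 + root) ** 2
    return num / den

# Illustrative values: phi = 30-45 deg, wall friction = phi/2, kh = 0.2
for phi in (30, 35, 40, 45):
    kae = mononobe_okabe_kae(phi, delta_deg=phi / 2, kh=0.2)
    print(f"phi = {phi} deg -> K_AE = {kae:.3f}")
```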
Procedia PDF Downloads 365
544 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration
Authors: Mohammad Reza Esmaili
Abstract:
One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. In this regard, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is successfully carried out and the system finally reaches acceptable stability. The most common of these actions include load control, generation control and, in some cases, changing the network topology. Although these maneuvers are simple and common, due to the weak network and extreme load changes, the restoration proceeds at low speed. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these tools can reduce the SPA between two subsystems with more speed and accuracy, and the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator device that changes the transmission line power and properly adjusts the phase angle, is the option proposed in this research. With the optimal placement of a UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. Therefore, this paper provides an optimal structure to coordinate the three problems of improving the division of subsystems, reducing the SPA and optimizing the power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The proposed objective functions include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system. Since the simultaneous optimization of the proposed objective functions may create contradictions, the proposed optimization problem is formulated as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The technique proposed to implement the optimization process is the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto
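As a small illustration of the Pareto screening step in such a multi-objective formulation, the sketch below extracts the non-dominated set from synthetic candidate solutions scored on three minimized objectives; the objective values are invented, and the water cycle algorithm itself is not reproduced here.

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows, assuming every objective is minimized."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is <= in all objectives and < in at least one
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

rng = np.random.default_rng(2)
# Synthetic candidate UPFC placements scored on three minimized objectives:
# negative subsystem quality, standing phase angle, and system losses
objectives = rng.uniform(size=(200, 3))
front = objectives[pareto_front(objectives)]
print(f"{front.shape[0]} non-dominated candidates out of 200")
```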
Procedia PDF Downloads 67
543 Monitoring of 53 Contaminants of Emerging Concern: Occurrence in Effluents, Sludges, and Surface Waters Upstream and Downstream of 7 Wastewater Treatment Plants
Authors: Azziz Assoumani, Francois Lestremau, Celine Ferret, Benedicte Lepot, Morgane Salomon, Helene Budzinski, Marie-Helene Devier, Pierre Labadie, Karyn Le Menach, Patrick Pardon, Laure Wiest, Emmanuelle Vulliet, Pierre-Francois Staub
Abstract:
Seven French wastewater treatment plants (WWTPs) were monitored for 53 contaminants of emerging concern within a nation-wide monitoring campaign in surface waters, which took place in 2018. The overall objective of the 2018 campaign was to provide monitoring data for the prioritization exercise for emerging substances being carried out in 2021. This exercise should make it possible to update the list of relevant substances to be monitored (SPAS) as part of future Water Framework Directive monitoring programmes, which will be implemented in the next water body management cycle (2022). One sampling campaign was performed in October 2018 at the seven WWTPs, where effluent and sludge samples were collected. Surface water samples were collected in September 2018 at three to five sites upstream and downstream of the point of effluent discharge of each WWTP. The contaminants (36 biocides and 17 surfactants, selected by the Prioritization Experts Committee) were determined in the seven WWTP effluent and sludge samples and in the surface water samples by liquid or gas chromatography coupled with tandem mass spectrometry, depending on the contaminant. Nine surfactants and three biocides were quantified in at least one WWTP effluent sample. Linear alkylbenzene sulfonic acids (LAS) and fipronil were quantified in all samples; the LAS were quantified at the highest median concentrations. Twelve surfactants and 13 biocides were quantified in at least one sludge sample. The LAS and didecyldimethylammonium were quantified in all samples and at the highest median concentrations. Higher concentration levels of the substances quantified in WWTP effluent samples were observed in the surface water samples collected downstream of the effluent discharge points, compared with the samples collected upstream, suggesting a contribution of the WWTP effluents to the contamination of surface waters.
Keywords: contaminants of emerging concern, effluent, monitoring, river water, sludge
Procedia PDF Downloads 148
542 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile will be validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile that will be used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis will consist of varying the Mach number starting from M = 0.5, comparing the axial force coefficient, the normal force slope coefficient and the pitch moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients will be obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find the limiting Mach number value starting from which the effects of high fluid compressibility (related to the transonic flow regime) lead the XFlow simulations to differ from the experimental results. This will allow identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow, beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-Boltzmann method, LBM, lift, Mach, pitch
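A minimal sketch of the slope-extraction step is given below: a central finite-difference estimate of the normal force coefficient slope in the assumed linear range of the polar curve. The sample points are invented for illustration and are not the ANF experimental data.

```python
import numpy as np

# Illustrative polar-curve samples: angle of attack (deg) vs. normal force coefficient
alpha_deg = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
cn = np.array([-0.62, -0.31, 0.00, 0.31, 0.62])       # assumed linear range

alpha_rad = np.radians(alpha_deg)

# Central finite differences give the local slope dCN/dalpha (per radian)
slope = np.gradient(cn, alpha_rad)
cn_alpha = slope[len(slope) // 2]                      # slope around alpha = 0
print(f"C_N_alpha = {cn_alpha:.2f} per rad")
```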
Procedia PDF Downloads 421
541 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential velocity of change in the truly knowledge-intensive world today has increasingly bombarded organizations with unfathomable challenges. Hence organizations are introduced to strange lexicons of descriptors belonging to a new paradigm of who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that there is a lack of research measuring knowledge sharing through a multi-layered structure of ideas with, at its foundation, philosophical assumptions to support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measure knowledge sharing in emerging knowledge organizations. The research question is that, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge on who, what and how it should be done. The main premise of this research is based on the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge and where learning becomes the norm. The theoretical constructs were derived from and based on the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management. While it is realised that it might be somewhat problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs have been identified based on a comprehensive literature review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa which relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of the items and correct the wording of issues. Through analysis of the surveys collected, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated by Cronbach’s α for the two sections of the instrument, at organizational and individual levels. The construct validity was confirmed by using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management.
In addition, they realised the importance of consolidating their knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
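As an illustration of the reliability step mentioned above, the sketch below computes Cronbach's α for one section of a synthetic item matrix using the standard formula; the item responses are simulated, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = Likert items of one section."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
# Synthetic section: 8 items driven by one latent trait, hence internally consistent
latent = rng.normal(size=(120, 1))
responses = latent + rng.normal(scale=0.8, size=(120, 8))
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```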
Procedia PDF Downloads 196
540 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance required of material surfaces in wear and corrosion environments cannot be achieved by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limit their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade five titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous wave ytterbium laser system (YLS), attached to a KUKA robot which controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C, using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as the width, height and Heat Affected Zone (HAZ) of each sample, remarkably increased with increasing laser power due to the laser-material interaction. It was observed that higher amounts of aluminum and titanium were present in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through the in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform crack-free surface due to optimized laser process parameters, which led to the refinement of the coatings.
Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
Procedia PDF Downloads 178
539 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability of sustaining cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites with the natural bone HAp/biopolymer ratio were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and by freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy and X-ray Diffraction, while the overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computer Tomography (µ-CT). Following that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals were absent in both FTIR and XRD spectra and the specific spectral features of the polymers persisted with increasing GO load. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows a promising density gradient tuned for hybrid osseous/cartilage matter architectures, which was mirrored in the mouse model tests. Hence, the synthesis route of a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future' under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.
Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 104
538 Analysis of Extreme Rainfall Trends in Central Italy
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini
Abstract:
The trend in the magnitude and frequency of extreme rainfalls seems to differ depending on the investigated area of the world. In this work, the impact of climate change on extreme rainfalls in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitude ranging from 200 to more than 2000 m a.s.l. The climate is very different from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2 °C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with very large impact in economic terms. A least-squares linear trend analysis of annual maxima over 60 time series, selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h), showed about 50% positive and 50% negative cases. For the same time series, the non-parametric Mann-Kendall test with a significance level of 0.05 evidenced only 3% of cases characterized by a negative trend and no positive case. Further investigations have also demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis of the magnitude of extreme rainfalls indicates that an evident trend does not exist in the Umbria region. However, the frequency of rainfall events with particularly high rainfall depths occurring during a fixed period also has to be considered. For all selected stations, the 2-day rainfall events that exceed 50 mm were counted for each year, from the first monitored year to the end of 2015. This analysis also did not show predominant trends. Specifically, for all selected rain gauge stations, the annual number of 2-day rainfall events exceeding the threshold value (50 mm) was slowly decreasing in time, while the annual cumulated rainfall depths corresponding to the same events evidenced trends that were not statistically significant. Overall, by using a wide available dataset and adopting simple methods, no influence of climate change on heavy rainfalls in the Umbria region is detected.
Keywords: climate changes, rainfall extremes, rainfall magnitude and frequency, central Italy
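For illustration, the sketch below applies the non-parametric Mann-Kendall trend test, at the same 0.05 significance level, to a synthetic series of annual rainfall maxima; tie corrections are omitted for brevity, and the series is not the Umbria data.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series, alpha=0.05):
    """Mann-Kendall trend test (no tie correction) for a 1-D series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    trend = "no significant trend" if p >= alpha else ("increasing" if z > 0 else "decreasing")
    return s, z, p, trend

rng = np.random.default_rng(4)
annual_max_1h = rng.gamma(shape=4.0, scale=8.0, size=95)   # synthetic 1921-2015 maxima (mm)
print(mann_kendall(annual_max_1h))
```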
Procedia PDF Downloads 236
537 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment
Authors: Joan Edwards, J. Lawlor
Abstract:
Technological innovation is frequently defined in terms of bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature, and it includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend that technological trajectories are part of technological paradigms, which arise from the expectations and desires of industry agents and result in co-evolution, it may be realised that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro level of developing a technology. Accordingly, this may be zoomed out to realise how these embedded social structures influence activities and expectations at a macro level and can ultimately strategically shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment - within which the social and cognitive ‘identity-shaping’ elements of the innovation process occur. Identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment it joins. This identity-shaping quality is termed ‘performative’. This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements, which are identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the roles of the innovator and the technology change over time - constituting a more conducive environment for successful innovation.
Keywords: affordances, framework, performativity, strategic innovation
Procedia PDF Downloads 206
536 Federalizing the Philippines: What Does It Mean for the Igorot Indigenous Peoples?
Authors: Shierwin Agagen Cabunilas
Abstract:
The unitary form of Philippine government has built a tradition of bureaucracy that strengthened oligarch and clientele politics. Consequently, the Philippines has lagged behind in development. There is so much poverty, unemployment, and inadequate social services. In addition, it seems that the rights of national ethnic minority groups like the Igorots to develop their political and economic interests and their linguistic and cultural heritage are neglected. Given these circumstances, a paradigm shift is inevitable. The author advocates a transition from a unitary to a federal system of government. Contrary to the notion that a unitary system facilitates better governance, it actually stifles it. As a unitary government, the Philippines seems (a) to exhibit incompetence in delivering efficient, necessary services to the people and (b) to exclude the minority from political participation and policy making. This shows that the Philippine unitary system is highly centralized and operates from a top-bottom scheme. However, a federal system encourages decentralization, plurality and political participation. In my view, federalism is beneficial to Philippine society and congenial to the Igorot indigenous peoples insofar as participative decision-making and development goals are concerned. This research employs critical and constructive analyses. The former interprets some complex practices of Philippine politics while the latter investigates how theories of federalism can be appropriated to deal with political deficits, ethnic diversity, and indigenous peoples’ rights to self-determination. The topic is developed accordingly: First, the author briefly examines the unitary structure of the Philippines and its impact on inter-governmental affairs and processes, asserting that bureaucracy and corruption, for example, are counterproductive to a participative political life, to economic development and to the recognition of national ethnic minorities. Second, he scrutinizes why federalism might transform this. Here, he assesses various opposing philosophical contentions on federal systems in managing an ethnically diverse society like the Philippines, and argues that decentralization of political power and economic and cultural development are reasons to exit from a unitary government. Third, he suggests that federalism can be instrumental to Igorot self-determination. Self-determination is opposed neither to national development nor to the ideals of democracy – liberty, justice, solidarity. For example, as others have already noted, politics in the vernacular facilitates greater participation among the people. Hence, there is a greater chance of arriving at policies that serve the interest of the people. Some may worry that decentralization disintegrates a nation. According to the author, however, the recognition of minority rights, which includes self-determination, may promote filial devotion to the state. If the Igorot indigenous peoples have access to suitable institutions to determine their political life, economic goals and social needs, i.e., education, culture and language, chances are it will move the country forward to development, fostering national unity. Remarkably, a federal system thus best responds to the Philippines’ democratic and development deficits. Federalism can also significantly rectify the practices that oppress and dislocate national ethnic minorities, as it ensures the creation of localized institutions for optimum political, economic and cultural determination and maximizes representation in the public sphere.
Keywords: federalism, Igorot, indigenous peoples, self-determination
Procedia PDF Downloads 340
535 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy
Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani
Abstract:
Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable as, by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that can increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation and excellent S/N, the LDIR could also be implemented for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles and tissues. This presentation will highlight a few of them and focus on the benefits of the LDIR versus classical techniques.
Keywords: QCL, automation, microplastics, tissues, infrared, speed
Procedia PDF Downloads 67
534 Effect of Immunocastration Vaccine Administration at Different Doses on Performance of Feedlot Holstein Bulls
Authors: M. Bolacali
Abstract:
The aim of the study is to determine the effect of immunocastration vaccine administration at different doses on the fattening performance of feedlot Holstein bulls. Bopriva® is a vaccine that stimulates the animal's own immune system to produce specific antibodies against gonadotropin releasing factor (GnRF). Ninety-four Holstein male calves (309.5 ± 2.58 kg live body weight and 267 d old) were assigned to the 4 treatments. In the control group, 1 mL of 0.9% saline solution was injected subcutaneously into intact bulls on the 1st and 60th days of the feedlot period as a placebo. On the same days, Bopriva® was injected subcutaneously at doses of 1 mL and 1 mL for the Trial-1 group, 1.5 mL and 1.5 mL for the Trial-2 group, and 1.5 mL and 1 mL for the Trial-3 group. The study was conducted in a private establishment in the Sirvan district of Siirt province and lasted 180 days. The animals were weighed at the beginning of fattening and at 30-day intervals to determine their live weights at the various periods. The statistical analysis of the normally distributed data from the treatment groups was carried out with the general linear model procedure of SPSS software. The initial fattening live weight in the Control, Trial-1, Trial-2 and Trial-3 groups was 309.21, 306.62, 312.11, and 315.39 kg, respectively. The final fattening live weight was 560.88, 536.67, 548.56, and 548.25 kg, respectively. The daily live weight gain during the trial was 1.40, 1.28, 1.31, and 1.29 kg/day, respectively. The cold carcass yield was 51.59%, 50.32%, 50.85%, and 50.77%, respectively. Immunocastration vaccine administration at different doses did not affect the live weights and cold carcass yields of Holstein male calves reared under intensive conditions (P > 0.05). However, it was found to reduce fattening performance between days 61-120 (P < 0.05) and days 1-180 (P < 0.01). In addition, the best performance among the vaccine-treated groups occurred in the group administered 1.5 mL of vaccine on the 1st and 60th study days. In animals, castration is used to control fertility and aggressive and sexual behaviors. Whereas physical castration induces stress, active immunization against GnRF maintains performance while maximizing welfare in bulls, improves carcass and meat quality, and controls unwanted sexual and aggressive behavior. Considering such features, it may be suggested that the immunocastration vaccine Bopriva® can be administered as a 1.5 mL dose on the 1st and 60th days of the fattening period in Holstein bulls.Keywords: anti-GnRF, fattening, growth, immunocastration
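Editor's note: as a minimal sketch of the type of analysis named in the abstract (a general linear model comparing treatment groups), the Python snippet below fits an ordinary-least-squares model with a treatment factor and runs an F-test. The group labels and daily-gain values are invented for illustration; the study's SPSS procedure and actual dataset are not reproduced here.

```python
# Hypothetical one-way general linear model (ANOVA) on daily live weight gain
# by treatment group, analogous in spirit to the SPSS GLM procedure cited in
# the abstract. All data values are invented for illustration only.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "group": ["Control"] * 4 + ["Trial1"] * 4 + ["Trial2"] * 4 + ["Trial3"] * 4,
    "adg":   [1.42, 1.38, 1.41, 1.39,   # Control
              1.30, 1.26, 1.27, 1.29,   # Trial-1 (1 mL + 1 mL)
              1.33, 1.30, 1.32, 1.29,   # Trial-2 (1.5 mL + 1.5 mL)
              1.31, 1.27, 1.30, 1.28],  # Trial-3 (1.5 mL + 1 mL)
})

model = smf.ols("adg ~ C(group)", data=data).fit()  # general linear model
print(anova_lm(model, typ=2))                       # F-test for the group effect
print(model.params)                                 # estimated group differences
```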
Procedia PDF Downloads 193
533 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.Keywords: community resilience, conflation, flood risk, nuisance flooding
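Editor's note: to make the conflation operation concrete, the sketch below forms the normalized product of two exponential recovery-time densities numerically and compares the mean with the closed form (for exponentials, the conflation is again exponential with the summed rate). The rate parameters are assumed for demonstration and are not values from the study.

```python
# Illustrative sketch of conflation: the normalized product of two probability
# density functions. Rate parameters below are assumed for demonstration only.
import numpy as np
from scipy.stats import expon

lam_severe = 1.0 / 30.0    # assumed mean recovery time of 30 days (severe events)
lam_nuisance = 1.0 / 3.0   # assumed mean recovery time of 3 days (nuisance events)

x = np.linspace(0.0, 60.0, 20001)
dx = x[1] - x[0]

f1 = expon.pdf(x, scale=1.0 / lam_severe)
f2 = expon.pdf(x, scale=1.0 / lam_nuisance)

product = f1 * f2
conflated = product / (product.sum() * dx)   # normalize the product to unit area

mean_numeric = (x * conflated).sum() * dx
mean_closed_form = 1.0 / (lam_severe + lam_nuisance)  # Exp(lam1) & Exp(lam2) = Exp(lam1 + lam2)
print(f"conflated mean recovery time: {mean_numeric:.3f} days "
      f"(closed form {mean_closed_form:.3f} days)")
```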
Procedia PDF Downloads 105
532 Preparation and Characterization of Anti-Acne Dermal Products Based on Erythromycin β-Cyclodextrin Lactide Complex
Authors: Lacramioara Ochiuz, Manuela Hortolomei, Aurelia Vasile, Iulian Stoleriu, Marcel Popa, Cristian Peptu
Abstract:
Local antibiotic therapy is one of the most effective acne therapies. Erythromycin (ER) is a macrolide antibiotic that has been topically administered for over 30 years in the form of gel, ointment or hydroalcoholic solution for acne therapy. The use of ER as a base for topical dosage forms raises some technological challenges due to the physicochemical properties of this substance. The main disadvantage of ER is its poor water solubility (2 mg/mL), which limits both formulation using hydrophilic bases and skin permeability. Cyclodextrins (CDs) are biocompatible cyclic oligomers of glucose, with a hydrophobic core and a hydrophilic exterior. CDs are used to improve the bioavailability of drugs by increasing their solubility and/or their rate of dissolution after including the poorly water-soluble substances (such as ER) in the hydrophobic cavity of the CDs. Adding CDs leads to increased solubility and improved stability of the drug substance, increased permeability of substances of low water solubility, decreased toxicity and even active dose reduction as a result of increased bioavailability. CDs also increase skin tolerability by reducing the irritant effect of certain substances. We have included ER in lactide-modified β-cyclodextrin in order to improve the therapeutic effect of topically administered ER. The aims of the present study were to synthesise and characterise a new prolonged-release complex of ER with lactide-modified β-cyclodextrin (CD-LA_E), to investigate the CD-LA_E complex by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR), and to analyse the effect of the semisolid base on the in vitro and ex vivo release characteristics of ER in the CD-LA_E complex by assessing the permeability coefficient and the release kinetics through fitting to mathematical models. SEM showed that, by complexation, ER changes its crystal structure and enters the amorphous phase. FTIR analysis showed that certain specific bands of some groups in the ER structure shift during the encapsulation process. The CD-LA_E complex has a molar ratio of 2.12 to 1 between lactide-modified β-cyclodextrin and ER. The three semisolid bases (2% Carbopol, 13% Lutrol 127 and an organogel based on Lutrol and isopropyl myristate) show a good capacity for incorporating the CD-LA_E complex, having an active ingredient content ranging from 98.3% to 101.5% relative to the declared value of 2% ER. The results of the in vitro dissolution test showed that the ER solubility was significantly increased by CD encapsulation. The amount of ER released from the CD-LA_E gels was in the range of 76.23% to 89.01%, whereas gels based on plain ER released at most 26.01% ER. The ex vivo dissolution test confirms the increased ER solubility achieved by complexation and supports the assumption that the use of this process might increase ER permeability. The highest permeability coefficient was obtained for ER released from the gel based on 2% Carbopol: 33.33 μg/cm2/h in vitro and 26.82 μg/cm2/h ex vivo, respectively. The release of complexed ER proceeds by Fickian diffusion, according to the results obtained by fitting the data to the Korsmeyer-Peppas model.Keywords: erythromycin, acne, lactide, cyclodextrin
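Editor's note: as an illustration of the release-kinetics fitting mentioned above, the sketch below fits the Korsmeyer-Peppas equation Mt/Minf = k·t^n to an invented release profile. The time points and release fractions are hypothetical, and reading n close to 0.5 as Fickian diffusion is the standard interpretation for thin films, not a result taken from this study.

```python
# Illustrative Korsmeyer-Peppas fit, Mt/Minf = k * t**n, on invented release
# data (not the study's measurements). An exponent near 0.5 is conventionally
# read as Fickian diffusion for a thin film.
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_h = np.array([0.5, 1, 2, 4, 6, 8, 12])                          # time, hours (hypothetical)
released = np.array([0.18, 0.26, 0.37, 0.52, 0.63, 0.72, 0.85])   # Mt/Minf (hypothetical)

# Fit only the first ~60% of release, as is customary for this model
mask = released <= 0.6
(k, n), _ = curve_fit(korsmeyer_peppas, t_h[mask], released[mask], p0=(0.2, 0.5))
print(f"k = {k:.3f} h^-n, n = {n:.2f}")
```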
Procedia PDF Downloads 268
531 Developing Environmental Engineering Alternatives for Deep Desulphurization of Transportation Fuels
Authors: Nalinee B. Suryawanshi, Vinay M. Bhandari, Laxmi Gayatri Sorokhaibam, Vivek V. Ranade
Abstract:
Deep desulphurization of transportation fuels is a major environmental concern all over the world, and recently prescribed norms require sulphur concentrations below 10 ppm in fuels such as diesel and gasoline. The existing technologies, largely based on catalytic processes such as hydrodesulphurization and oxidation, require newer catalysts and entail a high cost of deep desulphurization, whereas adsorption-based processes have limitations due to their lower capacity for sulphur removal. The present work is an attempt to provide alternatives to the existing methodologies using a newer non-catalytic process based on hydrodynamic cavitation. The developed process requires appropriately combining the organic and aqueous phases under ambient conditions and passing the mixture through a cavitating device such as an orifice, venturi or vortex diode. The implosion of vapour cavities formed in the cavitating device generates oxidizing species in situ, which react with the sulphur moiety, resulting in the removal of sulphur from the organic phase. In this work, an orifice was used as the cavitating device, and deep desulphurization was demonstrated for the removal of thiophene as a model sulphur compound from synthetic fuels of n-octane, toluene and n-octanol. The effects of sulphur concentration (up to 300 ppm), the nature of the organic phase and pressure drop (0.5 to 10 bar) are discussed. A very high sulphur removal of more than 90% was demonstrated. The process is easy to operate, works essentially at ambient conditions, and the ratio of aqueous to organic phase can be easily adjusted to maximise sulphur removal. Experimental studies were also carried out using commercial diesel as a solvent, and the results substantiate similarly high sulphur removal. A comparison of the two cavitating devices, one with a linear flow and one using vortex flow for effecting pressure drop and cavitation, indicates similar trends in sulphur removal behaviour. The developed process is expected to provide an attractive environmental engineering alternative for deep desulphurization of transportation fuels.Keywords: cavitation, petroleum, separation, sulphur removal
Procedia PDF Downloads 381
530 Fully Autonomous Vertical Farm to Increase Crop Production
Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek
Abstract:
New technologies in agriculture are opening new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap compared to traditional agricultural techniques. In particular, the indoor farming sector is the one that will benefit most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities are based both on hardware development, such as automatic tools to perform different operations on soil and plants, and on research to introduce extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multi-platform or tele-operated robots, with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multi-platform systems still presents innumerable challenges that require a strongly multidisciplinary approach from the design, development, and implementation phases onward. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to assess the real effectiveness of the systems created and to evaluate any weaknesses so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepens the knowledge of these types of crops and guarantees the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow synergistic operation of all systems.Keywords: automation, vertical farming, robot, artificial intelligence, vision, control
Procedia PDF Downloads 45
529 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience
Authors: Amanda Kavner, Richard Lamb
Abstract:
Faster growth in science and technology in other nations may make staying globally competitive more difficult without shifting focus to how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions are accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, therefore improving science literacy.Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience
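Editor's note: the sketch below reproduces the general shape of such a model outside RapidMiner, using scikit-learn's gradient-boosted trees with the quoted ensemble size (140 trees) and depth (7) on synthetic stand-in features. The feature set, data, and resulting accuracy are illustrative and are not the Project LENS results.

```python
# Illustrative gradient boosted trees classifier with the ensemble size and
# depth quoted in the abstract (140 trees, max depth 7), trained on synthetic
# stand-in data rather than the Project LENS fNIR measurements.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic features standing in for problem number, response time and
# per-optode hemodynamic responses
X, y = make_classification(n_samples=2000, n_features=18, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Ranking feature importances mirrors the abstract's identification of the
# strongest predictors of a successful mental rotation
top = sorted(enumerate(model.feature_importances_), key=lambda p: -p[1])[:3]
print("top features (index, importance):", top)
```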
Procedia PDF Downloads 121
528 The Evolving Changes of Religious Behavior: An Exploratory Study on Guanyin Worship of Contemporary Chinese Societies
Authors: Judith Sue Hwa Joo
Abstract:
Guanyin (Avalokiteśvara in Sanskrit), the Bodhisattva of Mercy and Compassion, is the most widely worshipped Buddhist divinity in Chinese societies and is also revered by more than half of the Asian populations across various countries. The most compelling reason for the popularity of Guanyin in Chinese societies is, according to the Lotus Sutra, that Guanyin perceives the voices of those suffering from immense afflictions and troubles and liberates them when they wholeheartedly call upon his/her holy name. Its pervasive social influence has spanned more than two thousand years and still deeply affects the lives of most Chinese people. This study aimed to investigate whether Guanyin Worship has evolved and changed in modern Chinese societies across the Taiwan Strait. Taiwan and China, albeit having the same language and culture, have been territorially divided and governed by two different political regimes for over 70 years. It would be scientifically intriguing to unveil any substantial changes in religious behaviors in the context of Guanyin Worship. A comprehensive anonymous questionnaire survey of Chinese communities was conducted from October 2017 to May 2019 across various countries, mostly in the China, Taiwan, and Hong Kong areas. Since religious surveys are officially prohibited in China, the study was difficult and could only be carried out by means of snowball sampling. Demographic data (age, sex, education, religious belief) were registered, and Guanyin's salvation functions in various confronting situations were investigated. Psychological dimensions of religious belief in Guanyin were probed in terms of the worship experience, the willingness of veneration, and egoistic or altruistic ideations. A literature review of documented functional attributes was carried out in parallel for comparison analyses with traditional roles. In total, 1123 valid samples out of 1139 were obtained. Statistical analysis revealed that Guanyin Worship is still commonly practiced and deeply rooted in the hearts of Chinese people regardless of gender, age, education, and residential area, even though they may not enshrine Guanyin at home nowadays. The conventional roles of Guanyin Bodhisattva are still valid and best satisfy the real interests of modern lifestyles. When comparing the traditional Buddhist sutras and the documented literature, the divine power of modern Guanyin has notably expanded to recover, protect and transform fetal and infant spirits, owing to sexual liberation, the increased abortion rate, gender awakening and enhanced female autonomy in reproductive decisions. However, the One-Child policy may have critically impacted the trajectory of Guanyin Worship, so that people in China, more than those in Taiwan, pray for aborted lives or premature deaths. Furthermore, particularly in Hong Kong and Macao, Guanyin not only serves as the sea guardian for fishermen but also takes on a new function as the God of Wealth. The divine powers and salvation functions of Guanyin are indeed evolving and expanding to meet modern psychosocial, cultural and societal needs. This study sheds light on the modernization process of the two-thousand-year-old Guanyin Worship in contemporary Chinese societies.Keywords: Buddhism, Guanyin, religious behavior, salvation function
Procedia PDF Downloads 114
527 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for disease stage, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on a parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using the ultrasound data (11 acoustical, 4 shape and 15 textural parameters) and 55 quantitative features of the dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each melanocytic skin tumour were evaluated during routine histological examination after excision and used as a reference. The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
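Editor's note: as a minimal sketch of the optical-image segmentation step named above (Otsu thresholding), the snippet below thresholds a synthetic, SIAgraph-like intensity map. The image is generated rather than patient data, and the system's downstream feature extraction and linear-regression classification are not reproduced here.

```python
# Minimal sketch of Otsu-threshold segmentation of an optical (spectrophotometric)
# image, run on a synthetic intensity map rather than real SIAgraph data.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic 128x128 "lesion" image: darker circular region on a brighter background
yy, xx = np.mgrid[0:128, 0:128]
lesion = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
image = np.where(lesion, 0.35, 0.75) + rng.normal(0, 0.05, (128, 128))

t = threshold_otsu(image)   # global threshold separating lesion from background
mask = image < t            # lesion pixels fall in the darker class

area_px = int(mask.sum())
print(f"Otsu threshold: {t:.3f}, segmented lesion area: {area_px} pixels")
```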
Procedia PDF Downloads 270
526 Microscale Observations of a Gas Cell Wall Rupture in Bread Dough during Baking and Confrontation to 2D/3D Finite Element Simulations of Stress Concentration
Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas
Abstract:
Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell wall (GCW) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film about the size of a starch granule. When such a size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, no work has observed GCW rupture under baking conditions at the GCW scale. In addition, attempts to understand GCW rupture numerically are usually not performed at the GCW scale and often consider GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with gluten/starch interactions and their impact on the mechanical behavior of a dough film; however, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macroscope placed in front of a two-chamber device was used to observe the rupture of a real GCW of 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure and moisture). Various pressure differences between the two sides of the GCW were applied, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the rheological moduli ratio on the mechanical behavior of a GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was used to match the finite strain involved in the GCW during baking. Stress concentration within the GCW was identified. The simulated stress concentrations are discussed in the light of the GCW failure observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the stress levels that can be reached in the GCW.Keywords: dough, experimental, numerical, rupture
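Editor's note: for readers unfamiliar with finite-strain constitutive laws, the expressions below show one common incompressible hyperelastic form (neo-Hookean) and the nominal stress it gives in uniaxial extension. This is a generic illustration of the class of models referred to; the abstract does not specify which strain-energy function was actually used.

```latex
% One common incompressible hyperelastic law (neo-Hookean), shown only as an
% illustration of the class of finite-strain models referred to in the abstract.
W = \frac{\mu}{2}\left(I_1 - 3\right), \qquad
I_1 = \lambda_1^2 + \lambda_2^2 + \lambda_3^2
% Uniaxial extension with stretch \lambda (incompressibility gives
% \lambda_2 = \lambda_3 = \lambda^{-1/2}):
P = \mu\left(\lambda - \lambda^{-2}\right)
% where \mu is the shear modulus and P the nominal (first Piola-Kirchhoff) stress.
```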
Procedia PDF Downloads 122