Search results for: weak formulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2032

52 Keratin Reconstruction: Evaluation of Green Peptides Technology on Hair Performance

Authors: R. Di Lorenzo, S. Laneri, A. Sacchi

Abstract:

Hair surface properties affect hair texture and shine, whereas the healthy state of the hair cortex governs the condition of the hair ends. Even though cosmetic treatments are intrinsically safe, they can exert a potentially damaging action on the hair fibers. Loss of luster, frizz, split ends, and other hair problems are particularly prevalent among people who repeatedly alter the natural style of their hair or among people with intrinsically weak hair. Technological and scientific innovations in hair care thus become invaluable allies to preserve its natural well-being and shine. The study evaluated restorative keratin-like ingredients that improve hair fibers' structural integrity, increase tensile strength, and improve hair manageability and moisturization. The hair shaft is composed of 65-95% keratin, which gives the hair resistance, elasticity, and plastic properties and also contributes to its waterproofing. Providing exogenous keratin is, therefore, a practical approach to protect and nourish the hair. By analyzing the amino acid composition of keratin, we find a high frequency of hydrophobic amino acids, which confirms the critical role of interactions, mainly hydrophobic, between cosmetic products and hair. The active ingredient analyzed is derived from vegetable proteins through an enzymatic cleavage process that selects only oligo- and polypeptides (> 3500 kDa) rich in amino acids with apolar hydrocarbon or sulfur-containing side chains. These are the most abundant amino acids in the keratin structure of the hair, which ensures the greatest possible compatibility with the target substrate. Given the biological variability of the sources, it is not easy to define a constant and reproducible molecular formula of the product; still, it consists of hydroxypropyltrimonium vegetable peptides with keratin-like performance. Twenty natural hair tresses (30 cm in length and 0.50 g in weight) were treated with the investigated products (5% v/v aqueous solution) following a specific protocol and compared with non-treated (Control) and benchmark-keratin-treated strands (Benchmark). Their brightness, moisture content, cortical and surface integrity, and tensile strength were evaluated and statistically compared. Keratin-like treated hair tresses showed better results than the other two groups (Control and Benchmark). The product improves the surface with a significant regularization of the cuticle closure, improves the filling of the cortex and the peri-medullar area, gives a highly organized and tidy structure, delivers a significant amount of sulfur to the hair, provides more efficient moisturization and imbibition power, and increases hair brightness. The hydroxypropyltrimonium quaternized group added to the C-terminal end interacts with the negative charges that form on the hair after washing, when it is disheveled and tangled. These interactions anchor the product to the hair surface, keeping the cuticles adhered to the shaft. The small size allows the peptides to penetrate and give body to the hair, together with a conditioning effect that gives an image of healthy hair. Results suggest that the product is a valid ally in numerous restructuring/conditioning, shaft-protection, and straightener/dryer-damage-prevention hair care products.

Keywords: conditioning, hair damage, hair, keratin, polarized light microscopy, scanning electron microscope, thermogravimetric analysis

Procedia PDF Downloads 101
51 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross over classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Denis Gabor’s invention: holography. However, the major difficulty here is the lack of a suitable recording medium. Some enhancements were therefore essential, and the 2D version of bulk metamaterials has been introduced: the so-called metasurface. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important approach to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution and reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equations tends to become more complicated and time-consuming as the structural complexity increases. Here, the use of the equivalent circuit method offers the most scalable way to develop an integral method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of the Generalised Equivalent Circuit was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm while taking its environment into account. The electromagnetic state of the discontinuity plane is thus described by generalised test functions, which are modelled by virtual sources that do not store energy. The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept and of graphene as a pillar constituent element at Terahertz frequencies. The metasurface’s building block consists of a thin gold film, a dielectric spacer (SiO₂) and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that varying the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient at each element of the array to be controlled. From the results obtained here, we were able to determine that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
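As a rough illustration of the tuning mechanism invoked above, the following minimal sketch evaluates the intraband (Drude-like) term of the Kubo surface conductivity of graphene as a function of chemical potential at THz frequencies; the relaxation time, temperature and frequency values are illustrative assumptions, and the sketch does not reproduce the MoM-GEC analysis itself.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
kB = 1.380649e-23        # Boltzmann constant [J/K]

def graphene_sigma_intraband(freq_hz, mu_c_eV, tau_s=1e-13, T=300.0):
    """Intraband (Drude-like) term of graphene's Kubo surface conductivity [S].

    freq_hz : frequency [Hz]; mu_c_eV : chemical potential [eV];
    tau_s : phenomenological relaxation time [s]; T : temperature [K].
    """
    w = 2.0 * np.pi * freq_hz
    mu_c = mu_c_eV * e
    pref = -1j * e**2 * kB * T / (np.pi * hbar**2 * (w - 1j / tau_s))
    return pref * (mu_c / (kB * T) + 2.0 * np.log(np.exp(-mu_c / (kB * T)) + 1.0))

# Sweep the chemical potential at 1 THz: both the magnitude and the phase of the
# surface conductivity change, which is what drives the reflection-phase tuning.
for mu in (0.1, 0.3, 0.5, 0.7):
    s = graphene_sigma_intraband(1e12, mu)
    print(f"mu_c = {mu:.1f} eV -> sigma = {1e3*s.real:.3f} {1e3*s.imag:+.3f}j mS")
```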

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 150
50 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin

Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi

Abstract:

During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic Rift). During this rifting phase, the Moroccan Meseta experienced an extensive tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-El Gara-Berrechid basin. Thus, it is essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the Lower clay-salt series attributed to the Triassic and the Upper clay-salt series attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. The detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen types of facies, and eight architectural elements and facies associations in the Triassic series. A progressive decrease observed in paleo-slope over time led to the evolution of the paleoenvironment from a proximal system of alluvial fans to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats and lagoons had developed. The pure and massive halitic facies at the top of the series probably indicate an evolution of the depositional environment towards a shallow subtidal environment. The presence of these evaporites indicates a climate that favored their precipitation, in this case, a fairly hot and humid climate. The sedimentological analysis of the supra-basaltic part shows that during the Lower Liassic, the paleo-slope after the basaltic effusion remained weak, with distal environments. The faciological analysis revealed the presence of four major sandstone, silty, clayey and evaporitic lithofacies organized in two mega-sequences: the sedimentation of the first rock-salt mega-sequence took place in a free brine-depression system, followed by saline mudflats under continental influences. The upper clay mega-sequence displays facies documenting sea-level fluctuations from the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic, but experienced a sudden rupture with the emission of basaltic flows, which are interstratified in the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, would come from fissural volcanism probably carried out through transfer faults located in the NW and SE of the basin. Its emplacement is probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea and controlled by a progressive expansion triggered in the West, in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.

Keywords: Basalt, CAMP, Liassic, sedimentology, Triassic, Morocco

Procedia PDF Downloads 41
49 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach

Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa

Abstract:

Drug discovery is shifting from small-molecule-based drugs targeting local active sites to middle molecules (MM) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as “difficult to drug” with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with various highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for “undruggable” intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, and so evaluating this property for potential MMs is crucial. Computational prediction of cell membrane permeability is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane environment should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work is aimed at studying the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, a natural compound, syringolin, and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Møller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues pass through the medium on the nanosecond scale. This correlates well with existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of these analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations. Insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for amicable membrane permeation. These results are interesting and are a nice example of how this theoretical calculation approach can be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
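For orientation, a common post-processing route for membrane-permeation MD data (not the FMO analysis used here) is the inhomogeneous solubility-diffusion model, in which the permeability follows from a free-energy profile G(z) and a local diffusivity D(z) along the membrane normal. The minimal sketch below uses purely hypothetical G(z) and D(z) profiles just to show the bookkeeping.

```python
import numpy as np

# Hypothetical free-energy profile G(z) [kcal/mol] and local diffusivity D(z) [cm^2/s]
# along the membrane normal; in practice both would be extracted from the MD runs.
z = np.linspace(-35.0, 35.0, 141)                    # membrane normal [Angstrom]
G = 6.0 * np.exp(-(z / 12.0) ** 2)                   # barrier centred in the bilayer core
D = 1.0e-5 * (0.4 + 0.6 * (np.abs(z) / 35.0) ** 2)   # slower diffusion in the core

kT = 0.593  # kcal/mol at ~298 K

# Inhomogeneous solubility-diffusion model: 1/P = integral of exp(G/kT)/D over z
integrand = np.exp(G / kT) / D
z_cm = z * 1.0e-8                                    # Angstrom -> cm
resistance = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z_cm))
P = 1.0 / resistance                                 # permeability [cm/s]
print(f"estimated permeability P = {P:.3e} cm/s")
```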

Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation

Procedia PDF Downloads 145
48 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation, over several months, of an AI/Robotics research team focusing on collaborative robotics. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. Furthermore, this article illustrated that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The model proposed adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point understanding of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti)-capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 42
47 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity

Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina

Abstract:

Bioactive substances of plant origin can be one of the advanced means of addressing the issue of combined therapy for inflammation. The main advantages of medicinal plants are the mildness and breadth of their therapeutic effect on an organism, the absence of side effects and complications even if used continuously, and high tolerability by patients. Moreover, medicinal plants are often the only and/or cost-effective sources of natural biologically active substances and medicines. Along with other biologically active groups of chemical compounds, essential oils with a wide range of pharmacological effects have become very ingrained in medical practice. Essential oil was obtained by hydrodistillation of the air-dry aerial part of Polygonum L. plants using a Clevenger apparatus. The qualitative composition of the essential oils was analyzed by chromatography-mass-spectrometry using an Agilent 6890N apparatus. The qualitative analysis is based on the comparison of retention times and full mass spectra with the respective data on components of reference oils and pure compounds, where available, and with the data of the mass-spectral libraries Wiley 7th edition and NIST 02. The main components of the essential oils are, for Polygonum amphibium L. - γ-terpinene, borneol, piperitol, 1,8-cineole, α-pinene, linalool, terpinolene and sabinene; for Polygonum minus Huds. Fl. Angl. – linalool, terpinolene, camphene, borneol, 1,8-cineole, α-pinene, 4-terpineol and 1-octen-3-ol; for Polygonum alpinum All. – camphene, sabinene, 1-octen-3-ol, 4-carene, p- and o-cymol, γ-terpinene, borneol, -terpineol; for Polygonum persicaria L. - α-pinene, sabinene, -terpinene, 4-carene, 1,8-cineole, borneol, 4-terpineol. Antibacterial activity was studied against strains of the gram-positive bacteria Staphylococcus aureus, Bacillus subtilis, and Streptococcus agalactiae, against the gram-negative strain Escherichia coli, and against the yeast fungus Candida albicans using the agar diffusion method. The reference medicines were gentamicin for bacteria and nystatin for the yeast fungus Candida albicans. It has been shown that Polygonum L. essential oils have a moderate antibacterial effect on gram-positive microorganisms and weak antifungal activity against the Candida albicans yeast fungus. At the second stage of our research, the wound-healing properties of an ointment form of 3% essential oil were studied on a model of flat dermal wounds. The speed of wound healing in rats of the different groups was judged by assessing the wound area over time. During the study of the wound-healing properties, no disturbances of the general condition and behavior of the animals, food intake, or excretion were observed in either group. The wound-healing action of the 3% ointment based on Polygonum L. essential oil and polyethylene glycol is comparable with the action of the reference substances. As more favorable healing dynamics were observed in the experimental group than in the control group, the tested ointment can be deemed promising for further detailed study as a wound-healing agent.

Keywords: antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.

Procedia PDF Downloads 502
46 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. The field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. From field data it is clear that vehicle following is not as strict as in homogeneous, lane-disciplined conditions, and neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, could also influence the subject vehicle's choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable and needs to be modified appropriately. To address these issues, this paper attempts to analyze the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on the time gap distribution is quantified in this paper. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay, which further helps to characterise vehicular fuel consumption and emissions on urban roads of India. Identifying and analyzing exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models and quantify them by lead-lag pair, manoeuvre type and lateral position characteristics in heterogeneous, non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of the simulation results and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban mid-block sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures like density, speed, flow, capacity, area occupancy, etc. under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Units), capacity and level of service of the system are also discussed.
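As a minimal sketch of the statistical modelling and parameter estimation step, the snippet below fits a candidate time-gap distribution and samples from it for vehicle generation; the lognormal family, the synthetic data and the parameter values are assumptions for illustration, not the calibrated distributions of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical time gaps [s] between consecutive vehicles crossing a section;
# in the study these would come from videographic data, classified by
# lead-lag vehicle pair, manoeuvre type and lateral placement.
time_gaps = rng.lognormal(mean=0.3, sigma=0.6, size=500)

# Fit a candidate distribution (lognormal assumed here) with location fixed at 0
shape, loc, scale = stats.lognorm.fit(time_gaps, floc=0.0)

# Goodness-of-fit check (Kolmogorov-Smirnov) before using the fit in simulation
ks_stat, p_value = stats.kstest(time_gaps, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: shape={shape:.3f}, scale={scale:.3f}, KS p-value={p_value:.3f}")

# Sampling from the calibrated distribution to generate vehicles in a simulation
simulated_gaps = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=10, random_state=1)
print(np.round(simulated_gaps, 2))
```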

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 318
45 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that is required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations such as matrix-matrix and matrix-vector products for the calculation of the solution in every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
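A minimal sketch of the Lustre-oriented reading strategy described above (each rank acquiring its own slice of a shared pre-file with collective MPI I/O) is given below using mpi4py; the file name, record size and layout are assumptions rather than the actual DSEM pre-file format.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nranks = comm.Get_size()

# Assumed layout: the file "startup.pre" holds nranks * ints_per_rank int32 values,
# one contiguous block per rank. Both the name and the layout are placeholders.
ints_per_rank = 1024
bytes_per_int = np.dtype(np.int32).itemsize
offset = rank * ints_per_rank * bytes_per_int

buf = np.empty(ints_per_rank, dtype=np.int32)

fh = MPI.File.Open(comm, "startup.pre", MPI.MODE_RDONLY)
fh.Read_at_all(offset, buf)              # collective read: every rank reads its slice
fh.Close()

print(f"rank {rank}/{nranks} read {buf.size} integers starting at byte {offset}")
```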

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 106
44 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements

Authors: Miria Gambardella

Abstract:

The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constantly present over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement’s choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives from Chiapas is exported, therefore making coffee trade the main income from international solidarity networks. The question arises about the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically about the strategies adopted to reconcile the army’s demands for autonomy with the economic asymmetries between the Zapatista cooperatives producing coffee and the European collectives who hold purchasing power. In order to deepen the inquiry into those topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists as well as vendors, and the rest of the network involved in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participatory observation, focus groups, and semi-structured interviews. The analysis did not focus only on retracing the steps of the market chain as if it could be considered a linear and unilateral process; rather, it aimed at exploring actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity and the money circulation they imply aim at changing the system in place and building alternatives, among other things, on the economic level. This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. The latter participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in order to account for a reality within which the imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity through demonizing the “market economy” and its dehumanizing powers.

Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army

Procedia PDF Downloads 53
43 Thermodynamic Modeling of Cryogenic Fuel Tanks with a Model-Based Inverse Method

Authors: Pedro A. Marques, Francisco Monteiro, Alessandra Zumbo, Alessia Simonini, Miguel A. Mendez

Abstract:

Cryogenic fuels such as Liquid Hydrogen (LH₂) must be transported and stored at extremely low temperatures. Without expensive active cooling solutions, preventing fuel boil-off over time is impossible. Hence, one must resort to venting systems at the cost of significant energy and fuel mass loss. These losses increase significantly in propellant tanks installed on vehicles, as the presence of external accelerations induces sloshing. Sloshing increases heat and mass transfer rates and leads to significant pressure oscillations, which might further trigger propellant venting. To make LH₂ economically viable, it is essential to minimize these factors by using advanced control techniques. However, these require accurate modelling and a full understanding of the tank's thermodynamics. The present research aims to implement a simple thermodynamic model capable of predicting the state of a cryogenic fuel tank under different operating conditions (i.e., filling, pressurization, fuel extraction, long-term storage, and sloshing). Since this model relies on a set of closure parameters to drive the system's transient response, it must be calibrated using experimental or numerical data. This work focuses on the former approach, wherein the model is calibrated through an experimental campaign carried out on a reduced-scale model of a cryogenic tank. The thermodynamic model of the system is composed of three control volumes: the ullage, the liquid, and the insulating walls. Under this lumped formulation, the governing equations are derived from energy and mass balances in each region, with mass-averaged properties assigned to each of them. The gas-liquid interface is treated as an infinitesimally thin region across which both phases can exchange mass and heat. This results in a coupled system of ordinary differential equations, which must be closed with heat and mass transfer coefficients between each control volume. These parameters are linked to the system evolution via empirical relations derived from different operating regimes of the tank. The derivation of these relations is carried out using an inverse method to find the optimal relations that allow the model to reproduce the available data. This approach extends classic system identification methods beyond linear dynamical systems via a nonlinear optimization step. Thanks to the data-driven assimilation of the closure problem, the resulting model accurately predicts the evolution of the tank's thermodynamics at a negligible computational cost. The lumped model can thus be easily integrated with other submodels to perform complete system simulations in real time. Moreover, by setting the model in a dimensionless form, a scaling analysis allowed us to relate the tested configurations to a representative full-size tank for naval applications. It was thus possible to compare the relative importance of different transport phenomena between the laboratory model and the full-size prototype among the different operating regimes.
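To make the closure-calibration idea concrete, the following is a minimal sketch of a lumped single-volume energy balance whose heat-transfer coefficients are recovered by nonlinear least squares from synthetic pressure data; all numerical values, the model structure and the data are illustrative assumptions, not the three-volume model or the experimental campaign described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy lumped model: ullage gas heated through the wall and by the liquid interface.
# State: ullage temperature T_u; pressure from the ideal gas law at fixed mass/volume.
# All values are illustrative, not the reduced-scale tank of the experiments.
m_u, V_u, cv, R = 0.05, 0.02, 10.3e3, 4124.0            # kg, m^3, J/(kg K), J/(kg K)
T_wall, T_liq, A_wall, A_int = 120.0, 21.0, 0.3, 0.05   # K, K, m^2, m^2

def rhs(t, y, h_wall, h_int):
    T_u = y[0]
    q = h_wall * A_wall * (T_wall - T_u) + h_int * A_int * (T_liq - T_u)  # W
    return [q / (m_u * cv)]

def pressure(T_u):
    return m_u * R * T_u / V_u  # Pa

def simulate(h_wall, h_int, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [25.0],
                    args=(h_wall, h_int), t_eval=t_eval, rtol=1e-8)
    return pressure(sol.y[0])

# Synthetic "measurements" generated with known closure coefficients plus noise
t_meas = np.linspace(0.0, 600.0, 61)
p_meas = simulate(8.0, 40.0, t_meas) + np.random.default_rng(0).normal(0, 200.0, t_meas.size)

# Inverse step: find the closure coefficients that best reproduce the data
def residuals(theta):
    return simulate(theta[0], theta[1], t_meas) - p_meas

fit = least_squares(residuals, x0=[2.0, 10.0], bounds=([0.1, 0.1], [100.0, 500.0]))
print("recovered closure coefficients:", fit.x)
```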

Keywords: destratification, hydrogen, modeling, pressure-drop, pressurization, sloshing, thermodynamics

Procedia PDF Downloads 63
42 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-caused forces on bluff bodies, e.g., light, flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide-vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, a compact discretization since the vorticity is strongly localized, an implicit treatment of the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper different strategies are presented in order to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to locally control the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extensions to the method are presented. The important benefits of the combination of these methods in terms of accuracy and computational cost are thus presented, along with their relevant applications.
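The O(Np²) cost mentioned above comes from the direct particle-particle evaluation of induced velocities; a minimal sketch of that step for 2D vortex particles, using a regularized point-vortex (Biot-Savart) kernel with illustrative parameters, is given below.

```python
import numpy as np

def induced_velocity(positions, circulations, core_radius=0.05):
    """Direct O(N^2) evaluation of the velocity induced on every particle by all
    others, using a regularized 2D point-vortex (Biot-Savart) kernel."""
    n = positions.shape[0]
    vel = np.zeros_like(positions)
    for i in range(n):
        dx = positions[i, 0] - positions[:, 0]
        dy = positions[i, 1] - positions[:, 1]
        r2 = dx**2 + dy**2 + core_radius**2          # regularization avoids the singularity
        # 2D point-vortex kernel: u = -Gamma*dy/(2*pi*r^2), v = Gamma*dx/(2*pi*r^2)
        vel[i, 0] = np.sum(-circulations * dy / (2.0 * np.pi * r2))
        vel[i, 1] = np.sum(circulations * dx / (2.0 * np.pi * r2))
    return vel

# Illustrative particle set: a small vortex cloud
rng = np.random.default_rng(1)
pos = rng.normal(size=(200, 2)) * 0.2
gamma = rng.normal(size=200) * 0.01

u = induced_velocity(pos, gamma)
print("max induced speed:", np.max(np.hypot(u[:, 0], u[:, 1])))
```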

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 238
41 Destination Management Organization in the Digital Era: A Data Framework to Leverage Collective Intelligence

Authors: Alfredo Fortunato, Carmelofrancesco Origlia, Sara Laurita, Rossella Nicoletti

Abstract:

In the post-pandemic recovery phase of tourism, the role of a Destination Management Organization (DMO) as a coordinated management system of all the elements that make up a destination (attractions, access, marketing, human resources, brand, pricing, etc.) is also becoming relevant for local territories. The objective of a DMO is to maximize the visitor's perception of value and quality while ensuring the competitiveness and sustainability of the destination, as well as the long-term preservation of its natural and cultural assets, and to catalyze benefits for the local economy and residents. In carrying out the multiple functions to which it is called, the DMO can leverage a collective intelligence that comes from the ability to pool information, explicit and tacit knowledge, and relationships of the various stakeholders: policymakers, public managers and officials, entrepreneurs in the tourism supply chain, researchers, data journalists, schools, associations and committees, citizens, etc. The DMO potentially has at its disposal large volumes of data, many of them at low cost, that need to be properly processed to produce value. Based on these assumptions, the paper presents a conceptual framework for building an information system to support the DMO in the intelligent management of a tourist destination, tested in an area of southern Italy. The approach adopted is data-informed and consists of four phases: (1) formulation of the knowledge problem (analysis of policy documents and industry reports; focus groups and co-design with stakeholders; definition of information needs and key questions); (2) research and metadatation of relevant sources (reconnaissance of official sources, administrative archives and internal DMO sources); (3) gap analysis and identification of unconventional information sources (evaluation of traditional sources with respect to the level of consistency with information needs, the freshness of information and the granularity of data; enrichment of the information base by identifying and studying web sources such as Wikipedia, Google Trends, Booking.com, Tripadvisor, websites of accommodation facilities and online newspapers); (4) definition of the set of indicators and construction of the information base (specific definition of indicators and procedures for data acquisition, transformation, and analysis). The resulting framework consists of six thematic areas (accommodation supply, cultural heritage, flows, value, sustainability, and enabling factors), each of which is divided into three domains that capture a specific information need, represented by a set of questions to be answered through the analysis of the available indicators. The framework is characterized by a high degree of flexibility in the European context, given that it can be customized for each destination by adapting the part related to internal sources. Application to the case study led to the creation of a decision support system that allows: • the integration of data from heterogeneous sources, including through the execution of automated web crawling procedures for data ingestion of social and web information; • the reading and interpretation of data and metadata through guided navigation paths in a digital storytelling key; • the implementation of complex analysis capabilities through the use of data mining algorithms, such as for the prediction of tourist flows.
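As a minimal sketch of phase (4), the snippet below turns one unconventional web source into a monthly media-coverage indicator; the URL, the CSS selectors and the indicator definition are placeholders, not the sources or indicators actually adopted by the framework.

```python
import requests
from bs4 import BeautifulSoup
import pandas as pd

def crawl_article_dates(listing_url):
    """Collect titles and publication dates of articles from a listing page.

    The selectors below ("article", "h2", "time") are placeholders that would
    have to be adapted to the real source.
    """
    html = requests.get(listing_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    records = []
    for item in soup.select("article"):
        title = item.select_one("h2")
        date = item.select_one("time")
        if title and date and date.has_attr("datetime"):
            records.append({"title": title.get_text(strip=True),
                            "date": date["datetime"]})
    return pd.DataFrame(records)

def monthly_media_coverage(df):
    """Indicator: number of destination-related articles per month."""
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    return df.dropna(subset=["date"]).groupby(df["date"].dt.to_period("M")).size()

# articles = crawl_article_dates("https://example-local-newspaper.example/tourism")
# print(monthly_media_coverage(articles))
```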

Keywords: collective intelligence, data framework, destination management, smart tourism

Procedia PDF Downloads 94
40 Application of Electrical Resistivity Surveys on Constraining Causes of Highway Pavement Failure along Ajaokuta-Anyigba Road, North Central Nigeria

Authors: Moroof O. Oloruntola, Sunday Oladele, Daniel O. Obasaju, Victor O. Ojekunle, Olateju O. Bayewu, Ganiyu O. Mosuro

Abstract:

Integrated geophysical methods involving Vertical Electrical Sounding (VES) and 2D resistivity surveys were deployed to gain an insight into the influence of the two varying rock types (mica-schist and granite gneiss) underlying the road alignment on the incessant highway failure along Ajaokuta-Anyigba, North-central Nigeria. The highway serves as a link road for the single largest cement factory in Africa (Dangote Cement Factory) and two major ceramic industries to the capital (Abuja) via Lokoja. A 2D Electrical Resistivity survey (Dipole-Dipole array) and Vertical Electrical Sounding (VES) (Schlumberger array) were employed. Twenty-two (22) 2D profiles were occupied: twenty (20) were conducted about 1 m away from the unstable section underlain by mica-schist, with a profile length of approximately 100 m each, and two (2) were conducted about 1 m away from the stable section with a profile length of 100 m each, due to barriers caused by the drainage system and outcropping granite gneiss at the flanks of the road. A spacing of 2 m was used for good image resolution of the near-surface. On each 2D profile, a range of 1-3 VES was conducted; thus, forty-eight (48) soundings were acquired. Partial curve matching and the WinResist software were used to obtain the apparent and true resistivity values of the 1D survey, while the DiprofWin software was used for processing the 2D survey. Two exposed lithologic sections caused by abandoned river channels adjacent to two profiles, as well as the knowledge of the geology of the area, helped to constrain the VES and 2D processing and interpretation. Generally, the resistivity values obtained reflect the parent rock type, degree of weathering, moisture content and competency of the tested area. Resistivity values of < 100; 100 – 950; 1000 – 2000 and > 2500 ohm-m were interpreted as clay, weathered layer, partly weathered layer and fresh basement, respectively. The VES results and 2D resistivity structures along the unstable segment showed similar lithologic characteristics and sequences dominated by a clayey substratum for a depth range of 0 – 42.2 m. The clayey substratum is a product of intensive weathering of the parent rock (mica-schist) and constitutes weak foundation soils, causing highway failure. This failure is further exacerbated by the several heavy-duty trucks which ply the section round the clock, due to the proximity to two major ceramic industries in the state, and by the lack of a drainage system. The two profiles on the stable section show 2D structures that are remarkably different from those of the unstable section, with very thin topsoils, a higher-resistivity weathered substratum (indicating the presence of coarse fragments from the parent rock) and shallow depth to the basement (1.0 – 7.1 m). Also, the presence of drainage and the lower volume of heavy-duty trucks contribute to the pavement stability of this section of the highway. The resistivity surveys effectively delineated two contrasting soil profiles of the subbase/subgrade that reflect the variation in the mineralogy of the underlying parent rocks.
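A small sketch of how the quoted interpretation thresholds can be applied programmatically is given below; only the resistivity cut-offs come from the abstract, while the sample values and the handling of the gaps between the quoted ranges are assumptions.

```python
def classify_resistivity(rho_ohm_m):
    """Map an apparent resistivity value [ohm-m] to the lithology classes used above.

    The cut-offs (<100 clay, 100-950 weathered layer, 1000-2000 partly weathered
    layer, >2500 fresh basement) are taken from the abstract; values falling in
    the gaps between the quoted ranges are flagged for manual interpretation.
    """
    if rho_ohm_m < 100:
        return "clay"
    if rho_ohm_m <= 950:
        return "weathered layer"
    if 1000 <= rho_ohm_m <= 2000:
        return "partly weathered layer"
    if rho_ohm_m > 2500:
        return "fresh basement"
    return "ambiguous - interpret manually"

# Illustrative layer resistivities from a hypothetical VES curve
for rho in (45, 320, 1500, 3200, 2300):
    print(f"{rho:>5} ohm-m -> {classify_resistivity(rho)}")
```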

Keywords: clay, geophysical methods, pavement, resistivity

Procedia PDF Downloads 138
39 Influence of Thermal Annealing on Phase Composition and Structure of Quartz-Sericite Mineral

Authors: Atabaev I. G., Fayziev Sh. A., Irmatova Sh. K.

Abstract:

Raw materials with a high content of potassium oxide are widely used in ceramic technology for preventing or decreasing the deformation of ceramic goods during the drying process and under thermal annealing. Because of its low melting temperature, it is also used to decrease the temperature of thermal annealing during the fabrication of ceramic goods [1,2]. So-called “porcelain or China stones”, i.e. quartz-sericite (muscovite) minerals, can also be used for the prevention of deformation, as the content of potassium oxide in muscovite is rather high (SiO₂ + KAl₂[AlSi₃O₁₀](OH)₂) [3]. To estimate the possibility of using this mineral for ceramic manufacture, the presented article investigates the influence of thermal processing on the phase and chemical content of this raw material. As for other ceramic raw materials (kaolin, white-burning clays), the basic requirements of the industry for the quality of a “porcelain stone” are the following: small particle size, relatively high uniformity of distribution of components and phases, white color after burning, and small content of colorant oxides or chromophores (Fe₂O₃, FeO, TiO₂, etc.) [4,5]. In the presented work, a natural mineral from the Boynaksay deposit (Uzbekistan) is investigated. The samples were mechanically polished for investigation by scanning electron microscopy. Powder with a particle size up to 63 μm was used for X-ray diffractometry and chemical analysis. The annealing of the samples was performed at 900, 1120, and 1350 °C for 1 hour. The chemical composition of the Boynaksay raw material according to chemical analysis is presented in Table 1. For comparison, the compositions of raw materials from Russia and the USA are also presented. In the Boynaksay quartz-sericite, the average proportions of quartz and sericite are 55-60% and 30-35%, respectively. The distribution of the quartz and sericite phases in the raw material was investigated using the «JEOL» JXA-8800R electron probe microanalyzer. In Figure 1, the scanning electron microscope (SEM) micrographs of the surface and the distributions of Al, Si and K atoms in the sample are presented. As can be seen, the fine-grained, white and dense mineral includes quartz, sericite and a small content of impurity minerals. The quartz crystals mostly have sizes from 80 up to 500 μm. Between the quartz crystals, sericite inclusions having a tablet form with a radiant structure are located. The size of the sericite crystals is ~40-250 μm. Using data on interplanar distances [6,7] and the ASTM Powder X-ray Diffraction Data, it is shown that the natural “porcelain stone” quartz-sericite consists of quartz SiO₂, sericite (muscovite type) KAl₂[AlSi₃O₁₀](OH)₂ and kaolinite Al₂O₃·2SiO₂·2H₂O (see Figure 2 and Table 2). As seen in Figure 3 and Table 3a, after annealing at 900 °C the quartz-sericite contains quartz SiO₂ and muscovite KAl₂[AlSi₃O₁₀](OH)₂; the peaks related to kaolinite are absent. After annealing at 1120 °C, the full disintegration of muscovite and the formation of a mullite phase (Al₂O₃·SiO₂) are observed (weak peaks of mullite appear in Figure 3b and Table 3b). After annealing at 1350 °C, the samples contain the crystal phases of quartz and mullite (Figure 3c and Table 3c). It is well known that mullite gives ceramics high density and abrasive and chemical stability. Thus, the obtained experimental data on the formation of various phases during thermal annealing can be used for the development of fabrication technology of advanced materials.
Conclusion: The influence of thermal annealing in the interval 900-1350 °C on the phase composition and structure of the quartz-sericite mineral is investigated. It is shown that during annealing the phase content of the raw material changes. After annealing at 1350 °C, the samples contain the crystal phases of quartz and mullite (which gives ceramics high density and abrasive and chemical stability).

Keywords: quartz-sericite, kaolinite, mullite, thermal processing

Procedia PDF Downloads 381
38 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System

Authors: M. Kadela

Abstract:

The load caused by traffic movement should be transferred in road constructions in a harmless way to the pavement, as follows: − on the stiff upper layers of the structure (e.g. the asphalt layers: abrading and binding), − through the layers of the principal and secondary substructure, − on the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction occurring in the system “road construction – subsoil” should, in such a case, be one of the basic requirements of the assessment of the internal forces of the structure and its durability. Analyses of road constructions are based on: − elements of mechanics, which allow computational models to be created, and − results of experiments included in the criteria of fatigue life analyses. The above approach is a fundamental feature of the commonly used mechanistic methods. They allow arbitrarily complex numerical computational models to be used in the evaluations of the fatigue life of structures. Considering the work of the system “road construction – subsoil”, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, a growth of relatively small deformation is observed in the initial phase; then this increase disappears, and the deformation becomes completely reversible. The reliability of the calculation model is connected with the appropriate use (for a given type of analysis) of constitutive relationships. The phenomena occurring in the initial stage of the system “road construction – subsoil” are unfortunately difficult to interpret in the modeling process. The classic interpretation of the behavior of the material in the elastic-plastic model (e-p) is that the elastic phase of the work (e) passes into the elastic-plastic phase (e-p) as the load increases (or as deformation grows in the damaged structure). The paper presents the essence of the calibration process of the cooperating subsystem in the calculation model of the system “road construction – subsoil”, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on its deformation and stress response. The proper comparative base for assessing the reliability of the created models should, however, be the actual, monitored system “road construction – subsoil”. The paper also presents the behavior of subsoil under cyclic load transmitted by pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, being a part of the test road near Katowice, in Poland. A different behavior of the homogeneous subsoil under the pavement is observed for different seasons of the year, when the pavement construction works as a flexible structure in summer and as a rigid plate in winter.
Although the observed character of the subsoil response is the same regardless of the applied load and area values, this response can be divided into: − a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and − a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013), financed by The National Centre for Research and Development within the LIDER Programme.
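As a generic illustration of the elastic to elastic-plastic (e to e-p) transition discussed above (not the calibrated constitutive model of the paper), the following sketch implements a 1D elastic-perfectly-plastic stress update with illustrative modulus and yield values.

```python
import numpy as np

def elastic_perfectly_plastic(strain_history, E=80e6, sigma_y=60e3):
    """1D return-mapping stress update for an elastic-perfectly-plastic material.

    E is the elastic modulus [Pa], sigma_y the yield stress [Pa] (both illustrative).
    Below yield the response is purely elastic (e); once the trial stress exceeds
    the yield stress, the model switches to the elastic-plastic regime (e-p).
    Returns the stress history and the accumulated plastic strain.
    """
    stress, plastic_strain, prev_eps = 0.0, 0.0, 0.0
    stresses = []
    for eps in strain_history:
        trial = stress + E * (eps - prev_eps)        # elastic predictor
        if abs(trial) > sigma_y:                     # plastic corrector
            sign = np.sign(trial)
            plastic_strain += sign * (abs(trial) - sigma_y) / E
            trial = sign * sigma_y
        stress, prev_eps = trial, eps
        stresses.append(stress)
    return np.array(stresses), plastic_strain

# Monotonic loading beyond the yield point: the stress is capped at sigma_y while
# an irreversible (plastic) strain accumulates, mirroring the initial load cycles
# on the subsoil before the response becomes essentially reversible.
eps_load = np.linspace(0.0, 2.0e-3, 100)
sigma, eps_p = elastic_perfectly_plastic(eps_load)
print(f"peak stress: {sigma.max()/1e3:.1f} kPa, accumulated plastic strain: {eps_p:.2e}")
```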

Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system

Procedia PDF Downloads 321
37 Modeling Competition Between Subpopulations with Variable DNA Content in Resource-Limited Microenvironments

Authors: Parag Katira, Frederika Rentzeperis, Zuzanna Nowicka, Giada Fiandaca, Thomas Veith, Jack Farinhas, Noemi Andor

Abstract:

Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is in the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low, and competition for nutrients is comparatively weak – an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increasing amounts of DNA, RNA, and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT, and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide population-average estimates of Oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In-vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences Glucose- and Oxygen-dependent proliferation, death and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state-space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, Oxygen, phosphate, glucose, vasculature, dead cells, migrating cells and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Well-established Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. We performed whole genome sequencing (WGS) of four surgical specimens collected during the 1ˢᵗ and 2ⁿᵈ surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changes between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and registered to MNI space. The S3MB domain was initiated from 4 mm³ voxels of the MNI space. T1-post and T2-FLAIR scans acquired after the 1ˢᵗ surgery informed tumor cell densities per voxel. Magnetic Resonance Elastography scans and PET/CT scans informed stiffness and Glucose access per voxel. We performed a parameter search to recapitulate the GBM’s tumor cell density and ploidy composition before the 2ⁿᵈ surgery. Results suggest that the high-ploidy subpopulation had a higher Glucose-dependent proliferation rate (0.70 vs. 0.49), but a lower Glucose-dependent death rate (0.47 vs. 1.42). These differences resulted in spatial differences in the distribution of the two subpopulations. Our results contribute to a better understanding of how genomics and microenvironments interact to shape cell fate decisions and could help pave the way to therapeutic strategies that mimic prognostically favorable environments.
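A minimal, well-mixed sketch of the competition idea is given below, using the proliferation and death coefficients quoted above (0.70/0.47 for the high-ploidy clone, 0.49/1.42 for the low-ploidy clone); the Monod-type glucose dependence, the glucose dynamics and the carrying capacity are illustrative assumptions, not the voxel-based S3MB model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy competition between a low-ploidy and a high-ploidy subpopulation sharing a
# glucose pool. Only the prolif/death coefficients come from the abstract; the
# functional forms and the remaining parameters are assumptions.
params = {
    "low":  {"prolif": 0.49, "death": 1.42},
    "high": {"prolif": 0.70, "death": 0.47},
}
K_g = 0.5        # glucose level of half-maximal proliferation (assumed)
supply = 0.3     # glucose resupply rate (assumed)
uptake = 0.05    # glucose consumed per unit of proliferation (assumed)
capacity = 1.0   # shared carrying capacity (assumed)

def rhs(t, y):
    n_low, n_high, g = y
    crowd = max(0.0, 1.0 - (n_low + n_high) / capacity)   # space limitation
    monod = g / (K_g + g)                                  # glucose dependence
    grow_low = params["low"]["prolif"] * monod * crowd * n_low
    grow_high = params["high"]["prolif"] * monod * crowd * n_high
    die_low = params["low"]["death"] * (1.0 - monod) * n_low
    die_high = params["high"]["death"] * (1.0 - monod) * n_high
    dg = supply - uptake * (grow_low + grow_high) - 0.1 * g
    return [grow_low - die_low, grow_high - die_high, dg]

sol = solve_ivp(rhs, (0.0, 60.0), [0.05, 0.05, 1.0], max_step=0.1)
n_low, n_high = sol.y[0, -1], sol.y[1, -1]
print(f"final fractions: low-ploidy {n_low/(n_low+n_high):.2f}, "
      f"high-ploidy {n_high/(n_low+n_high):.2f}")
```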

Keywords: tumor evolution, intra-tumor heterogeneity, whole-genome doubling, mathematical modeling

Procedia PDF Downloads 43
36 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage

Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti

Abstract:

Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy request and supply. The most widely used materials for TES are organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range, and have a tunable working temperature. However, they suffer from low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the building industry, solar thermal energy collection, and thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the spread of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature-sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin at a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45 °C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. These were added in various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique, by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally, and mechanically. Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize also in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the thermal conductivity through the thickness increased with increasing PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates through a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more noticeable decrease of the stress and strain at break and of the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that these effects could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrate the feasibility of multifunctional structural TES composites and highlight that the PCM size and distribution affect the mechanical properties. In this perspective, the group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and the current activity focuses on the optimization of the synthesis parameters to increase the emulsion efficiency.
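
The reported near-proportionality between melting enthalpy and paraffin weight fraction follows a simple rule of mixtures, sketched below; the enthalpy assumed for the neat paraffin/CNT blend is a placeholder value, not a figure from the study.

```python
# Back-of-the-envelope rule of mixtures for the latent heat of the matrix
# (assumption: dH_blend is a placeholder for the neat paraffin/CNT blend enthalpy).
dH_blend = 150.0                     # J/g, assumed melting enthalpy of the paraffin/CNT blend
for w_pcm in (0.20, 0.30, 0.40):     # PCM weight fractions studied
    dH_expected = w_pcm * dH_blend   # expected enthalpy contribution per gram of filled matrix
    print(f"{w_pcm:.0%} PCM -> expected dH ~ {dH_expected:.0f} J/g")
```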

Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage

Procedia PDF Downloads 122
35 The Development of User Behavior in Urban Regeneration Areas by Utilizing the Floating Population Data

Authors: Jung-Hun Cho, Tae-Heon Moon, Sun-Young Heo

Abstract:

Many urban problems caused by urbanization and industrialization have occurred around the world. In particular, the creation of satellite towns, driven by rapid urban expansion, has led to traffic problems and the hollowing-out of old town centers, raising the necessity of urban regeneration in old towns along with the aging of existing urban infrastructure. To select urban regeneration priority regions for the strategic execution of urban regeneration in Korea, population, the number of businesses, and the degree of deterioration were chosen as criteria. These existing criteria are limited in their ability to address urban problems fundamentally and to keep pace with rapidly changing conditions. Therefore, it was necessary to add new indicators that can reflect the decline of the relevant cities and their conditions. In this regard, this study selected Busan Metropolitan City, Korea, as the target area; as an international port city, it is a leading city where urban regeneration has been active, much like Yokohama, Japan. Prior to setting the urban regeneration priority regions, real conditions should be reflected, because uniform and uncharacterized projects have been implemented without a quantitative analysis of population behavior within the regions. For this reason, this study conducted a characterization analysis and type classification based on user behaviors, using floating-population big data, which has recently become a major topic across society. The target areas were analyzed in this study. While the 23 regions of the existing Busan Metropolitan City urban regeneration priority regions had been classified into three types, the type classification on the basis of user behaviors classified the same 23 regions into four types. The four types were as follows: Type I, young people, morning type; Type II, old and middle-aged, general commuting type with a pronounced floating population; Type III, old and middle-aged, 24-hour type; and Type IV, old and middle-aged with little floating population. Each of the four types showed distinct regional characteristics, and the results based on user behaviors differed from those of the existing urban regeneration priority regions. According to the results, in Type I young people were the majority around the existing old built-up area, where the floating population at dawn was four times higher than in other areas. In Type II, there were many old and middle-aged people around the existing built-up area and general neighborhoods, where the average floating population was higher than in other areas due to commuting, while in Type III there was no change in the floating population throughout the 24 hours, although old and middle-aged people were numerous around the existing general neighborhoods. Type IV includes the existing economy-based type, central built-up area type, and general neighborhood type, where old and middle-aged people were the majority, as a general commuting type with little floating population. Unlike the existing urban regeneration priority regions, these regions were thus subdivided by type, and in this study approach methods and basic orientations of urban regeneration were set for each type to reflect reality to a certain degree, including indicators of effective floating population that identify the dynamic activity of urban areas and the existing regeneration priority areas in connection with regional urban regeneration projects. It therefore becomes possible to make effective urban plans by providing a substantial basis built on scientific and quantitative data. To induce more realistic and effective regeneration projects, projects tailored to present local conditions should be developed by reflecting those conditions in the formulation of urban regeneration strategic plans.
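
The abstract does not state which algorithm produced the four-type classification; as a hedged illustration, clustering hourly floating-population profiles (here k-means with k = 4 on synthetic data) is one way such a behavioral typing can be computed.

```python
# Illustrative clustering of regions by hourly floating-population profiles
# (assumption: the paper's actual method is not stated; k-means with k=4 and the
#  synthetic data layout below are for demonstration only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = 23 regions, columns = 24 hourly floating-population counts (synthetic)
profiles = rng.poisson(lam=200, size=(23, 24)).astype(float)

X = StandardScaler().fit_transform(profiles)   # normalize each hourly feature
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for region, t in enumerate(labels):
    print(f"region {region:02d} -> type {t + 1}")
```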

Keywords: floating population, big data, urban regeneration, urban regeneration priority region, type classification

Procedia PDF Downloads 179
34 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment

Authors: Loyd R. Hook, Maryam Moharek

Abstract:

With the envisioned future growth of low altitude urban aircraft operations for airborne delivery service and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to the future vision in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high-density operations, will be required to coordinate multiple vehicles in the highly constrained low altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflict and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by the vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts. Performing the search for a set of commands which can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan could contain multiple segments. This means that relative velocities will change when any aircraft reaches a waypoint and changes course. Additionally, the timing of when that aircraft will reach a waypoint (or, more directly, the order in which all of the aircraft will reach their respective waypoints) will change with the commanded speed. Put all together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem – a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only tractable for pairwise interactions. For more complicated scenarios, including the proposed 10-vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
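
A minimal sketch of the discretized approach described above: each aircraft is assigned one of a few candidate speed commands, pairwise separation is checked by sampling the resulting trajectories, and a depth-first search stops at the first conflict-free assignment. The geometry, separation minimum, and speed grid are illustrative assumptions, not the paper's parameters.

```python
# Depth-first search with early stopping over discretized speed commands
# (assumptions: straight-line legs between waypoints, a 50 m separation minimum,
#  and the candidate speeds below are illustrative, not the paper's parameters).
import itertools
import numpy as np

SEP_MIN = 50.0                       # required separation [m] (assumed)
SPEED_OPTIONS = (20.0, 25.0, 30.0)   # candidate speed commands [m/s] (assumed)

def position(path, speed, t):
    """Point along a piecewise-linear path (list of (x, y) waypoints) at time t."""
    dist = speed * t
    for a, b in zip(path, path[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        seg = np.linalg.norm(b - a)
        if dist <= seg:
            return a + (b - a) * (dist / seg)
        dist -= seg
    return np.asarray(path[-1], float)        # hold at the final waypoint

def conflict_free(paths, speeds, horizon=60.0, dt=0.5):
    """Sample all trajectories and check every aircraft pair for loss of separation."""
    for t in np.arange(0.0, horizon, dt):
        pts = [position(p, s, t) for p, s in zip(paths, speeds)]
        for i, j in itertools.combinations(range(len(pts)), 2):
            if np.linalg.norm(pts[i] - pts[j]) < SEP_MIN:
                return False
    return True

def deconflict(paths, chosen=()):
    """Depth-first assignment of speed commands; returns the first feasible set found."""
    if len(chosen) == len(paths):
        return chosen
    for s in SPEED_OPTIONS:
        trial = chosen + (s,)
        # early stopping: prune as soon as the already-assigned subset conflicts
        if conflict_free(paths[:len(trial)], trial):
            result = deconflict(paths, trial)
            if result is not None:
                return result
    return None

paths = [[(0, 0), (1000, 0)], [(500, -500), (500, 500)]]   # two crossing routes [m]
print("speed commands [m/s]:", deconflict(paths))
```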

Keywords: strategic planning, autonomous, aircraft, deconfliction

Procedia PDF Downloads 68
33 Coil-Over Shock Absorbers Compared to Inherent Material Damping

Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major

Abstract:

Damping accompanies us daily in everyday life and is used to protect (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be classified as active, passive, or semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in the industry, and, when needed, they can be easily adjusted during their product lifetime. In contrast, dampers made of – ideally – a single material are more resource efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively. Five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact loading was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results show that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent. This is a limitation on the applicability of viscous material dampers. Feasible fields of application may be in micromobility, such as bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
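
As a reminder of how inherent material damping is quantified from DTMA-type measurements, the loss factor is simply tan(delta) = E''/E'; the modulus values below are placeholders, not data from the study.

```python
# Loss factor from dynamic (thermo)mechanical analysis: tan(delta) = E'' / E'.
# The storage/loss modulus values below are placeholders, not data from this work.
storage_E = {"TPU": 25.0, "TPU+Fe": 40.0, "PDMS blend": 2.0}   # E' in MPa (assumed)
loss_E    = {"TPU": 3.5,  "TPU+Fe": 6.0,  "PDMS blend": 0.4}   # E'' in MPa (assumed)

for name in storage_E:
    tan_delta = loss_E[name] / storage_E[name]
    print(f"{name:10s}  tan(delta) = {tan_delta:.3f}")
```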

Keywords: damper structures, material damping, PDMS, TPU

Procedia PDF Downloads 89
32 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon but no laser retroreflectors on Mars. Here we describe the design, construction, qualification, and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod laser retro-reflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and of new gravitational physics: possible changes of the gravitational constant (Ġ/G), the weak and strong equivalence principle, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed for testing extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon. In fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, the CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station ranging accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget because, owing to lunar librations, their multi-CCR geometry spreads the return pulse. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity high-accuracy test), which is unaffected by librations and supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers, and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France, and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars Lander in August 2017 (launch scheduled in May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will extend significantly the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.
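
The time-of-flight relation behind laser ranging is d = c·Δt/2; the worked example below uses an illustrative round-trip time of the order of an Earth-Moon shot, not a measurement cited in the abstract.

```python
# Two-way time-of-flight ranging: distance = c * round_trip_time / 2.
# The round-trip time is an illustrative value, roughly that of an Earth-Moon shot.
c = 299_792_458.0            # speed of light [m/s]
round_trip = 2.51            # [s], assumed example round-trip time of the pulse
distance = c * round_trip / 2.0
print(f"one-way range ~ {distance / 1e3:,.0f} km")
```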

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 242
31 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

Mechanics of unsaturated soils has been an active field of research in the last decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g. instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil might experience a reversal in the volumetric behaviour during wetting. Shear strength of soils also changes dramatically with changes in the degree of saturation, and a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, as well as the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure as the elastoplastic laws proposed to deal with bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced by the incorporation of density-dependent stiffness and hardening coefficients, which allows the pycnotropic behaviour of granular materials to be modelled with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate it and illustrate the capability of the newly developed formulation. After the validation, the effectiveness of the viscoplastic model is demonstrated by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on slope stability is discussed.
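
The abstract does not report the constitutive equations; purely as an illustration of the kind of elasto-viscoplastic extension described, a Perzyna-type overstress law with suction-dependent hardening can be written as below (every symbol here is an assumption of the editor, not the authors' notation).

```latex
% Generic Perzyna-type overstress law and a suction-dependent hardening rule,
% shown only to illustrate the class of model described in the abstract.
\begin{equation}
  \dot{\boldsymbol{\varepsilon}}^{\mathrm{vp}}
    = \gamma \left\langle \frac{f(\boldsymbol{\sigma}', s, p_c)}{f_0} \right\rangle^{N}
      \frac{\partial g}{\partial \boldsymbol{\sigma}'},
  \qquad
  \dot{p}_c = \frac{v\, p_c}{\lambda(s) - \kappa}\, \dot{\varepsilon}^{\mathrm{vp}}_{v}
\end{equation}
```

Here γ is a fluidity parameter, ⟨·⟩ are Macaulay brackets, f is a yield function of the effective stress σ′, the suction s and the preconsolidation pressure p_c, g is the plastic potential, N is an overstress exponent, and the hardening rule links p_c to the volumetric viscoplastic strain rate through a suction-dependent compressibility λ(s), an elastic compressibility κ and the specific volume v.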

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 194
30 The Return of the Rejected Kings: A Comparative Study of Governance and Procedures of Standards Development Organizations under the Theory of Private Ordering

Authors: Olia Kanevskaia

Abstract:

Standardization has been in the limelight of numerous academic studies. Typically described as ‘any set of technical specifications that either provides or is intended to provide a common design for a product or process’, standards not only set quality benchmarks for products and services but also spur competition and innovation, resulting in advantages for manufacturers and consumers. Their contribution to globalization and technological advancement is especially crucial in the Information and Communication Technology (ICT) and telecommunications sector, which is also characterized by weaker state regulation and expert-based rule-making. Most of the standards developed in that area are interoperability standards, which allow technological devices to establish ‘invisible communications’ and to ensure their compatibility and proper functioning. This type of standard supports a large share of our daily activities, ranging from traffic coordination by traffic lights to the connection to Wi-Fi networks, transmission of data via Bluetooth or USB, and building the network architecture for the Internet of Things (IoT). A large share of ICT standards is developed in specialized voluntary platforms, commonly referred to as Standards Development Organizations (SDOs), which gather experts from various industry sectors, private enterprises, governmental agencies, and academia. The institutional architecture of these bodies can vary from semi-public bodies, such as the European Telecommunications Standards Institute (ETSI), to industry-driven consortia, such as the Internet Engineering Task Force (IETF). The past decades witnessed a significant shift of standard setting to those institutions: while operating independently from state regulation, they offer a rather informal setting, which enables fast-paced standardization and places the technical supremacy and flexibility of standards above other considerations. Although technical norms and specifications developed by such nongovernmental platforms are not binding, they appear to create significant regulatory impact. In the United States (US), private voluntary standards can be used by regulators to achieve their policy objectives; in the European Union (EU), compliance with harmonized standards developed by the voluntary European Standards Organizations (ESOs) can grant a product a free-movement pass. Moreover, standards can de facto manage the functioning of the market when other regulatory alternatives are not available. Hence, by establishing (potentially) mandatory norms, SDOs assume regulatory functions commonly exercised by States and shape their own legal order. The purpose of this paper is threefold. First, it attempts to shed some light on SDOs' institutional architecture, focusing on private, industry-driven platforms and comparing their regulatory frameworks with those of formal organizations. Drawing upon the relevant scholarship, the paper then discusses the extent to which the formulation of technological standards within SDOs constitutes a private legal order operating in the shadow of governmental regulation. Ultimately, this contribution seeks to assess whether state intervention in industry-driven standard setting is desirable, and whether the increasing regulatory importance of SDOs should be addressed in legislation on standardization.

Keywords: private order, standardization, standard-setting organizations, transnational law

Procedia PDF Downloads 122
29 Preliminary Characterization of Hericium Species Sampled in Tuscany, Italy

Authors: V. Cesaroni, C. Girometta, A. Bernicchia, M. Brusoni, F. Corana, R. M. Baiguera, C. M. Cusaro, M. L. Guglielminetti, B. Mannucci, H. Kawagishi, C. Perini, A. M. Picco, P. Rossi, E. Salerni, E. Savino

Abstract:

Fungi of the genus Hericium contain various compounds with antibacterial activity, cytotoxic effects on cancer cells, and other bioactive molecules. Some of the active metabolites stimulate the synthesis of Nerve Growth Factor (NGF). Recently, the effect of a dietary supplement based on Hericium erinaceus on recognition memory and on hippocampal mossy fiber-CA3 neurotransmission was published. The aim of this study was to investigate the presence of Hericium species on Italian territory in order to isolate the strains for further studies and applications. The first step was to collect Hericium sporophores in Tuscany: H. alpestre Pers., H. coralloides (Scop.) Pers. and H. erinaceus (Bull.) Pers. were the species present. The strains of H. alpestre (H.a.1), H. coralloides (H.c.1) and H. erinaceus (H.e.1 & H.e.2) have been isolated in pure culture and preserved in the collection of the University of Pavia (MicUNIPV). The DNA sequences obtained from the strains were compared to other sequences found in international databases. It was therefore possible to construct a phylogenetic tree that highlights the clear separation in clades of the sequences and the molecular identification of our strains with the Hericium species considered. The second step was to cultivate H. erinaceus indoors and outdoors in order to obtain as many sporophores as possible for further chemical analysis. All the procedures for H. erinaceus cultivation were followed. Among the available recipes for indoor H. erinaceus cultivation, a substrate formulation containing 70% oak sawdust, 20% rice bran, 10% wheat straw, 1% CaCO3 and 1% sucrose was used. To obtain lyophilized mycelium and the respective culture broth, 4 small pieces (about 5 mm²) of the H.e.1 or H.c.1 strains, taken from the margin of growing cultures (MEA), were inoculated into 1 liter of 2% ME (malt extract, Biokar Diagnostics); the static liquid cultures were kept at 24 °C in a dark chamber, the fungi were grown for one month, and 10 replicates for each strain were done. The bioactive compounds present in the mycelia and in the sporophores of H. erinaceus were chemically analyzed in collaboration with the Centro Grandi Strumenti of the University of Pavia using high-performance liquid chromatography/electrospray ionization tandem mass spectrometry (HPLC/ESI-MS/MS). The materials to be analyzed were previously freeze-dried and then extracted with an alcoholic procedure. Preliminary chromatographic analysis revealed the presence of potentially bioactive and structurally different secondary metabolites such as polysaccharides, erinacines, hericenones, steroids and other terpenoids. Hericenones C and D (in sporophores) and erinacine A (in mycelium) were identified by comparison with the respective standards. These molecules are known to have effects on cells of the Central Nervous System (CNS), which is the main focus of our studies. Thanks to the high sensitivity in the detection of the bioactive compounds of H. erinaceus, it will be possible to use the proposed method as an analytical screening protocol to determine the optimal growth conditions of the fungus and to improve the production chain of H. erinaceus. These results encourage carrying out chemical analyses also on H. alpestre and H. coralloides in order to evaluate the presence of bioactive compounds in these two species.

Keywords: Hericium species, Hericium erinaceus bioactive compounds, medicinal mushrooms, mushroom cultivation

Procedia PDF Downloads 114
28 Phytochemical Profile and in Vitro Bioactivity Studies on Two Underutilized Vegetables in Nigeria

Authors: Borokini Funmilayo Boede

Abstract:

Basella alba L., commonly called ‘Amunututu’, and Solanecio biafrae, called ‘Worowo’ among the Yoruba tribe in the southwest part of Nigeria, are reported to be of great ethnomedicinal importance but are among the many underutilized green leafy vegetables in the country. Many studies have established the nutritional values of these vegetables, yet utilization is very poor and in-depth information on their chemical profiles is scarce. The aqueous, methanolic and ethanolic extracts of these vegetables were subjected to phytochemical screening, and the phenolic profiles of the alcoholic extracts were characterized by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). Total phenol and flavonoid contents were determined, and antioxidant activities were evaluated using five in vitro assays to assess DPPH, nitric oxide and hydroxyl radical-scavenging abilities, as well as reducing power with the ferric reducing antioxidant assay and the phosphomolybdate method. The antibacterial activities of the extracts against Staphylococcus aureus, Pseudomonas aeruginosa, and Salmonella typhi were evaluated by the agar well diffusion method, and the antifungal activity was evaluated against food-associated filamentous fungi by the poisoned food technique, with the aim of assessing their nutraceutical potentials to encourage their production and utilization. The results revealed the presence of saponins, steroids, tannins, terpenoids and flavonoids as well as phenolic compounds: gallic acid, chlorogenic acid, caffeic acid, coumarin, rutin, quercitrin, quercetin and kaempferol. The vegetables showed varying concentration-dependent reducing and radical-scavenging abilities, from weak to strong, compared with the gallic acid, rutin, trolox and ascorbic acid used as positive controls. The aqueous extracts, which gave higher concentrations of total phenol, displayed a higher ability to reduce Fe(III) to Fe(II) and stronger inhibiting power against the hydroxyl radical than the alcoholic extracts, and in most cases exhibited more potency than the ascorbic acid used as positive control at the same concentrations, whereas the methanol and/or ethanol extracts were found to be more effective in scavenging the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical and showed a higher ability to reduce Mo(VI) to Mo(V) in the total antioxidant assay than the aqueous extracts. However, the inhibition abilities of all the extracts against nitric oxide were comparable with the ascorbic acid control at the same concentrations. There were strong positive correlations with total phenol (mg GAE/g) and total flavonoid (mg RE/g) contents, in the ranges TFC (r = 0.857-0.999 and r = 0.904-1.000) and TPC (r = 0.844-0.992 and r = 0.900-0.999) for Basella alba and Solanecio biafrae, respectively. The inhibition concentration at 50% (IC50) for each extract to scavenge DPPH, OH and NO radicals ranged from 1.52 to 32.73 mg/ml, compared with 0.846 to 6.42 mg/ml for the controls. At 0.05 g/ml, the vegetables were found to exhibit mild antibacterial activities against Staphylococcus aureus, Pseudomonas aeruginosa and Salmonella typhi compared with the streptomycin sulphate control, but appreciable antifungal activities against Trichoderma rubrum and Aspergillus fumigatus compared with a Bonlate positive control. The vegetables possess appreciable antioxidant and antimicrobial properties for promoting good health; their cultivation and utilization should be encouraged, especially in the face of increasing health and economic challenges and food insecurity in many parts of the world.
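
Radical-scavenging percentages and IC50 values of the kind reported are typically computed from absorbance readings as sketched below; all concentrations and absorbances in the example are placeholders, not data from this study.

```python
# DPPH scavenging: % inhibition = (A_control - A_sample) / A_control * 100,
# with IC50 obtained by interpolating the dose-response curve.
# All absorbance values and concentrations below are placeholders, not study data.
import numpy as np

A_control = 0.95
conc   = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # extract concentration, mg/ml
A_samp = np.array([0.80, 0.68, 0.52, 0.34, 0.20])    # absorbance with extract

inhibition = (A_control - A_samp) / A_control * 100.0
ic50 = np.interp(50.0, inhibition, conc)              # valid because inhibition is increasing
for c, i in zip(conc, inhibition):
    print(f"{c:4.2f} mg/ml -> {i:5.1f} % inhibition")
print(f"IC50 ~ {ic50:.2f} mg/ml")
```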

Keywords: antimicrobial, antioxidants, extracts, phytochemicals

Procedia PDF Downloads 289
27 A Multivariate Exploratory Data Analysis of a Crisis Text Messaging Service in Order to Analyse the Impact of the COVID-19 Pandemic on Mental Health in Ireland

Authors: Hamda Ajmal, Karen Young, Ruth Melia, John Bogue, Mary O'Sullivan, Jim Duggan, Hannah Wood

Abstract:

The Covid-19 pandemic led to a range of public health mitigation strategies in order to suppress the SARS-CoV-2 virus. The drastic changes in everyday life due to lockdowns had the potential for a significant negative impact on public mental health, and a key public health goal is now to assess the evidence from available Irish datasets to provide useful insights on this issue. Text-50808 is an online text-based mental health support service, established in Ireland in 2020, and can provide a measure of revealed distress and mental health concerns across the population. The aim of this study is to explore statistical associations between public mental health in Ireland and the Covid-19 pandemic. Uniquely, this study combines two measures of emotional wellbeing in Ireland: (1) weekly text volume at Text-50808, and (2) emotional wellbeing indicators reported by respondents of the Amárach public opinion survey, carried out on behalf of the Department of Health, Ireland. For this analysis, a multivariate graphical exploratory data analysis (EDA) was performed on the Text-50808 dataset dated from 15th June 2020 to 30th June 2021. This was followed by time-series analysis of key mental health indicators, including: (1) the percentage of daily/weekly texts at Text-50808 that mention Covid-19 related issues; (2) the weekly percentage of people experiencing anxiety, boredom, enjoyment, happiness, worry, fear and stress in the Amárach survey; and Covid-19 related factors: (3) daily new Covid-19 case numbers; and (4) the daily stringency index capturing the effect of government non-pharmaceutical interventions (NPIs) in Ireland. The cross-correlation function was applied to measure the relationship between the different time series. EDA of the Text-50808 dataset reveals significant peaks in the volume of texts on days prior to the level 3 lockdown and level 5 lockdown in October 2020, and the full level 5 lockdown in December 2020. A significantly high positive correlation was observed between the percentage of texts at Text-50808 that reported Covid-19 related issues and the percentage of respondents experiencing anxiety, worry and boredom (at a lag of 1 week) in the Amárach survey data. There is a significant negative correlation between the percentage of texts with Covid-19 related issues and the percentage of respondents experiencing happiness in the Amárach survey. The daily percentage of texts at Text-50808 that reported Covid-19 related issues had a weak positive correlation with daily new Covid-19 cases in Ireland at a lag of 10 days and with the daily stringency index of NPIs in Ireland at a lag of 2 days. The sudden peaks in text volume at Text-50808 immediately prior to new restrictions in Ireland indicate a rise in mental health concerns following the announcement of new restrictions. There is also a high correlation between emotional wellbeing variables in the Amárach dataset and the number of weekly texts at Text-50808, and this confirms that Text-50808 reflects overall public sentiment. This analysis confirms the benefits of the texting service as a community surveillance tool for mental health in the population. This initial EDA will be extended to use multivariate modelling to predict the effect of additional Covid-19 related factors on public mental health in Ireland.
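
A hedged sketch of the lagged cross-correlation analysis described above; the two weekly series below are synthetic, not Text-50808 or Amárach data.

```python
# Lagged cross-correlation between two weekly series, in the spirit of the
# analysis described (the data are synthetic; the 1-week lead is built in on purpose).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
weeks = 54
anxiety = pd.Series(rng.normal(30, 5, weeks))                   # % reporting anxiety (synthetic)
texts = anxiety.shift(1).fillna(30) + rng.normal(0, 2, weeks)   # text volume lags anxiety by 1 week

def cross_corr(x, y, lag):
    """Pearson correlation of x with y shifted forward by `lag` weeks."""
    return x.corr(y.shift(lag))

for lag in range(0, 4):
    print(f"lag {lag} week(s): r = {cross_corr(texts, anxiety, lag):+.2f}")
```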

Keywords: COVID-19 pandemic, data analysis, digital health, mental health, public health

Procedia PDF Downloads 108
26 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment

Authors: F. Uriel, M. M. Fernandez Liporace

Abstract:

In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, given the high rates of adjustment problems, academic failure and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed within a larger research project, several studies analysed multiple samples totaling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables (perceived social support, learning styles, learning strategies, and academic motivation) by age, gender, and degree of academic advance (freshmen versus sophomores). Thus, an inferential group-differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, aimed at examining associations between the four psychological variables on the one hand and academic achievement on the other, was addressed by correlational studies calculating Pearson's coefficients, employing grades as the quantitative indicator of academic achievement. The positive and significant results that were obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance within the first goal's findings. The most relevant results were: First, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes. Despite these features, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fifth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employed a higher number of more efficient learning strategies. Sixth, freshmen low achievers lacked intrinsic motivation. Seventh, model testing showed that social support, learning styles and academic motivation influence learning strategies, which in turn affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings led to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and better study programs, syllabi and classes, as well as tutoring and training systems. Such developments should be targeted at the support and empowerment of students in their academic pathways, and therefore at upgrading learning quality, especially in the case of freshmen, male freshmen, and low achievers.
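
A hedged sketch of the kind of regression model tested in the study; the variable names, data, and effect sizes below are hypothetical and only illustrate the general setup of regressing grades on the four psychological predictors plus gender and academic advance.

```python
# Illustrative regression of grades on the psychological predictors plus gender
# and academic advance (all columns and data are synthetic, not the study's instruments).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "social_support":      rng.normal(0, 1, n),
    "learning_styles":     rng.normal(0, 1, n),
    "learning_strategies": rng.normal(0, 1, n),
    "academic_motivation": rng.normal(0, 1, n),
    "male":      rng.integers(0, 2, n),
    "sophomore": rng.integers(0, 2, n),
})
# synthetic outcome: strategies and motivation carry most of the assumed effect
df["grades"] = (6 + 0.4 * df.learning_strategies + 0.2 * df.academic_motivation
                + rng.normal(0, 1, n))

X = sm.add_constant(df.drop(columns="grades"))
model = sm.OLS(df["grades"], X).fit()
print(model.summary().tables[1])
```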

Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support

Procedia PDF Downloads 93
25 Equity and Inclusivity in Sustainable Urban Planning: Addressing Social Disparities in Eco-City Development

Authors: Olayeye Olubunmi Shola

Abstract:

Amidst increasing global environmental concerns, sustainable urban planning has emerged as a vital strategy in counteracting the negative impacts of urbanization on the environment. However, the emphasis on sustainability often disregards crucial elements of fairness and equal participation within urban settings. This abstract presents a comprehensive overview of the challenges, objectives, significance, and methodologies for addressing social inequalities in the development of eco-cities, with a specific focus on Abuja, Nigeria. Sustainable urban planning, particularly in the context of developing eco-cities, aims to construct cities prioritizing environmental sustainability and resilience. Nonetheless, a significant gap exists in addressing the enduring social disparities within these initiatives. Equitable distribution of resources, access to services, and social inclusivity are essential components that must be integrated into urban planning frameworks for cities that are genuinely sustainable and habitable. Abuja, the capital city of Nigeria, provides a distinctive case for examining the intersection of sustainability and social justice in urban planning. Despite the urban development, Abuja grapples with challenges such as socio-economic disparities, unequal access to essential services, and inadequate housing among its residents. Recognizing and redressing these disparities within the framework of eco-city development is critical for nurturing an inclusive and sustainable urban environment. The primary aim of this study is to scrutinize and pinpoint the social discrepancies within Abuja's initiatives for eco-city development. Specific objectives include: Evaluating the current socio-economic landscape of Abuja to identify disparities in resource, service, and infrastructure access. Comprehending the existing sustainable urban planning initiatives and their influence on social fairness. Suggesting strategies and recommendations to improve fairness and inclusivity within Abuja's plans for eco-city development. This research holds substantial importance for urban planning practices and policy formulation, not only in Abuja but also on a global scale. By highlighting the crucial role of social equity and inclusivity in the development of eco-cities, this study aims to provide insights that can steer more comprehensive, people-centered urban planning practices. Addressing social disparities within sustainability initiatives is crucial for achieving genuinely sustainable and fair urban spaces. The study will employ qualitative and quantitative methodologies. Data collection will involve surveys, interviews, and observations to capture the diverse experiences and perspectives of various social groups within Abuja. Furthermore, quantitative data on infrastructure, service access, and socio-economic indicators will be collated from government reports, academic sources, and non-governmental organizations. Analytical tools such as Geographic Information Systems (GIS) will be utilized to map and visualize spatial disparities in resource allocation and service access. Comparative analyses and case studies of successful interventions in other cities will be conducted to derive applicable strategies for Abuja's context. In conclusion, this study aims to contribute to the discourse on sustainable urban planning by advocating for equity and inclusivity in the development of eco-cities. 
By centering on Abuja as a case study, it seeks to provide practical insights and solutions for creating fairer and more sustainable urban environments.

Keywords: fairness, sustainability, geographical information system, equity

Procedia PDF Downloads 36
24 Revolutionizing Manufacturing: Embracing Additive Manufacturing with Eggshell Polylactide (PLA) Polymer

Authors: Choy Sonny Yip Hong

Abstract:

This abstract presents an exploration into the creation of a sustainable bio-polymer compound for additive manufacturing, specifically 3D printing, with a focus on eggshells and polylactide (PLA) polymer. The project initially conducted experiments using a variety of food by-products to create bio-polymers, and promising results were obtained when combining eggshells with PLA polymer. The research journey involved precise measurements, drying of PLA to remove moisture, and the utilization of a filament-making machine to produce 3D printable filaments. The project began with exploratory research and experiments, testing various combinations of food by-products to create bio-polymers. After careful evaluation, it was discovered that eggshells and PLA polymer produced promising results. The initial mixing of the two materials involved heating them just above the melting point. To make the compound 3D printable, the research focused on finding the optimal formulation and production process. The process started with precise measurements of the PLA and eggshell materials. The PLA was placed in a heating oven to remove any absorbed moisture. Handmade testing samples were created to guide the planning for 3D-printed versions. The scrap PLA was recycled and ground into a powdered state. The drying process involved gradual moisture evaporation, which required several hours. The PLA and eggshell materials were then placed into the hopper of a filament-making machine. The machine's four heating elements controlled the temperature of the melted compound mixture, allowing for optimal filament production with accurate and consistent thickness. The filament-making machine extruded the compound, producing filament that could be wound on a wheel. During the testing phase, trials were conducted with different percentages of eggshell in the PLA mixture, including a high percentage (20%). However, poor extrusion results were observed for high eggshell percentage mixtures. Samples were created, and continuous improvement and optimization were pursued to achieve filaments with good performance. To test the 3D printability of the DIY filament, a 3D printer was utilized, set to print the DIY filament smoothly and consistently. Samples were printed and mechanically tested using a universal testing machine to determine their mechanical properties. This testing process allowed for the evaluation of the filament's performance and suitability for additive manufacturing applications. In conclusion, the project explores the creation of a sustainable bio-polymer compound using eggshells and PLA polymer for 3D printing. The research journey involved precise measurements, drying of PLA, and the utilization of a filament-making machine to produce 3D printable filaments. Continuous improvement and optimization were pursued to achieve filaments with good performance. The project's findings contribute to the advancement of additive manufacturing, offering opportunities for design innovation, carbon footprint reduction, supply chain optimization, and collaborative potential. The utilization of eggshell PLA polymer in additive manufacturing has the potential to revolutionize the manufacturing industry, providing a sustainable alternative and enabling the production of intricate and customized products.
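
A quick batch calculation for the blend ratios discussed above; the 1 kg batch size and the two lower fractions are assumptions, and only the 20 wt% trial is taken from the abstract.

```python
# Component masses for eggshell/PLA filament batches
# (assumptions: 1 kg batch; 5% and 10% are hypothetical trial fractions, 20% is the
#  high-fraction trial mentioned in the abstract).
batch_g = 1000.0
for eggshell_frac in (0.05, 0.10, 0.20):
    eggshell = batch_g * eggshell_frac
    pla = batch_g - eggshell
    print(f"{eggshell_frac:.0%} eggshell: {eggshell:.0f} g eggshell powder + {pla:.0f} g dried PLA")
```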

Keywords: additive manufacturing, 3D printing, eggshell PLA polymer, design innovation, carbon footprint reduction, supply chain optimization, collaborative potential

Procedia PDF Downloads 37
23 Modern Technology for Strengthening Concrete Structures Makes Them Resistant to Earthquakes

Authors: Mohsen Abdelrazek Khorshid Ali Selim

Abstract:

Disadvantages and errors of current concrete reinforcement methods: Current concrete reinforcement methods, under their various schools and names, are adopted in most parts of the world. They rely on the so-called concrete slab system, in which the slabs are semi-independent and isolated from each other and from the surrounding concrete columns and beams: the reinforcing steel crosses from one slab to another, or from a slab into the adjacent columns or beams and vice versa, by only a few centimeters. Exactly the same applies to the concrete columns that support the building, where the reinforcing steel extends from the slabs or beams into the columns, or vice versa, by only a few centimeters, and the reinforcing steel extends from the top of a column into the ceiling by only a few centimeters. The same is repeated in the concrete beams that connect the columns and separate the slabs, where the reinforcing steel crosses from one beam to another, or from a beam into the adjacent slabs or columns and vice versa, by only a few centimeters. As a result, the basic structural elements (columns, slabs and beams) all work in isolation from each other and from their surroundings. This traditional reinforcement method may be adequate and durable in geographical areas that are not exposed to earthquakes, where all the loads and tensile forces in the building are constantly directed vertically downward by gravity and are borne directly by the vertical reinforcement. In earthquakes, however, the loads and tensile forces shift from the vertical direction to the horizontal direction, at an angle of inclination that depends on the strength of the earthquake, and most of them are borne by the horizontal reinforcement extending between the basic elements of the building, namely the columns, slabs and beams. Since the reinforcement crossing between columns, slabs and beams does not exceed a few centimeters in any case, the tensile strength, cohesion and bonding between the different parts of the building are very weak, which causes buildings to disintegrate and collapse in the horrific manner seen in the Turkey-Syria earthquake of February 2023, which brought down tens of thousands of buildings in a few seconds and left more than 50,000 dead, hundreds of thousands injured, and millions displaced.
Description of the new earthquake-resistant model: The idea of the new model for reinforcing concrete buildings and structures is based on the theory that we have formulated as follows: the tensile strength, cohesion and bonding between the basic parts of a concrete building (columns, beams and slabs) increase as the lengths of the reinforcing steel bars increase and as the bars extend, branch, and are shared between the different parts of the building. In other words, the strength, solidity and cohesion of concrete buildings increase, and they become earthquake resistant, as the reinforcing bars become longer, extend further, branch, and are shared among the various parts of the building, such as columns, beams and slabs. That is, the reinforcing bars of the columns must extend along their full length, without cutting, from one floor to another until their end. Likewise, the reinforcing bars of the beams must extend along their full length, without cutting, from one beam to another, with the ends of these bars anchored at the bottom of the columns adjacent to the beams. The same applies to the reinforcing bars of the slabs: they must extend along their full length, without cutting, from one slab to another, with their ends anchored either under the adjacent columns or inside the beams adjacent to the slabs, as follows.
First, reinforcement of the columns: The columns receive the lion's share of the reinforcing steel in this model, in both type and quantity, as they contain two types of reinforcing bars. The first type consists of large-diameter bars rising from the base of the building, which form the main bars of the column. These bars must extend over their full standard length of 12 meters or more and rise through three floors; if further floors are to be added, bars of the same diameter and length are added above the second floor. The second type consists of smaller-diameter bars, the same bars used to reinforce the beams and slabs: the bars reinforcing the beams and slabs facing each column are bent down inside that column and run along its entire length. This requires pouring the columns and the roof in one operation, which most engineers do not prefer, but we prefer this method because it allows the reinforcing bars of both the beams and the slabs to extend to the bottom of the columns, so that the entire building becomes one cohesive, earthquake-resistant concrete block. Second, reinforcement of the beams: The beam reinforcing bars must also extend over a full length of 12 meters or more without cutting. The ends of the bars are bent and dropped inside the column at the start of the beam, down to its bottom; the bars then run through the beam so that their other ends drop under the facing column at the end of the beam. A bar may also cross over the head of a column, pass through an adjacent beam, and terminate at the bottom of a third column, depending on the lengths of the bars and the beams. Third, reinforcement of the slabs: The slab reinforcing bars must likewise extend over their full length of 12 meters or more without cutting, and two cases are distinguished. In the first case, bars facing the columns, the ends are dropped inside one of the columns; the bars then cross the adjacent slab and their other ends drop below the opposite column. A bar may also cross over the head of the adjacent column, pass through another adjacent slab, and terminate at the bottom of a third column, depending on the dimensions of the slabs and the lengths of the bars. In the second case, bars facing the beams, the ends must be bent into a square or rectangle matching the width and height of the beam, and this square or rectangle is dropped inside the beam at the start of the slab, where it serves as the stirrup reinforcement of the beam. The bars are then extended along the length of the slab, and at the end of the slab they are bent down to the bottom of the adjacent beam in the shape of the letter U, after which they are extended into the adjacent slab; this is repeated in the same way inside the other adjacent beams until the end of the bar, which is then bent down into a square or rectangle inside the beam, as at its start.

Keywords: earthquake resistant buildings, earthquake resistant concrete constructions, new technology for reinforcement of concrete buildings, new technology in concrete reinforcement

Procedia PDF Downloads 29