Search results for: analytical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5966

536 Braille Code Matrix

Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane

Abstract:

According to the World Health Organization (WHO), there are almost 285 million people with a visual disability, 39 million of whom are blind. Nevertheless, there is a code that makes life easier for these people and allows them to access information more readily: the Braille code. Several commercial devices allow Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, we know that 90% of blind people in the world live in low-income countries. The aim of our contribution is to design an original microactuator for Braille reading that is ergonomic, inexpensive, and has the lowest possible energy consumption. Nowadays, piezoelectric devices provide the best actuation at low actuation voltages. In this study, we focus on piezoelectric (PZT) material, which can bring together all these conditions. We propose to use a matrix composed of six actuators to form the 63 basic combinations of the Braille code, which contain letters, numbers, and special characters, in compliance with the standards of the Braille code. In this work, we use a finite element model built in the Comsol Multiphysics software to design and model this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression; this deformation depends on the thin-film thickness and the design of the membrane arms. The actuator composed of four arms gives the highest deflection and always produces a domed deformation at the center of the device, as required by the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V). We noticed that the deflection is a linear function of the voltage and that it depends not only on the voltage but also on the thickness of the film used and on the design of the anchoring arms. We were then able to simulate the behavior of the entire matrix and thus display different characters in Braille code. We used these simulation results to build our demonstrator, which is composed of a layer of PDMS on which we place our piezoelectric material, covered by another layer of PDMS to isolate the actuator. In this contribution, we compare our results to optimize the final demonstrator.
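As a quick numerical illustration of the reported actuation behavior, the sketch below (not taken from the paper) uses the roughly linear ~10 µm/V out-of-plane deflection quoted above to estimate the drive voltage for each of the six dots of one Braille cell. The dot patterns, the 0.5 mm target dot height, and all names are assumptions made only for the example.

```python
# Illustrative sketch, assuming the ~10 µm/V linear deflection quoted in the abstract.
SENSITIVITY_UM_PER_V = 10.0   # assumed out-of-plane deflection per volt
TARGET_HEIGHT_UM = 500.0      # assumed raised-dot height for tactile reading (~0.5 mm)

# Hypothetical 6-dot patterns (1 = raised); dots numbered 1-6 as in a Braille cell.
BRAILLE_PATTERNS = {
    "a": (1, 0, 0, 0, 0, 0),
    "b": (1, 1, 0, 0, 0, 0),
    "c": (1, 0, 0, 1, 0, 0),
}

def actuation_voltages(char: str) -> list[float]:
    """Return the drive voltage for each of the six actuators of one cell."""
    pattern = BRAILLE_PATTERNS[char]
    required = TARGET_HEIGHT_UM / SENSITIVITY_UM_PER_V  # volts needed for a raised dot
    return [required if dot else 0.0 for dot in pattern]

if __name__ == "__main__":
    for letter in "abc":
        print(letter, actuation_voltages(letter))
```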

Keywords: Braille code, Comsol software, microactuators, piezoelectric

Procedia PDF Downloads 347
535 Antecedents and Consequents of Organizational Politics: A Select Study of a Central University

Authors: Poonam Mishra, Shiv Kumar Sharma, Sanjeev Swami

Abstract:

Purpose: The purpose of this paper is to investigate the relationship of perceived organizational politics with three levels of antecedents (i.e., organizational level, work environment level, and individual level) and with its consequents simultaneously. The study addresses the antecedents and consequents of perceived political behavior in the higher education sector of India, with specific reference to a central university. Design/Methodology/Approach: A conceptual framework and hypotheses were first developed on the basis of a review of previous studies on organizational politics. A questionnaire was then developed carrying 66 items related to 8 constructs and the demographic characteristics of respondents. Judgmental sampling was used to select respondents. Primary data were collected through the structured questionnaire from 45 faculty members of a central university. The sample consists of Professors, Associate Professors, and Assistant Professors from various departments of the university. To test the hypotheses, the data were analyzed statistically using partial least squares structural equation modeling (PLS-SEM). Findings: Results indicated strong support for the relationship of organizational politics (OP) with three of the four proposed antecedents, that is, workforce diversity, relationship conflict, and need for power, with relationship conflict having the strongest impact. No significant relationship was found between role conflict and the perception of organizational politics. The three consequences, that is, intention to turnover, job anxiety, and organizational commitment, are significantly impacted by the perception of organizational politics. Practical Implications: This study will be helpful in motivating future research aimed at improving the quality of higher education in India by reducing the antecedents that add to the perception of organizational politics, which ultimately results in unfavorable outcomes. Originality/Value: Although a large number of studies on the antecedents and consequents of perceived organizational politics have been reported, little attention has been paid to testing all the separate but interdependent relationships simultaneously; in this paper, organizational politics is treated simultaneously as a dependent variable and as an independent variable in subsequent relationships.

Keywords: organizational politics, workforce diversity, relationship conflict, role conflict, need for power, intention to turnover, job anxiety, organizational commitment

Procedia PDF Downloads 481
534 Engaging Employees in Innovation: A Quantitative Study on the Role of Affective Commitment to Change among Norwegian Employees in Higher Education

Authors: Barbara Rebecca Mutonyi, Chukwuemeka Echebiri, Terje Slåtten, Gudbrand Lien

Abstract:

The concept of affective commitment to change has scarcely been explored among employees in the higher education literature. The present study addresses this knowledge gap by examining how psychological factors, such as psychological empowerment (PsyEmp) and psychological capital (PsyCap), promote affective commitment to change. As affective commitment to change has been identified by previous studies as an important aspect of implementation behavior, the study examines its relationship with employee innovative behavior (EIB) in higher education. The study proposes mediation relationships between PsyEmp, PsyCap, and affective commitment to change. A sample of 250 employees in higher education in Norway was used. The study employed an online survey for data collection and utilized Stata software to perform partial least squares structural equation modeling to test the proposed hypotheses. Through bootstrapping, the study was able to test for mediating effects. The findings show a strong direct relationship between the leadership factor PsyEmp and the individual factor PsyCap (β = 0.453). In addition, both PsyEmp and PsyCap are related to affective commitment to change (β = 0.28 and β = 0.249, respectively). In total, PsyEmp and PsyCap explain about 10% of the variance in affective commitment to change. Further, the direct effect of affective commitment to change on EIB is also supported (β = 0.183). The three factors, PsyEmp, PsyCap, and affective commitment to change, explain nearly 40% (R² = 0.39) of the variance found in EIB. The relationship between PsyEmp and affective commitment to change is mediated through the individual factor PsyCap. In order to effectively promote affective commitment to change among higher education employees, higher education managers should focus on both the leadership factor, PsyEmp, and the individual factor, PsyCap, of their employees. In this regard, higher education managers should strengthen employees' EIB by providing autonomy, creating a safe environment that encourages innovative thinking and action, and giving employees in higher education opportunities to be involved in changes occurring at work. This contributes to strengthening employees' affective commitment to change, which further improves their EIB in their work roles as higher education employees. As such, the results of this study point to the ambidextrous nature of the concepts of affective commitment to change and EIB, which should be considered in future studies of innovation in higher education research.
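As a small illustration of the bootstrapped mediation test mentioned above, the sketch below estimates and bootstraps an indirect (mediated) effect on synthetic data. It is not the authors' PLS-SEM model; the path values, sample size, and variable names are assumptions chosen only to mirror the PsyEmp to PsyCap to affective-commitment chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250  # same sample size as the survey; the data here are synthetic
psy_emp = rng.normal(size=n)                            # predictor (PsyEmp)
psy_cap = 0.45 * psy_emp + rng.normal(size=n)           # mediator (PsyCap)
commitment = 0.25 * psy_cap + 0.20 * psy_emp + rng.normal(size=n)  # outcome

def ols_coeffs(X, y):
    """Ordinary least squares coefficients for y = X @ beta (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols_coeffs(np.c_[np.ones_like(x), x], m)[1]      # x -> m path
    b = ols_coeffs(np.c_[np.ones_like(x), x, m], y)[2]   # m -> y path, controlling for x
    return a * b

# Percentile bootstrap of the indirect effect a*b.
boot = []
for _ in range(2000):
    s = rng.choice(n, size=n, replace=True)
    boot.append(indirect_effect(psy_emp[s], psy_cap[s], commitment[s]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(psy_emp, psy_cap, commitment):.3f}, "
      f"95% bootstrap CI = [{low:.3f}, {high:.3f}]")
```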

Keywords: affective commitment to change, psychological capital, innovative behavior, psychological empowerment, higher education

Procedia PDF Downloads 105
533 Description of Decision Inconsistency in Intertemporal Choices and Representation of Impatience as a Reflection of Irrationality: Consequences in the Field of Personalized Behavioral Finance

Authors: Roberta Martino, Viviana Ventre

Abstract:

Empirical evidence has, over time, confirmed that the behavior of individuals is inconsistent with the descriptions provided by the Discounted Utility Model, an essential reference for calculating the utility of intertemporal prospects. The model assumes that individuals calculate the utility of intertemporal prospects by adding up the values of all outcomes, obtained by multiplying the cardinal utility of each outcome by the discount function estimated at the time the outcome is received. The trend of the discount function is crucial for the preferences of the decision-maker because it represents the perception of the future, and its trend causes temporally consistent or temporally inconsistent preferences. In particular, because different formulations of the discount function lead to different conclusions in predicting choice, the descriptive ability of models with a hyperbolic trend is greater than that of linear or exponential models. Suboptimal choices from any temporal point of view are the consequence of this mechanism, whose psychological factors are encapsulated in the trend of the discount rate. In addition, analyzing the decision-making process from a psychological perspective, there is an equivalence between the selection of dominated prospects and a degree of impatience that decreases over time. The first part of the paper describes and investigates the anomalies of the discounted utility model by relating the cognitive distortions of the decision-maker to the emotional factors that are generated during the evaluation and selection of alternatives. Specifically, by studying the degree to which impatience decreases, it is possible to quantify how the psychological and emotional mechanisms of the decision-maker result in a lack of decision persistence. In addition, this description presents inconsistency as the consequence of an inconsistent attitude towards time-delayed choices. The second part of the paper presents an experimental phase in which we show the relationship between inconsistency and impatience in different contexts. Analysis of the degree to which impatience decreases confirms the influence of the decision-maker's emotional impulses for each anomaly of the discounted utility model discussed in the first part of the paper. This work provides an application in the field of personalized behavioral finance. Indeed, the numerous behavioral diversities, evident even in the degrees of decrease in impatience observed in the experimental phase, support the idea that optimal strategies may not satisfy individuals in the same way. With the aim of homogenizing the categories of investors and providing a personalized approach to advice, the results obtained in the experimental phase are used in a complementary way with the information available in the field of behavioral finance to implement the Analytic Hierarchy Process model in intertemporal choices, which is useful for strategic personalization. In the construction of the Analytic Hierarchy Process, the degree of decrease in impatience is understood as reflecting irrationality in decision-making and is therefore used for the construction of weights between anomalies and behavioral traits.
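A minimal sketch of the mechanism discussed above: an exponential discounter values a smaller-sooner and a larger-later reward consistently at any common delay, while a hyperbolic discounter can reverse its preference as the common delay changes. The discount rates and reward amounts below are illustrative, not values from the experimental phase.

```python
import numpy as np

def exponential(t, r=0.10):
    """Exponential discount factor exp(-r*t); r is an assumed discount rate."""
    return np.exp(-r * t)

def hyperbolic(t, k=0.15):
    """Simple hyperbolic discount factor 1/(1+k*t); k is an assumed impatience parameter."""
    return 1.0 / (1.0 + k * t)

# A smaller-sooner vs larger-later pair, evaluated with and without a common extra delay.
small, t_small = 100.0, 1.0
large, t_large = 150.0, 5.0

for delay in (0.0, 20.0):
    for name, f in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        v_small = small * f(t_small + delay)
        v_large = large * f(t_large + delay)
        choice = "smaller-sooner" if v_small > v_large else "larger-later"
        print(f"extra delay = {delay:4.0f}  {name:12s}  prefers {choice}")
# The exponential discounter keeps the same preference at both delays;
# the hyperbolic discounter flips, which is the time inconsistency discussed above.
```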

Keywords: analytic hierarchy process, behavioral finance, financial anomalies, impatience, time inconsistency

Procedia PDF Downloads 59
532 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase Photovoltaic Power System Control and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV systems directly convert solar radiation into electric power without harming the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations in solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels' maximum power and feeding it into the grid at unity power factor become most important. Contributions in this direction have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. Moreover, if each HBI module is built as a two-stage inverter, the many extra dc–dc converters not only increase the complexity of the power circuit and control and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary. The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; 2) a grid-connected control for the qZS-CMI based PV system, where all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control.
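The abstract does not state which MPPT algorithm the independent trackers use; the sketch below shows perturb and observe (P&O) only as a common, simple example of how a per-module PV voltage reference could be updated. The voltages, powers, and step size are illustrative assumptions.

```python
def perturb_and_observe(v_now, p_now, v_prev, p_prev, step=0.5):
    """Return the next PV voltage reference (V) for one module.

    Classic P&O rule: if the last perturbation increased the measured power,
    keep perturbing in the same direction; otherwise reverse direction.
    """
    if p_now > p_prev:
        direction = 1.0 if v_now > v_prev else -1.0   # power rose: keep going
    else:
        direction = -1.0 if v_now > v_prev else 1.0   # power fell: turn around
    return v_now + direction * step

# Toy usage: each call yields the voltage reference handed to that module's
# dc-link / grid-tie current controller.
v_ref = perturb_and_observe(v_now=180.0, p_now=950.0, v_prev=179.5, p_prev=945.0)
print(v_ref)  # 180.5: power rose while voltage rose, so keep increasing the reference
```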

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 534
531 Mathematical Modeling on Capturing of Magnetic Nanoparticles in an Implant Assisted Channel for Magnetic Drug Targeting

Authors: Shashi Sharma, V. K. Katiyar, Uaday Singh

Abstract:

The ability to manipulate magnetic particles in fluid flows by means of inhomogeneous magnetic fields is used in a wide range of biomedical applications, including magnetic drug targeting (MDT). In MDT, magnetic carrier particles bound with drug molecules are injected into the vascular system upstream from the malignant tissue and attracted or retained at a specific region in the body with the help of an external magnetic field. Although the concept of MDT has been around for many years, widespread acceptance of the technique is still lacking, despite the fact that it has shown some promise in both in vivo and clinical studies. This is because traditional MDT has some inherent limitations. Typically, the magnetic force is not very strong, and it is also very short ranged. Since the magnetic force must overcome rather large hydrodynamic forces in the body, MDT applications have been limited to sites located close to the surface of the skin. Even in this most favorable situation, studies have shown that it is difficult to collect appreciable amounts of the magnetic drug carrier particles (MDCPs) at the target site. To overcome these limitations of the traditional MDT approach, Ritter and co-workers reported implant-assisted magnetic drug targeting (IA-MDT). In IA-MDT, magnetic implants are placed strategically at the target site to greatly and locally increase the magnetic force on the MDCPs and help to attract and retain them at the targeted region. In the present work, we develop a mathematical model to study the capture of magnetic nanoparticles flowing in a fluid in an implant-assisted cylindrical channel under a magnetic field. A coil of ferromagnetic SS 430 is implanted inside the cylindrical channel to enhance the capture of magnetic nanoparticles under the magnetic field. The dominant magnetic and drag forces, which significantly affect the capture of nanoparticles, are incorporated in the model. The model results show that the capture efficiency increases from 23 to 51% as the magnetic field is increased from 0.1 to 0.5 T. The capture efficiency increases with the magnetic field because the magnetization force, which is attractive in nature and responsible for attracting or capturing the magnetic particles, increases and results in the capture of a larger number of magnetic particles.
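For a rough sense of the force balance described above, the sketch below compares the magnetization force on a single nanoparticle with the Stokes drag it experiences, for the flux densities considered in the study. It is not the authors' model; the particle radius, susceptibility, field gradient, viscosity, and flow speed are assumed values, and the point is only that the attractive force scales linearly with the applied field.

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability, T*m/A
r = 100e-9                # assumed particle radius, m
V = 4.0 / 3.0 * math.pi * r**3
chi = 1.0                 # assumed effective volume susceptibility (dimensionless)
eta = 3.5e-3              # assumed blood-like dynamic viscosity, Pa*s
u = 1e-2                  # assumed local flow speed, m/s
grad_B = 100.0            # assumed field gradient near the implanted coil, T/m

for B in (0.1, 0.3, 0.5):                    # flux densities considered in the abstract, T
    f_mag = V * chi * B * grad_B / MU0       # magnetization (attractive) force estimate, N
    f_drag = 6.0 * math.pi * eta * r * u     # Stokes drag at speed u, N
    print(f"B = {B:.1f} T: F_mag = {f_mag:.2e} N, F_drag = {f_drag:.2e} N, "
          f"ratio = {f_mag / f_drag:.1e}")
# F_mag grows linearly with B (and with the local gradient the implant provides),
# while the drag is field-independent, which is why capture improves at higher fields.
```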

Keywords: capture efficiency, implant assisted-magnetic drug targeting (IA-MDT), magnetic nanoparticles, modelling

Procedia PDF Downloads 451
530 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches

Authors: Shani Brathwaite, Deborah Villarroel-Lamb

Abstract:

Wave run-up may be defined as the time-varying position of the landward extent of the water’s edge, measured vertically from the mean water level position. The hydrodynamics of the swash zone and the accurate prediction of maximum wave run-up, play a critical role in the study of coastal engineering. The understanding of these processes is necessary for the modeling of sediment transport, beach recovery and the design and maintenance of coastal engineering structures. However, due to the complex nature of the swash zone, there remains a lack of detailed knowledge in this area. Particularly, there has been found to be insufficient consideration of bed porosity and ultimately infiltration/exfiltration processes, in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity. The greater the rate of infiltration during an event, associated with a larger bed porosity, the lower the magnitude of the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone. It also seeks to pay particular attention to how well various empirical formulae can predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches. Wave data from wave gauges and wave models will be used as well as porosity measurements collected using a double ring infiltrometer. The relationship between maximum wave run-up and differing physical parameters will be investigated using correlation analyses. These physical parameters comprise wave and beach characteristics such as wave height, wave direction, period, beach slope, the magnitude of wave setup, and beach porosity. Most parameterizations to determine the maximum wave run-up are described using differing parameters and do not always have a good predictive capability. This study seeks to improve the formulation of wave run-up by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes. This will further contribute to the improvement of the prediction of sediment transport, beach recovery and design of coastal engineering structures in Trinidad.

Keywords: beach porosity, empirical models, infiltration, swash, wave run-up

Procedia PDF Downloads 341
529 Engineering Topology of Photonic Systems for Sustainable Molecular Structure: Autopoiesis Systems

Authors: Moustafa Osman Mohammed

Abstract:

This paper introduces topological order in described social systems, starting with the original concept of autopoiesis introduced by biologists and scientists, including the modification of general systems based on socialized medicine. Topological order is important in describing physical systems for exploiting optical systems and improving photonic devices. The states of topological order have some interesting properties, such as topological degeneracy and fractional statistics, that reveal the entanglement origin of topological order. Topological ideas in photonics form exciting developments in solid-state materials, which are insulating in the bulk while conducting electricity on their surface without dissipation or back-scattering, even in the presence of large impurities. A specific type of autopoiesis system is interrelated with the main categories among existing groups of ecological phenomena at the interaction of the social and medical sciences. The hypothesis, nevertheless, is that there is a nonlinear interaction with the natural environment, an 'interactional cycle' for exchanging photon energy with molecules without changes in topology. The engineering topology of a biosensor is based on the excitation boundary of surface electromagnetic waves in photonic band gap multilayer films. The device operation is similar to surface plasmon biosensors, in which a photonic band gap film replaces the metal film as the medium where surface electromagnetic waves are excited. The use of a photonic band gap film offers a sharper surface wave resonance, leading to the potential of greatly enhanced sensitivity. Thus, the properties of the photonic band gap material are engineered to operate the sensor at any wavelength and to support a surface wave resonance that ranges up to 470 nm, a wavelength not generally accessible with surface plasmon sensing. Lastly, the photonic band gap films have robust mechanical properties that offer new substrates for surface chemistry, allowing one to understand the molecular design structure and to create sensing chip surfaces with different concentrations of DNA sequences in solution, in order to observe and track the surface mode resonance under the influence of processes that take place in the spectroscopic environment. These processes have led to the development of several advanced analytical technologies, which are automated, real-time, reliable, reproducible, and cost-effective. This results in faster and more accurate monitoring and detection of biomolecules by refractive index sensing, such as antibody-antigen reactions and DNA or protein binding. Ultimately, the still-debated molecular frictional properties adjust to each other in order to form the unique spatial structure and dynamics of biological molecules, providing a mutual environmental contribution to the investigation of changes due to the pathogenic archival architecture of cell clusters.

Keywords: autopoiesis, photonics systems, quantum topology, molecular structure, biosensing

Procedia PDF Downloads 76
528 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 78
527 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases, such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease's molecular mechanisms. Through the analysis of microarray data, we examined gene expression in the media and neo-intima from plaques, as well as in distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded, respectively, to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database. The discriminative power of the six hub genes was estimated with a robust classifier that achieved an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
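As a sketch of how a small gene panel like the one above can be scored as a classifier, the snippet below trains a logistic regression on a synthetic expression matrix and reports a cross-validated ROC AUC. The data, sample size, and model choice are assumptions for illustration; the AUC of 0.873 quoted above comes from the actual cohort, not from this code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_genes = 64, 6                              # six-gene panel, synthetic cohort
X = rng.normal(size=(n_samples, n_genes))               # stand-in expression values
# Synthetic labels loosely driven by two of the "genes" plus noise (disease = 1).
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.5, size=n_samples) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC on synthetic data: {roc_auc_score(y, scores):.3f}")
```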

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 51
526 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from Ferroalloys Plant

Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen

Abstract:

Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of hydrogen and carbon arranged in aromatic rings. PAH are neutral, non-polar molecules that are produced by incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the external and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing "SCORE" project, supported by industry and the Norwegian Research Council, is to reduce potential PAH emissions from an off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH themselves. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data on the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and there is only a limited number of studies. The paper presents an evaluation of the kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modelling approach, the oxidation is modelled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water. A chemical reactor network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in the evaluation of the oxidation behavior of PAH under various conditions.
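To make the residence-time and temperature argument concrete, the back-of-the-envelope sketch below treats a PAH as a single species with a global first-order oxidation rate in an ideal perfectly stirred reactor and prints the resulting conversion. The Arrhenius parameters are placeholders, not fitted PAH kinetics, and this is not the CRN model used in the study.

```python
import math

R_GAS = 8.314        # J/(mol*K)
A = 1.0e10           # assumed pre-exponential factor, 1/s
EA = 180e3           # assumed activation energy, J/mol

def psr_conversion(T, tau):
    """Steady-state conversion in an ideal PSR/CSTR with first-order kinetics:
    X = k*tau / (1 + k*tau), with k = A*exp(-Ea/(R*T))."""
    k = A * math.exp(-EA / (R_GAS * T))
    return k * tau / (1.0 + k * tau)

for T in (1100.0, 1300.0, 1500.0):          # combustion chamber temperature, K
    for tau in (0.1, 0.5, 1.0):             # residence time, s
        print(f"T = {T:.0f} K, tau = {tau:.1f} s -> conversion X = {psr_conversion(T, tau):.3f}")
# Conversion rises steeply with temperature and more gradually with residence time,
# which is the trade-off the combustion chamber sizing has to resolve.
```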

Keywords: PAH, PSR, energy recovery, ferroalloy furnace

Procedia PDF Downloads 260
525 Flux-Gate vs. Anisotropic Magneto Resistance Magnetic Sensors Characteristics in Closed-Loop Operation

Authors: Neoclis Hadjigeorgiou, Spyridon Angelopoulos, Evangelos V. Hristoforou, Paul P. Sotiriadis

Abstract:

The increasing demand for accurate and reliable magnetic measurements over the past decades has paved the way for the development of different types of magnetic sensing systems as well as of more advanced measurement techniques. Anisotropic magnetoresistance (AMR) sensors have emerged as a promising solution for applications requiring high resolution, providing an ideal balance between performance and cost. However, certain issues of AMR sensors, such as their non-linear response and measurement noise, are rarely discussed in the relevant literature. In this work, an analog closed-loop compensation system is proposed, developed, and tested as a means to eliminate the non-linearity of the AMR response, reduce the 1/f noise, and enhance the sensitivity of the magnetic sensor. Additional performance aspects, such as cross-axis and hysteresis effects, are also examined. This system was analyzed using an analytical model and a P-Spice model, considering both the sensor itself and the accompanying electronic circuitry. In addition, a commercial closed-loop architecture flux-gate sensor (calibrated and certified) has been used for comparison purposes. Three different experimental setups were constructed for the purposes of this work, used for DC magnetic field measurements, AC magnetic field measurements, and noise density measurements, respectively. The DC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to calibrate and characterize the system under consideration. A high-accuracy DC power supply was used to provide the operating current to the Helmholtz coils, and the results were recorded by a multichannel voltmeter. The AC magnetic field measurements were conducted in a laboratory environment employing a cubic Helmholtz coil setup in order to examine the effective bandwidth of both the proposed system and the flux-gate sensor. A voltage-controlled current source driven by a function generator was utilized for the Helmholtz coil excitation, and the results were observed on an oscilloscope. The third experimental apparatus incorporated an AC magnetic shielding construction composed of several layers of electrical steel that had been demagnetized prior to the experimental process; each sensor was placed inside alone, and its response was captured by the oscilloscope. The preliminary experimental results indicate that the closed-loop AMR response presented a maximum deviation of 0.36% with respect to the ideal linear response, while the corresponding values for the open-loop AMR system and the flux-gate sensor reached 2% and 0.01%, respectively. Moreover, the noise density of the proposed closed-loop AMR sensor system remained almost as low as the noise density of the AMR sensor itself, yet considerably higher than that of the flux-gate sensor. All relevant numerical data are presented in the paper.
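As a small illustration of the linearity figure quoted above, the sketch below computes the maximum deviation of a sensor response from its best straight-line fit, expressed as a percentage of full scale. The response values are synthetic, with a slight curvature added on purpose; this is not the measured AMR or flux-gate data.

```python
import numpy as np

field_uT = np.linspace(-100, 100, 21)                  # applied field, µT
response_V = 0.02 * field_uT + 1e-5 * field_uT**2      # toy response with slight curvature

slope, intercept = np.polyfit(field_uT, response_V, 1)  # best straight-line fit
fit = slope * field_uT + intercept
full_scale = response_V.max() - response_V.min()
nonlinearity = 100.0 * np.max(np.abs(response_V - fit)) / full_scale
print(f"max deviation from the linear fit: {nonlinearity:.2f} % of full scale")
```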

Keywords: AMR sensor, chopper, closed loop, electronic noise, magnetic noise, memory effects, flux-gate sensor, linearity improvement, sensitivity improvement

Procedia PDF Downloads 411
524 Rehabilitation of Orthotropic Steel Deck Bridges Using a Modified Ortho-Composite Deck System

Authors: Mozhdeh Shirinzadeh, Richard Stroetmann

Abstract:

An orthotropic steel deck bridge consists of a deck plate, longitudinal stiffeners under the deck plate, cross beams, and the main longitudinal girders. Due to their several advantages, orthotropic steel deck (OSD) systems have been utilized in many bridges worldwide. The significant feature of this structural system is its high load-bearing capacity combined with a relatively low dead weight. In addition, cost efficiency and the possibility of rapid field erection have made the orthotropic steel deck a popular type of bridge worldwide. However, OSD bridges are highly susceptible to fatigue damage; the large number of welded joints can be regarded as the main weakness of this system. This problem is particularly evident in bridges built before 1994, when fatigue design criteria had not yet been introduced into the bridge design codes. Recently, an orthotropic-composite slab (OCS) for road bridges has been experimentally and numerically evaluated and developed at Technische Universität Dresden as part of the AIF-FOSTA research project P1265. The results of the project have provided a solid foundation for the design and analysis of orthotropic-composite decks with dowel strips as a durable alternative to conventional steel or reinforced concrete decks. In continuation, building on the achievements of that project, the application of a modified ortho-composite deck to an existing typical OSD bridge is investigated. Composite action is obtained by using rows of dowel strips in a clothoid (CL) shape. The effect of the proposed modification approach is assessed with regard to the Eurocode criteria for the different fatigue detail categories of an OSD bridge. Moreover, a numerical parametric study is carried out utilizing finite element software to determine the impact of different variables, such as the size and arrangement of the dowel strips, the application of transverse or longitudinal rows of dowel strips, and local wheel loads. For verification of the simulation technique, experimental results from a segment of an OCS deck tested in project P1265 are used. Fatigue assessment is performed based on the latest draft of Eurocode 1993-2 (2024) for the most probable detail categories (hot spots) reported in previous statistical studies. Then, an analytical comparison is provided between the typical orthotropic steel deck and the modified ortho-composite deck bridge in terms of fatigue and durability. The load-bearing capacity of the bridge, the critical deflections, and the composite behavior are also evaluated and compared. The results give a comprehensive overview of the efficiency of the rehabilitation method, considering the required design service life of the bridge. Moreover, the proposed approach is assessed with regard to the construction method, details, and practical aspects, as well as from the economic point of view.
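For readers unfamiliar with how such a fatigue assessment is typically summarized, the sketch below applies a Palmgren-Miner damage sum with the single-slope S-N relation N = 2e6 * (Δσ_C/Δσ)^3 that Eurocode detail categories are referenced to (constant-amplitude limit and cut-off are ignored here for simplicity). The detail category and the stress-range spectrum are invented values, not results of the study above.

```python
def cycles_to_failure(stress_range, detail_category=71.0, m=3.0):
    """Endurance (cycles) for a stress range in MPa, for a detail with reference
    strength detail_category (MPa) at 2 million cycles; single slope m, no cut-off."""
    return 2.0e6 * (detail_category / stress_range) ** m

# Assumed annual stress-range spectrum at one hot spot: {range in MPa: cycles per year}.
spectrum = {80.0: 5.0e4, 60.0: 2.0e5, 40.0: 1.0e6}

damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum.items())
print(f"Miner damage per year: {damage_per_year:.3e}")
print(f"estimated fatigue life: {1.0 / damage_per_year:.0f} years")
```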

Keywords: composite action, fatigue, finite element method, steel deck, bridge

Procedia PDF Downloads 62
523 Define Immersive Need Level for Optimal Adoption of Virtual Worlds with BIM Methodology

Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin

Abstract:

In the construction industry, there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required. This transition is important to improve knowledge circulation, product quality, production sustainability and user satisfaction. However, there is currently a lack of a common definition of immersion in the construction industry, leading to misunderstandings and limiting the use of advanced immersive technologies. Furthermore, the lack of guidelines and a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the " Immersive Need Level." This concept, matured by recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies, avoiding misunderstandings, redundancies, or paradoxes. While the concept of "Informational Need Level" has been well clarified with the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature only provide some hints about the technology and related equipment, leaving the procedural approach and the user's free interpretation completely unexplored. Therefore, once the necessary knowledge and information are acquired (Informational Need Level), it is possible to transition to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions in a more thorough and detailed manner, with user involvement, via different immersion scales, in the design, construction or management process of a building or infrastructure. The need for information constitutes the basis for acquiring relevant knowledge and information, while the immersive need can manifest itself later, once a solid information base has been solidified, using the senses and developing immersive awareness. This new approach could solve the problem of inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.

Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive level of need

Procedia PDF Downloads 79
522 Relative Importance of Different Mitochondrial Components in Maintaining the Barrier Integrity of Retinal Endothelial Cells: Implications for Vascular-associated Retinal Diseases

Authors: Shaimaa Eltanani, Thangal Yumnamcha, Ahmed S. Ibrahim

Abstract:

Purpose: Mitochondrial dysfunction is central to the breakdown of the barrier integrity of retinal endothelial cells (RECs) in various blinding eye diseases such as diabetic retinopathy and retinopathy of prematurity. Therefore, we aimed to dissect the role of different mitochondrial components, specifically those of oxidative phosphorylation (OxPhos), in maintaining the barrier function of RECs. Methods: Electric cell-substrate impedance sensing (ECIS) technology was used to assess in real time the role of different mitochondrial components in the total impedance (Z) of human RECs (HRECs) and its components, the capacitance (C) and the total resistance (R). HRECs were treated with specific mitochondrial inhibitors that target different steps in OxPhos: rotenone for complex I, oligomycin for ATP synthase, and FCCP for uncoupling OxPhos. Furthermore, the data were modeled to investigate the effects of these inhibitors on the three parameters that govern the total resistance of the cells: cell-cell interactions (Rb), cell-matrix interactions (α), and cell membrane permeability (Cm). Results: Rotenone (1 µM) produced the greatest reduction in Z, followed by FCCP (1 µM), whereas no reduction in Z was observed after treatment with oligomycin (1 µM). We then deconvoluted the effect of these inhibitors on Rb, α, and Cm. Firstly, rotenone (1 µM) completely abolished the resistance contribution of Rb, as Rb became zero immediately after the treatment. Secondly, FCCP (1 µM) eliminated the resistance contribution of Rb only after 2.5 hours and increased Cm without a considerable effect on α. Lastly, oligomycin had the lowest impact among these inhibitors on Rb, which became similar to that of the control group by the end of the experiment, without noticeable effects on Cm or α. Conclusion: These results demonstrate differential roles for complex I, complex V, and the coupling of OxPhos in maintaining the barrier functionality of HRECs, with complex I being the most important component in regulating the barrier functionality and the spreading behavior of HRECs. Such differences can be used in investigating gene expression as well as in screening selective agents that improve the functionality of complex I, to be used as a therapeutic approach for treating REC-related retinal diseases.

Keywords: human retinal endothelial cells (hrecs), rotenone, oligomycin, fccp, oxidative phosphorylation, oxphos, capacitance, impedance, ecis modeling, rb resistance, α resistance, and barrier integrity

Procedia PDF Downloads 91
521 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods

Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh

Abstract:

Biosensors are analytical devices with a wide range of applications in biological, chemical, environmental, and clinical analysis. A biosensor comprises a bio-recognition layer, which has biomolecules (enzymes, antibodies, DNA, etc.) immobilized on it for detection of the analyte, and a transducer, which converts the biological signal into an electrical signal. The performance of a biosensor primarily depends on the bio-recognition layer, and therefore it has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5, and TiO2 have been explored extensively as bio-recognition layers. Recently, ZnO has attracted the attention of researchers due to its unique properties such as a high isoelectric point, biocompatibility, stability, high electron mobility, and high electron binding energy. Although there have been many reports on the use of ZnO as a bio-recognition layer, to the authors' knowledge, none has ever examined the correlation between optical properties, such as defect suppression, and the biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low-cost, simple, and low-temperature hydrothermal growth process over a platinum (Pt) coated glass substrate. The ZNR were synthesized in two steps: initially, a seed layer was coated over the substrate (Pt-coated glass), followed by immersion in a nutrient solution of zinc nitrate and hexamethylenetetramine (HMTA) with in situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect on the growth rate anisotropy of the ZnO nanostructures. Clustered and powdery growth of ZnO was observed without the addition of KMnO4, whereas with it, uniform and crystalline ZNR were grown over the substrate. Moreover, the same resulted in the suppression of defects, as observed in the normalized photoluminescence (PL) spectra, since KMnO4 is a strong oxidizing agent that provides an oxygen-rich growth environment. Further, to explore the correlation between defect suppression and the biosensing capability of the ZNR, glucose oxidase (GOx) was immobilized over it using a physical adsorption technique, followed by drop casting of Nafion. The main objective of the work was to analyze the effect of defect suppression on the biosensing capability; therefore, GOx was chosen as a model enzyme, and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth resulted in variation of the optical and charge transfer properties of the ZNR, which in turn had a deep impact on the biosensor figures of merit. The sensitivity of the biosensor was found to increase by 12-18 times due to the variations introduced by the addition of KMnO4 during growth. Amperometric detection of glucose in a continuously stirred buffer solution was performed. Interestingly, defect suppression was observed to contribute to the improvement of the biosensor performance. The detailed mechanism of growth of the ZNR, along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor and the different figures of merit of the biosensor (Glass/Pt/ZNR/GOx/Nafion), will be discussed during the conference.

Keywords: biosensors, defects, KMnO4, ZnO nanorods

Procedia PDF Downloads 272
520 The Relationship between Proximity to Sources of Industry-Related Outdoor Air Pollution and Children's Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010

Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk

Abstract:

Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has important industrial emissions to the air from the Industrial Heartland Alberta (IHA) in the northeast and from the coal-fired power plants (CFPP) in the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data using descriptive, hypothesis testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in the risk of children's asthma ED visits as distance increases around the CFPP area; this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in the risk of children's asthma ED visits as distance increases around the IHA area; this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP, but not around the IHA, within the CMAE. These results are probably explained by the direction of air pollutant dispersion caused by the predominant and subdominant wind directions at each location. The use of different approaches to detect clusters of disease is valuable for a better understanding of the presence, shape, direction, and size of clusters of disease around pollution sources.
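The sketch below illustrates the kind of multivariable model described above: a Poisson regression of DA-level asthma ED visit counts on distance from a putative source, with the child population (times the number of study years) as an offset. The data frame is synthetic and the decay rate is invented; this is not the study's dataset or its final model, which also included direction effects.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "distance_km": rng.uniform(0.5, 30, n),     # distance of each DA centroid from the source
    "children": rng.integers(50, 500, n),       # children aged 2-14 in the DA
})
# Synthetic truth: the visit rate decays with distance from the source.
rate = 0.01 * np.exp(-0.05 * df["distance_km"])
df["visits"] = rng.poisson(rate * df["children"] * 6)   # six study years

X = sm.add_constant(df[["distance_km"]])
model = sm.GLM(df["visits"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["children"] * 6)).fit()
print(model.summary().tables[1])   # a negative distance coefficient indicates declining risk
```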

Keywords: air pollution, asthma, disease cluster, industry

Procedia PDF Downloads 274
519 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in the Indonesian Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning that develops as habit formation and a constructive process while also exercising an oppressive power that constructs the learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the use of the English textbooks 'Real Time: An Interactive English Course for Junior High School Students Year VII-IX'. English has long functioned as a global language, and there is a demand on non-native speakers to master the language if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning in which they are trained, and they are expected to teach the language within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate the cultural dissemination of both local and global values and promote learners' cultural traits from both cultures to avoid misunderstanding and confusion. This also aims to support language learning as a bidirectional process instead of an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as 'the other'. Such a critical premise has led to a discursive analysis referring to how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks researched were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analytical elaboration examines the cultural characteristics in the form of names, terminologies, places, objects, and imageries (not the linguistic aspect) of both cultural domains, English and Indonesian. Comparisons as well as categorizations were made to identify the cultural traits of each language and to scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements were found in the textbook for grade VIII, and 144 foreign elements and 35 local elements were found in the grade IX textbook, demonstrating the unequal distribution of both cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by means of inserting local elements, the learners are continuously subjected to the culture of the target language and forced to internalize its concept of values under the influence of the target language, which tends to marginalize their native culture.

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 254
518 Dynamic Changes in NT-proBNP Levels in Unrelated Donors during Hematopoietic Stem Cells Mobilization

Authors: Natalia V. Minaeva, Natalia A. Zorina, Marina N. Khorobrikh, Philipp S. Sherstnev, Tatiana V. Krivokorytova, Alexander S. Luchinin, Maksim S. Minaev, Igor V. Paramonov

Abstract:

Background: Over the last few decades, the Center for International Blood and Marrow Transplant Research (CIBMTR) and the World Marrow Donor Association (WMDA) have been actively working to ensure the safety of the hematopoietic stem cell (HSC) donation process. Registration of adverse events that may occur during the donation period and establishing a relationship between donation and side effects are included in the WMDA international standards. The level of blood serum N-terminal pro-brain natriuretic peptide (NT-proBNP) is an early marker of myocardial stress. Due to its high analytical sensitivity and specificity, laboratory assessment of NT-proBNP makes it possible to objectively diagnose myocardial dysfunction. It is well known that the main stimulus for proBNP synthesis and secretion from atrial and ventricular cardiac myocytes is myocyte stretch, together with increased myocardial extensibility and pressure in the heart chambers. Aim: The aim of the study was to assess the dynamic changes in the blood serum N-terminal pro-brain natriuretic peptide levels of unrelated donors at various stages of hematopoietic stem cell mobilization. Materials: We examined 133 unrelated donors included in the study, comprising 92 men and 41 women. The NT-proBNP levels were measured before the start of mobilization, on the day of apheresis, and after the donation of allogeneic HSC. The relationship between NT-proBNP levels and body mass index (BMI), ferritin, hemoglobin, and white blood cell (WBC) levels was assessed on the day of apheresis. The median age of the donors was 34 years. Mobilization of HSC was managed with filgrastim administered at a dose of 10 μg/kg daily for 4-5 days. The first leukocytapheresis was performed on day 4 from the start of filgrastim administration. Quantitative values of the blood serum NT-proBNP level are presented as the median (Me) and the first and third quartiles (Q1-Q3). Comparative analysis was carried out using the t-test, and correlation analysis was performed using the Spearman method. Results: The baseline blood serum NT-proBNP levels in all 133 donors were within the reference values (<125 pg/ml) and equaled 21.6 (10.0; 43.3) pg/ml. At the same time, the level of NT-proBNP in women was significantly higher than that in men. On the day of the HSC apheresis, a significant increase in blood serum NT-proBNP levels was detected, to 131.2 (72.6; 165.3) pg/ml (p < 0.001), with higher values in female donors. A statistically significant weak inverse correlation was established between the level of NT-proBNP and the BMI of the donors (-0.18, p = 0.03), as well as the hemoglobin level (-0.33, p < 0.001) and the ferritin level (-0.19, p = 0.03). No relationship was established with the WBC levels achieved as a result of the mobilization of HSC on the day of leukocytapheresis. A day after the apheresis, the blood serum NT-proBNP levels still exceeded the reference values, but a decreasing tendency was observed. Conclusion: An increase in the blood serum NT-proBNP level of unrelated donors during the mobilization of HSC was established. Future studies should clarify the reason for this phenomenon, as well as its effects on donors' long-term health.
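A minimal sketch of the Spearman correlation analysis reported above (NT-proBNP versus BMI and hemoglobin on the day of apheresis). The values generated below are synthetic placeholders with a weak built-in inverse association, not donor data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 133                                        # same cohort size, synthetic values
bmi = rng.normal(25, 4, n)                     # kg/m^2
hemoglobin = rng.normal(145, 12, n)            # g/L
# NT-proBNP with a weak inverse dependence on hemoglobin plus noise, pg/ml.
nt_probnp = 130 - 0.8 * (hemoglobin - 145) + rng.normal(0, 40, n)

for name, x in (("BMI", bmi), ("hemoglobin", hemoglobin)):
    rho, p = spearmanr(x, nt_probnp)
    print(f"NT-proBNP vs {name}: rho = {rho:.2f}, p = {p:.3f}")
```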

Keywords: unrelated donors, mobilization, hematopoietic stem cells, N-terminal pro-brain natriuretic peptide

Procedia PDF Downloads 86
517 Numerical Validation of Liquid Nitrogen Phase Change in a Star-Shaped Ambient Vaporizer

Authors: Yusuf Yilmaz, Gamze Gediz Ilis

Abstract:

Nitrogen gas, which boils at approximately -196°C at atmospheric pressure, is widely used in industry. Nitrogen used in industry must be transported to the plant area in liquid form. Ambient air vaporizers (AAVs) are generally used for the vaporization of cryogenic liquids such as liquid nitrogen (LN2), liquid oxygen (LOX), liquefied natural gas (LNG), and liquid argon (LAR). An AAV consists of a group of star-shaped finned tubes, and the design and shape of the fins are among the most important criteria for vaporizer performance. In this study, the performance of an AAV working with liquid nitrogen was analyzed numerically in a star-shaped aluminum finned pipe. The numerical analysis is performed in order to determine the heat capacity of the vaporizer per meter of pipe length, so that the vaporizer capacity can be predicted for industrial applications. In order to validate the numerical solution, an experimental setup was constructed. The setup includes a liquid nitrogen tank at a pressure of 9 bar connected to the star-shaped aluminum finned tube vaporizer. The inlet and outlet pressures and temperatures of the LN2 in the vaporizer are measured, and the LN2 mass flow rate is also measured and recorded. The numerical solution is compared against these measured data, and the ambient conditions of the experiment are given as boundary conditions to the numerical model. Surface tension and contact angle have a significant effect on the boiling of liquid nitrogen, and an average heat transfer coefficient including convective and nucleate boiling components should be obtained for saturated flow boiling of liquid nitrogen in the finned tube. The Fluent CFD module is used for the numerical solution, the turbulent k-ε model is used to simulate the liquid nitrogen flow, and the phase change is simulated by the evaporation-condensation approach implemented with user-defined functions (UDFs). The comparison of the numerical and experimental results will be shared in this study, along with the calculated performance capacity of the star-shaped finned pipe vaporizer. Based on this numerical analysis, the performance of the vaporizer per unit length can be predicted for industrial applications, and a suitable pipe length of the vaporizer can be determined for specific cases.
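
A back-of-envelope energy balance (not the CFD model itself) can illustrate what "heat capacity per meter of pipe length" means for such a vaporizer: the total heat absorbed from ambient air must cover the latent heat of vaporization plus the sensible heating of the gas, divided by the pipe length. The property values, flow rate, and geometry in the sketch below are illustrative assumptions only.

```python
# Back-of-envelope energy balance: estimate the heat duty per meter of finned pipe
# needed to vaporize and superheat a given LN2 mass flow. Property values and the
# flow/geometry inputs are illustrative assumptions, not the study's measured data.
H_FG = 199e3        # latent heat of vaporization of nitrogen near 1 atm, J/kg (approx.)
CP_GAS = 1040.0     # specific heat of gaseous nitrogen, J/(kg*K) (approx.)

def heat_duty_per_meter(m_dot, t_sat, t_out, pipe_length):
    """Heat absorbed from ambient air per meter of vaporizer pipe, W/m."""
    q_total = m_dot * (H_FG + CP_GAS * (t_out - t_sat))  # total heat duty, W
    return q_total / pipe_length

# Example: 0.02 kg/s of LN2 leaving the vaporizer at -20 C from saturation at -196 C,
# through 12 m of star-shaped finned tube (hypothetical figures).
print(round(heat_duty_per_meter(m_dot=0.02, t_sat=-196.0, t_out=-20.0, pipe_length=12.0), 1))
```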

Keywords: liquid nitrogen, numerical modeling, two-phase flow, cryogenics

Procedia PDF Downloads 104
516 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper emphasizes the algorithmic implementation of legal constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also includes an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors; we investigate how generative design can be used to optimize building performance with respect to these factors, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
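
The generate-then-filter-then-score workflow described above can be sketched in a few lines: candidate massing options are generated, options violating legal parameters are rejected, and the remainder are scored on a composite of aesthetic and environmental proxies. The limits, weights, and scoring rules below are hypothetical placeholders, not the authors' framework.

```python
# Toy sketch of a generate -> check legal constraints -> score workflow.
# All limits, weights, and scoring rules are hypothetical placeholders.
import random

MAX_HEIGHT_M = 26.0   # assumed legal height limit
MAX_FAR = 2.5         # assumed floor-area-ratio limit
PLOT_AREA_M2 = 900.0

def generate_candidate():
    return {
        "floors": random.randint(2, 10),
        "footprint_m2": random.uniform(200.0, 600.0),
        "south_glazing_ratio": random.uniform(0.1, 0.6),
    }

def is_legal(c):
    height = c["floors"] * 3.0                                   # assumed 3 m floor height
    far = c["floors"] * c["footprint_m2"] / PLOT_AREA_M2
    return height <= MAX_HEIGHT_M and far <= MAX_FAR

def score(c):
    daylight = c["south_glazing_ratio"]                          # proxy environmental criterion
    compactness = 1.0 - abs(c["footprint_m2"] - 400.0) / 400.0   # proxy aesthetic criterion
    return 0.5 * daylight + 0.5 * compactness

candidates = [c for c in (generate_candidate() for _ in range(500)) if is_legal(c)]
best = max(candidates, key=score)
print(best, round(score(best), 3))
```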

Keywords: generative design, computational design, parametric design, algorithmic modeling

Procedia PDF Downloads 38
515 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico

Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada

Abstract:

In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are continuously exposed to hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments; as a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as a framework for the quantitative comparison of two landslide susceptibility maps. The maps are created with the Stability Index MAPping (SINMAP) model, based on 1) default geotechnical parameters and 2) geotechnical properties of the volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides comparison tools based on a histogram and a contingency table. The results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. The results also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
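
For readers unfamiliar with SINMAP, the factor of safety it builds on comes from the infinite slope stability model, combining cohesion, slope angle, relative wetness, and the soil friction angle. The sketch below is a minimal rendering of that formula; the parameter values are hypothetical defaults, not the calibrated field data from this study.

```python
# Minimal sketch of the infinite-slope factor of safety underlying SINMAP.
# Parameter values are hypothetical, not the study's calibrated soil data.
import math

def factor_of_safety(slope_deg, wetness, cohesion, phi_deg, water_soil_density_ratio=0.5):
    """Dimensionless infinite-slope factor of safety.

    slope_deg : terrain slope angle in degrees
    wetness   : relative wetness w (saturated thickness / soil thickness), 0..1
    cohesion  : dimensionless combined root + soil cohesion
    phi_deg   : soil internal friction angle in degrees
    """
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    numerator = cohesion + math.cos(theta) * (1.0 - wetness * water_soil_density_ratio) * math.tan(phi)
    return numerator / math.sin(theta)

# Example: a 30-degree slope, half-saturated soil, default-like parameters.
print(round(factor_of_safety(slope_deg=30, wetness=0.5, cohesion=0.25, phi_deg=35), 2))
```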

Keywords: GIS, landslide, modeling, LOGISNET, SINMAP

Procedia PDF Downloads 298
514 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search are driven by the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the ability of the simulation to maintain variation among design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization to be favored as the simulation progresses. In such cases, the generated population of candidate solutions either converges very early in the simulation or continues to maintain such high levels of variation that an optimal set cannot be discerned, providing the user with a solution set that has not evolved efficiently toward the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean and thus that variation within the population is limited, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation as a fitness criterion can be advantageous in generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
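
One possible way to fold a population-spread measure such as the standard deviation into an evolutionary loop is sketched below: the standard deviation of a design metric is monitored each generation and used to re-balance exploration against exploitation. This is a toy illustration under stated assumptions, not the authors' exact fitness formulation or superblock model; the metric, mutation rule, and thresholds are hypothetical.

```python
# Toy sketch: monitor the population standard deviation of a design metric and use it
# to balance exploration and exploitation. All quantities here are hypothetical.
import random
import statistics

def design_metric(genome):
    # Hypothetical single design objective to maximize
    return sum(genome) / len(genome)

def evolve(pop_size=40, genome_len=8, generations=60, target_spread=0.05):
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [design_metric(g) for g in population]
        spread = statistics.pstdev(scores)            # population standard deviation
        # Low spread -> population is converging -> mutate harder to restore variation.
        sigma = 0.02 if spread >= target_spread else 0.10
        parents = sorted(population, key=design_metric, reverse=True)[: pop_size // 2]
        offspring = [[x + random.gauss(0.0, sigma) for x in random.choice(parents)]
                     for _ in range(pop_size - len(parents))]
        population = parents + offspring
    return max(population, key=design_metric)

best = evolve()
print(round(design_metric(best), 3))
```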

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 123
513 Erosion Influencing Factors Analysis: Case of Isser Watershed (North-West Algeria)

Authors: Chahrazed Salhi, Ayoub Zeroual, Yasmina Hamitouche

Abstract:

Soil water erosion poses a significant threat to watersheds in Algeria today. The degradation of storage capacity in large dams over the past two decades, primarily due to erosion, necessitates a comprehensive understanding of the factors that contribute to soil erosion. The Isser watershed, located in the northwestern region of Algeria, faces additional challenges such as recurrent droughts and the presence of fragile marl and clay outcrops, which amplify its susceptibility to water erosion. This study employs advanced techniques such as Geographic Information Systems (GIS) and Remote Sensing (RS), in conjunction with the Canonical Correlation Analysis (CCA) method and the Soil and Water Assessment Tool (SWAT) model, to predict specific erosion patterns and analyze the key factors influencing erosion in the Isser basin. To accomplish this, an array of data sources including rainfall, climatic, hydrometric, land use, soil, digital elevation, and satellite data was utilized. The application of the SWAT model to the Isser basin yielded an average annual soil loss of approximately 16 t/ha/year. Particularly high erosion rates, exceeding 12 t/ha/year, were observed in the central and southern parts of the basin, encompassing 41% of the total basin area. Through canonical correlation analysis, it was determined that vegetation cover and topography exerted the most substantial influence on erosion. Consequently, the study identified significant and spatially heterogeneous erosion throughout the study area. The impact of topography on soil loss was found to be directly proportional, while vegetation cover exhibited an inversely proportional relationship. Modeling specific erosion for the Ladrat dam sub-basin estimated a rate of around 39 t/ha/year, consistent with the capacity loss of 17.80% recorded in the 2019 bathymetric survey. The findings of this research provide valuable decision-support tools for soil conservation managers, empowering them to make informed decisions regarding soil conservation measures.
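
The canonical correlation step described above, relating erosion responses to candidate drivers such as vegetation cover and slope, can be sketched with scikit-learn's CCA. The arrays below are synthetic placeholders, not the Isser watershed data, and the variable names are hypothetical.

```python
# Minimal sketch of relating erosion to candidate drivers with canonical correlation
# analysis. The arrays are synthetic placeholders, not the Isser watershed data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 120
# Hypothetical driver variables per sub-basin: vegetation cover, slope, rainfall erosivity
drivers = rng.random((n, 3))
# Hypothetical response: soil loss increasing with slope and decreasing with cover
soil_loss = 5 + 20 * drivers[:, 1] - 15 * drivers[:, 0] + rng.normal(0, 1, n)
responses = np.column_stack([soil_loss, rng.random(n)])  # second response as filler

cca = CCA(n_components=1)
U, V = cca.fit_transform(drivers, responses)
canonical_corr = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(round(canonical_corr, 2))
```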

Keywords: Isser watershed, RS, CCA, SWAT, vegetation cover, topography

Procedia PDF Downloads 56
512 Impacts of Present and Future Climate Variability on Forest Ecosystem in Mediterranean Region

Authors: Orkan Ozcan, Nebiye Musaoglu, Murat Turkes

Abstract:

Climate change is widely recognized as one of the most real, pressing, and significant global problems. The concept of ‘climate change vulnerability’ helps us to better comprehend the cause-and-effect relationships behind climate change and its impact on human societies, socioeconomic sectors, and physiographical and ecological systems. In this study, multifactorial spatial modeling was applied to evaluate the vulnerability of a Mediterranean forest ecosystem to climate change. The geographical distribution of the final Environmental Vulnerability Areas (EVAs) of the forest ecosystem was derived from the estimated final Environmental Vulnerability Index (EVI) values. This revealed that, under current levels of environmental degradation and current physical, geographical, policy-enforcement, and socioeconomic conditions, the area with a ‘very low’ vulnerability degree covered mainly the town, its surrounding settlements, and the agricultural lands located over the low, flat travertine plateau and the plains to the east and southeast of the district. The spatial magnitude of the EVAs over the forest ecosystem under current environmental degradation was also determined: the EVAs classed as ‘very low’ account for 21% of the total area of the forest ecosystem, those classed as ‘low’ for 36%, those classed as ‘medium’ for 20%, and those classed as ‘high’ for 24%. Based on regionally averaged future climate assessments and projected climate indicators, both the study site and the western Mediterranean sub-region of Turkey will probably become associated with a drier, hotter, more continental, and more water-deficient climate. This holds true for all future scenarios, with the exception of RCP4.5 for the period from 2015 to 2030. Moreover, the present dry-subhumid climate dominating this sub-region and the study area shows a potential to shift towards a drier, semiarid climate in the period between 2031 and 2050 according to the RCP8.5 high-emission scenario. All the observed and estimated results summarized in the study show clearly that the densest forest ecosystem in the southern part of the study site, characterized mainly by Mediterranean coniferous and some mixed forest and maquis vegetation, will very likely be subject to medium and high degrees of vulnerability to future environmental degradation, climate change, and variability.
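
Multifactorial vulnerability indices of the kind described above are typically built by normalizing each factor layer to a common scale, combining the layers with weights, and classing the result into vulnerability categories. The sketch below shows that aggregation step; the weights, class breaks, and the tiny example grids are hypothetical and are not the study's data or weighting scheme.

```python
# Minimal sketch of combining normalized factor layers into an Environmental
# Vulnerability Index (EVI) and classing it into vulnerability areas (EVAs).
# Weights, class breaks, and the example grids are hypothetical placeholders.
import numpy as np

def evi(layers, weights):
    """Weighted sum of normalized (0-1) factor rasters -> EVI raster."""
    weights = np.asarray(weights, dtype=float)
    stacked = np.stack(layers, axis=0)
    return np.tensordot(weights / weights.sum(), stacked, axes=1)

def classify(evi_raster, breaks=(0.25, 0.5, 0.75)):
    """Map EVI values to classes 0..3: very low, low, medium, high."""
    return np.digitize(evi_raster, bins=breaks)

# Hypothetical 3x3 normalized layers: degradation, climate exposure, socioeconomic pressure
degradation = np.array([[0.2, 0.4, 0.9], [0.1, 0.5, 0.8], [0.3, 0.6, 0.7]])
climate = np.full((3, 3), 0.6)
socio = np.array([[0.1, 0.2, 0.7], [0.2, 0.3, 0.9], [0.1, 0.4, 0.8]])

index = evi([degradation, climate, socio], weights=[0.4, 0.4, 0.2])
print(classify(index))
```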

Keywords: forest ecosystem, Mediterranean climate, RCP scenarios, vulnerability analysis

Procedia PDF Downloads 343
511 Modeling Acceptability of a Personalized and Contextualized Radio Embedded in Vehicles

Authors: Ludivine Gueho, Sylvain Fleury, Eric Jamet

Abstract:

Driver distraction is known to be a major contributing factor in car accidents. For many years, car manufacturers have been designing embedded technologies to address this problem and reduce distraction. Being able to predict user acceptance would further help in the development process to build appropriate systems. The present research aims at modeling the acceptability of a specific system, an innovative personalized and contextualized embedded radio, through an online survey of 202 people in France that assessed the psychological variables determining intentions to use the system. The questionnaire instantiated the dimensions of the extended version of the UTAUT acceptability model. Because of the specific features of the system assessed, we added four dimensions: perceived security, anxiety, trust, and privacy concerns. Results showed that hedonic motivation, i.e., the fun or pleasure derived from using a technology, and performance expectancy, i.e., the degree to which individuals believe that the characteristics of the system meet their needs, are the most important dimensions in determining behavioral intentions toward the innovative radio. To a lesser extent, social influence, i.e., the degree to which individuals think they can use the system while respecting their social group’s norms and while giving a positive image of themselves, had an effect on behavioral intentions. Moreover, trust, that is, the positive belief about the perceived reliability of, dependability of, and confidence in a person, object, or process, had a significant effect, mediated by performance expectancy. In applied terms, the present research reveals that, to be accepted, new in-car embedded technology has to address individual needs, for instance by facilitating the driving activity or by providing useful information. If it shows hedonic qualities by being entertaining, attractive, or comfortable, this may improve intentions to use it. Therefore, it is clearly important to include reflection about user experience in the design process. Finally, users have to be reassured about the system’s reliability; for example, improving the transparency of the system by providing information about how it works could improve trust. These results shed light on the determinants of acceptance of an in-vehicle technology and are useful for manufacturers designing acceptable systems.
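
The mediation pattern reported above (trust influencing behavioral intention via performance expectancy) can be approximated with two regressions, estimating the indirect effect as the product of the two path coefficients. The sketch below uses synthetic data and ordinary least squares; the actual study used structural equation modeling, so this is only an illustrative approximation, and the simulated coefficients are hypothetical.

```python
# Regression-based sketch of a trust -> performance expectancy -> intention mediation.
# Synthetic data; the study itself used structural equation modeling.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 202
trust = rng.normal(0, 1, n)
perf_expectancy = 0.6 * trust + rng.normal(0, 1, n)
intention = 0.7 * perf_expectancy + 0.1 * trust + rng.normal(0, 1, n)

def fit(y, *predictors):
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit()

path_a = fit(perf_expectancy, trust).params[1]   # trust -> performance expectancy
model_b = fit(intention, perf_expectancy, trust)
path_b = model_b.params[1]                        # performance expectancy -> intention
direct = model_b.params[2]                        # direct trust -> intention
print(f"indirect effect (a*b) = {path_a * path_b:.2f}, direct effect = {direct:.2f}")
```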

Keywords: acceptability, innovative embedded radio, structural equation, user-centric evaluation, UTAUT

Procedia PDF Downloads 259
510 Reasons to Redesign: Teacher Education for a Brighter Tomorrow

Authors: Deborah L. Smith

Abstract:

To review our program and determine the best redesign options, department members gathered feedback and input through focus groups, analysis of data, and a review of the current research to ensure that the changes proposed were not based solely on the state’s new professional standards. In designing course assignments and assessments, we listened to a variety of constituents, including students, other institutions of higher learning, MDE webinars, host teachers, literacy clinic personnel, and other disciplinary experts. As a result, we are designing a program that is more inclusive of a variety of field experiences for growth. We have determined ways to improve our program by connecting academic disciplinary knowledge, educational psychology, and community building both inside and outside the classroom for professional learning communities. The state’s release of new professional standards led department members to question what is working and what needs improvement in our program. One aspect of our program that continues to be supported by research and data analysis is the function of supervised field experiences with meaningful feedback. We seek to expand in this area. Other data indicate that we have strengths in modeling a variety of approaches such as cooperative learning, discussions, literacy strategies, and workshops. In the new program, field assignments will be connected to multiple courses, and efforts to scaffold student learning to guide them toward best evidence-based practices will be continuous. Despite running a program that meets multiple sets of standards, there are areas of need that we directly address in our redesign proposal. Technology is ever-changing, so it’s inevitable that improving digital skills is a focus. In addition, scaffolding procedures for English Language Learners (ELL) or other students who struggle are imperative. Diversity, equity, and inclusion (DEI) has been an integral part of our curriculum, but the research indicates that more self-reflection and a deeper understanding of culturally relevant practices would help the program improve. Connections with professional learning communities will be expanded, as will leadership components, so that teacher candidates understand their role in changing the face of education. A pilot program will run in academic year 22/23, and additional data will be collected each semester through evaluations and continued program review.

Keywords: DEI, field experiences, program redesign, teacher preparation

Procedia PDF Downloads 154
509 Delegation or Assignment: Registered Nurses’ Ambiguity in Interpreting Their Scope of Practice in Long Term Care Settings

Authors: D. Mulligan, D. Casey

Abstract:

Introductory Statement: Delegation is when a registered nurse (RN) transfers a task or activity that is normally within their scope of practice to another person (the delegatee). RN delegation to unregistered staff, e.g., student nurses and health care assistants (HCAs), is common practice. As the role of the HCA becomes increasingly embedded as a direct care and support role, especially in long-term residential care for older adults, there is RN uncertainty about their role as a delegator. Assignment is when a task is transferred to a person for whom that task is within their own role specification. RNs in long-term care (LTC) for older people are increasingly working in teams where there are fewer RNs and more HCAs providing direct care to residents. The RN is responsible and accountable for their decision to delegate and assign tasks to HCAs. In an interpretive multiple case study exploring how delegation of tasks by RNs to HCAs occurred in long-term care settings in Ireland, the importance of RNs understanding their scope of practice emerged. Methodology: Focus group interviews and individual interviews were undertaken as part of the multiple case study. Both cases, anonymized as Case A and Case B, were within the public health service in Ireland. The case study sites were long-term care settings for older adults located in different social care divisions and in different geographical areas. Four focus group interviews with staff nurses and three individual interviews with CNMs were undertaken. The interactive data analysis approach was the analytical framework used, with within-case and cross-case analysis. The theoretical lens of organizational role theory, applying the role episode model (REM), was used to understand, interpret, and explain the findings. Study Findings: RNs and CNMs understood the role of the nurse regulator and the scope of practice. RNs understood that the RN was accountable for the care and support provided to residents. However, RNs and CNM2s could not describe delegation in the context of their scope of practice. In both cases, the RNs did not have a standardized process for assessing HCA competence to undertake nursing tasks or interventions, and RNs did not routinely supervise HCAs. Tasks were assigned rather than delegated. There were differences between the cases in relation to understanding which nursing tasks required delegation: HCAs in Case A undertook clinical vital sign assessments and documentation, whereas HCAs in Case B did not routinely undertake these activities. Delegation and assignment were influenced by organizational factors, e.g., the model of care, the absence of delegation policies, inadequate RN education on delegation, and a lack of RN and HCA role clarity. Concluding Statement: Nurse staffing levels and skill mix in long-term care settings continue to change, with more HCAs providing more direct care and support. With decreasing RN staffing levels, RNs will be required to delegate and assign more direct care to HCAs. There is a requirement to distinguish between RN assignment and delegation at policy, regulation, and organizational levels.

Keywords: assignment, delegation, registered nurse, scope of practice

Procedia PDF Downloads 143
508 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to obtain its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mold. The sand flow is modelled with the Discrete Element Method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, rolling friction coefficient, sliding friction coefficient, and coefficient of restitution (COR). The Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin contact model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. Once the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also determined. Here the sliding coefficient will be found from experiments, and the rolling resistance is investigated by comparing observations of how the green sand interacts with the chamber wall during experiments with the DEM simulations, which will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the found parameter values are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
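
The calibration idea of matching simulated and measured sand-pile surface profiles for different friction coefficients can be framed as a simple grid search over parameter pairs, keeping the pair with the lowest profile error. In the sketch below, `run_dem_pile_test` is a hypothetical stand-in for an actual DEM pile-formation run, and the fake profile it returns is only there to make the example self-contained.

```python
# Sketch of sweeping rolling and sliding friction coefficients, comparing the
# simulated sand-pile surface profile to a measured one, and keeping the best pair.
# `run_dem_pile_test` is a hypothetical placeholder for a real DEM simulation.
import numpy as np

def run_dem_pile_test(rolling_friction, sliding_friction):
    """Placeholder for a DEM pile simulation returning a surface profile (heights)."""
    x = np.linspace(-0.1, 0.1, 50)
    angle = 0.5 + 0.8 * rolling_friction + 0.4 * sliding_friction  # fake repose behavior
    return np.maximum(0.0, 0.05 - np.abs(x) * np.tan(angle))

def profile_rmse(simulated, measured):
    return float(np.sqrt(np.mean((simulated - measured) ** 2)))

measured_profile = run_dem_pile_test(0.30, 0.45)  # stand-in for the scanned "real" sand pile

best = min(
    ((mu_r, mu_s, profile_rmse(run_dem_pile_test(mu_r, mu_s), measured_profile))
     for mu_r in np.arange(0.1, 0.6, 0.1)
     for mu_s in np.arange(0.2, 0.7, 0.1)),
    key=lambda t: t[2],
)
print(f"best rolling friction={best[0]:.1f}, sliding friction={best[1]:.1f}, RMSE={best[2]:.4f}")
```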

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 471
507 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review

Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon

Abstract:

The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions as they relate to quality assessment and climate effects. An empirical modeling and mapping methodology was adopted for understanding lake hydrodynamics and for visualizing long-term observational daily, monthly, and yearly mean datasets using geographic information system (GIS) and Comsol techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce the results. Linear regression analyses were developed to build climate scenarios and to derive a linear trend in lake rainfall data over a long period. Potential evapotranspiration rates were described using MODIS data and the Thornthwaite method. The effect of rainfall on the lake water level was examined using partial differential equations (PDEs), and water quality was characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures; rainfall is high during cool years, while high temperatures are associated with below-average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more strongly correlated with temperature, and cloud cover is more strongly correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall over the lake surface has affected the lake level. Onshore and offshore nutrient concentrations were summarized from initial literature data. The study recommends that further work should consider full development of the lake bathymetry with flow analysis and its water balance, hydro-meteorological processes, solute transport, wind-driven hydrodynamics, pollution, and eutrophication, as these are crucial for lake water quality, climate impact assessment, and water sustainability.
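
The Thornthwaite method mentioned above estimates monthly potential evapotranspiration from monthly mean air temperature via an annual heat index. The sketch below implements the uncorrected form of that calculation (the day-length/month-length correction factor is omitted); the example temperatures are illustrative, not station data from the Lake Victoria basin.

```python
# Minimal sketch of the Thornthwaite monthly PET calculation (uncorrected form).
# Example temperatures are illustrative placeholders, not LV basin station data.
def thornthwaite_pet(monthly_temps_c):
    """Return uncorrected monthly potential evapotranspiration in mm/month."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a for t in monthly_temps_c]

# Illustrative monthly mean temperatures (deg C) for a warm tropical station
temps = [23.0, 23.5, 23.8, 23.2, 22.6, 21.9, 21.5, 22.0, 22.8, 23.1, 23.0, 22.9]
print([round(pet, 1) for pet in thornthwaite_pet(temps)])
```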

Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration

Procedia PDF Downloads 80