Search results for: Finite Element Modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7068

558 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase Photovoltaic Power System Control and Modeling

Authors: Syed Masood Hussain

Abstract:

An effective control method, including system-level control and pulse width modulation, is proposed for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. PV power generation has recently seen a surge of interest, since PV systems convert solar radiation directly into electric power without harming the environment. However, the stochastic fluctuation of solar power is at odds with the stable power desired at the grid, owing to variations in solar irradiation and temperature. To fully exploit the solar resource, extracting the PV panels' maximum power and feeding it into the grid at unity power factor are of prime importance, and the cascade multilevel inverter (CMI) has contributed to this goal. Nevertheless, the conventional H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating must be doubled for a PV voltage range of 1:2, and differing PV panel output voltages result in imbalanced dc-link voltages. If each HBI module is instead built as a two-stage inverter, the many extra dc-dc converters not only increase the complexity and cost of the power circuit and its control but also decrease efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems have been proposed; they combine the advantages of the traditional CMI and of Z-source topologies. To operate the ZS/qZS-CMI properly, power injection, independent control of the dc-link voltages, and pulse width modulation (PWM) are necessary.
The main contributions of this paper are: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; and 2) a grid-connected control for the qZS-CMI based PV system, in which all PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control.
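
The abstract does not specify which MPPT algorithm the independent per-panel trackers use. As a minimal sketch, a perturb-and-observe (P&O) update, one common MPPT algorithm assumed here purely for illustration, can be written as:

```python
def perturb_and_observe(v, p, v_prev, p_prev, v_ref, step=0.5):
    """One P&O iteration: nudge the panel voltage reference toward
    higher power. v, p are the current measurements; v_prev, p_prev
    the previous ones; step is the perturbation size (illustrative)."""
    if p - p_prev == 0:
        return v_ref                  # no change in power: hold
    if (p - p_prev) * (v - v_prev) > 0:
        return v_ref + step           # power rose with voltage: keep climbing
    return v_ref - step               # power fell: reverse direction
```

A larger step tracks irradiance changes faster but oscillates more around the maximum power point; a dual-loop controller, as in the paper, would then regulate the dc-link peak voltage around this reference.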

Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter

Procedia PDF Downloads 527
557 Mathematical Modeling on Capturing of Magnetic Nanoparticles in an Implant Assisted Channel for Magnetic Drug Targeting

Authors: Shashi Sharma, V. K. Katiyar, Uaday Singh

Abstract:

The ability to manipulate magnetic particles in fluid flows by means of inhomogeneous magnetic fields is used in a wide range of biomedical applications, including magnetic drug targeting (MDT). In MDT, magnetic drug carrier particles (MDCPs), bound with drug molecules, are injected into the vascular system upstream of the malignant tissue and attracted or retained at a specific region of the body with the help of an external magnetic field. Although the concept of MDT has been around for many years, widespread acceptance of the technique is still lacking, despite its promise in both in vivo and clinical studies, because traditional MDT has some inherent limitations. Typically, the magnetic force is not very strong, and it is also very short-ranged. Since the magnetic force must overcome rather large hydrodynamic forces in the body, MDT applications have been limited to sites close to the surface of the skin, and even in this most favorable situation, studies have shown that it is difficult to collect appreciable amounts of the MDCPs at the target site. To overcome these limitations, Ritter and co-workers proposed implant-assisted magnetic drug targeting (IA-MDT), in which magnetic implants are placed strategically at the target site to greatly and locally increase the magnetic force on the MDCPs and help attract and retain them at the targeted region. In the present work, we develop a mathematical model of the capture of magnetic nanoparticles flowing in a fluid through an implant-assisted cylindrical channel under a magnetic field. A coil of ferromagnetic SS 430 is implanted inside the cylindrical channel to enhance the capture of magnetic nanoparticles under the field. The dominant magnetic and drag forces, which significantly affect particle capture, are incorporated in the model.
The model results show that capture efficiency increases from 23% to 51% as the magnetic field is increased from 0.1 to 0.5 T. This is because a stronger field increases the attractive magnetization force responsible for capturing the particles, so a larger number of magnetic particles are captured.
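
The trend can be illustrated with a simple force balance, not the authors' full model: equating the magnetization body force on a particle (below saturation) with Stokes drag gives a migration speed toward the implant that grows with field strength. All parameter values below (susceptibility, field gradient, particle radius, viscosity) are illustrative assumptions:

```python
import math

def magnetophoretic_velocity(B, gradB, radius, chi=1.0, mu=1e-3):
    """Terminal migration speed from balancing the magnetic body force,
    V * chi * B * dB/dx / mu0, against Stokes drag, 6*pi*mu*R*v.
    chi (susceptibility) and mu (fluid viscosity, Pa*s) are assumptions."""
    mu0 = 4e-7 * math.pi                      # vacuum permeability
    V = (4.0 / 3.0) * math.pi * radius ** 3   # particle volume
    f_mag = V * chi * B * gradB / mu0         # magnetization force
    return f_mag / (6.0 * math.pi * mu * radius)
```

At a fixed field gradient the migration speed scales linearly with B, consistent with the reported increase in capture efficiency at higher fields.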

Keywords: capture efficiency, implant assisted-magnetic drug targeting (IA-MDT), magnetic nanoparticles, modelling

Procedia PDF Downloads 444
556 The Influence of Infiltration and Exfiltration Processes on Maximum Wave Run-Up: A Field Study on Trinidad Beaches

Authors: Shani Brathwaite, Deborah Villarroel-Lamb

Abstract:

Wave run-up may be defined as the time-varying position of the landward extent of the water's edge, measured vertically from the mean water level. The hydrodynamics of the swash zone, and the accurate prediction of maximum wave run-up, play a critical role in coastal engineering: an understanding of these processes is necessary for modeling sediment transport and beach recovery and for the design and maintenance of coastal engineering structures. However, owing to the complex nature of the swash zone, detailed knowledge in this area is still lacking. In particular, bed porosity, and hence infiltration/exfiltration processes, have received insufficient consideration in the development of wave run-up models. Theoretically, there should be an inverse relationship between maximum wave run-up and beach porosity: the greater the infiltration during an event, associated with a larger bed porosity, the lower the maximum wave run-up. Additionally, most models have been developed using data collected on North American or Australian beaches and may have limitations when used for operational forecasting in Trinidad. This paper aims to assess the influence and significance of infiltration and exfiltration processes on wave run-up magnitudes within the swash zone, paying particular attention to how well various empirical formulae predict maximum run-up on contrasting beaches in Trinidad. Traditional surveying techniques will be used to collect wave run-up and cross-sectional data on various beaches; wave data will come from wave gauges and wave models, and porosity measurements will be collected using a double-ring infiltrometer. The relationship between maximum wave run-up and the different physical parameters will be investigated using correlation analyses.
These physical parameters comprise wave and beach characteristics such as wave height, wave direction, wave period, beach slope, magnitude of wave setup, and beach porosity. Existing parameterizations of maximum wave run-up use differing parameters and do not always have good predictive capability. This study seeks to improve the formulation of wave run-up by using the aforementioned parameters to generate a formulation with a special focus on the influence of infiltration/exfiltration processes, thereby contributing to better prediction of sediment transport and beach recovery and to the design of coastal engineering structures in Trinidad.
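
As an example of the empirical formulae to be evaluated, the widely used Stockdon et al. (2006) parameterization estimates the 2% exceedance run-up from deep-water wave height, period, and foreshore slope; notably, it contains no porosity term, which is precisely the gap this study targets. A sketch, under the assumption that the standard coefficients apply:

```python
import math

def stockdon_runup(H0, T, beta):
    """R2% run-up (Stockdon et al., 2006) from deep-water significant
    wave height H0 [m], peak period T [s], and foreshore slope beta [-]."""
    L0 = 9.81 * T ** 2 / (2.0 * math.pi)      # deep-water wavelength
    setup = 0.35 * beta * math.sqrt(H0 * L0)  # wave setup component
    swash = math.sqrt(H0 * L0 * (0.563 * beta ** 2 + 0.004)) / 2.0
    return 1.1 * (setup + swash)
```

For H0 = 1.5 m, T = 8 s, and a slope of 0.1, this predicts roughly 1.1 m of run-up; the field data collected here would test whether such porosity-free formulae over-predict run-up on highly permeable Trinidad beaches.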

Keywords: beach porosity, empirical models, infiltration, swash, wave run-up

Procedia PDF Downloads 335
555 The Development of the Kamakhya Temple as a Historical Landmark in the Present State of Assam, India

Authors: Priyanka Tamta, Sukanya Sharma

Abstract:

The Kamakhya Temple plays a very important role in the development of Assam, as both a historical place and an archaeologically important site. Temple building on the site began in the 5th century AD, when a cave temple dedicated to Lord Balabhadraswami was constructed by King Maharajadhiraja Sri Surendra Varman. Neither the name of this king nor this form of Vishnu is otherwise known in the history of the region, but the inscription sanctified the place, recording the first temple-building activity in the region. The fifteen-hundred-year habitation history of the Kamakhya temple site shows a gradual progression from religious site to archaeological site and finally to historical landmark. In this paper, our main objective is to understand the evolution of the Kamakhya temple site as a historical landscape and as an important landmark in the history of Assam; the central theme is the gradual development of the religious site into a historical landmark. Epigraphical records show that the site received patronage from all ruling dynasties of Assam and its adjoining regions. Royal households of Kashmir, Nepal, Bengal, Orissa, Bihar, and elsewhere left their footprints on the site: according to the records, they donated wealth, constructed or renovated temples, and participated in the overall maintenance of the deity. This made the Kamakhya temple a ground of interaction between the faiths, communities, and royalties of the region. Since the 5th century AD there was a continuous struggle between different beliefs, faiths, and powers to become the dominant authority of the site; in the process, powerful belief systems subsumed minor ones into a larger doctrine. This can be seen in the evolution of the Kamakhya temple site into one of the most important Shakta temples in India.
Today it is a cultural identity marker of the state of Assam, its diverse faiths and beliefs appropriated through powerful legends into the dominant faith of the land. The temple has evolved from a cave temple into a complex of seventeen temples, and the faith has evolved from the worship of water, an element of nature, to the worship of ten different forms of the goddess with their five male consorts, or Bhairavas. Today it represents and symbolizes the relationship of power and control out of which it emerged. During different periods of occupation, certain architectural and iconographic characteristics developed that indicate diffusion and cultural adaptation. Using these, together with the epigraphical records, this paper analyzes the interactive and dynamic processes that operated in the building of this cultural marker, the archaeological site of Kamakhya.

Keywords: cultural adaptation and diffusion, cultural and historical landscape, Kamakhya, Saktism, temple art and architecture, historiography

Procedia PDF Downloads 230
554 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.
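
As a toy illustration of the optimization underlying the statistical modeling surveyed above, ordinary least squares fits a line by minimizing the sum of squared residuals; the closed-form solution takes only a few lines. This sketch is a generic example, not drawn from the study itself:

```python
def ols_fit(xs, ys):
    """Simple linear regression in closed form: the slope and intercept
    that minimize the sum of squared residuals, the optimization problem
    at the heart of many statistical models."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)            # variance term
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance term
    slope = sxy / sxx
    return my - slope * mx, slope                    # intercept, slope
```

The same minimize-a-loss pattern scales from this two-parameter model up to the machine learning algorithms the abstract discusses.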

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 70
553 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from a Ferroalloy Plant

Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen

Abstract:

Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of carbon and hydrogen arranged in aromatic rings. PAH are neutral, non-polar molecules produced by the incomplete combustion of organic matter; these compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations on emissions to the external and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to considerably increase their efforts to reduce emissions. One objective of the ongoing "SCORE" project, supported by industry and the Research Council of Norway, is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and of the PAH itself. To achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized, and for this design approach reliable kinetic data for the individual PAH species, and/or groups thereof, are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and only a limited number of studies exist. This paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modelling approach, the oxidation is modelled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water.
A Chemical Reactor Network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in the evaluation of oxidation behavior of PAH under various conditions.
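
As a rough sketch of the residence time and temperature dependence studied here, first-order conversion in a single perfectly stirred reactor follows X = k*tau / (1 + k*tau) with an Arrhenius rate k = A*exp(-Ea/(R*T)). The pre-exponential factor and activation energy below are illustrative placeholders, not fitted PAH kinetics, and a real CRN would chain several such reactors with detailed chemistry:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def psr_conversion(T, tau, A=1e10, Ea=1.5e5):
    """First-order conversion in a perfectly stirred reactor:
    X = k*tau / (1 + k*tau), with Arrhenius rate k = A*exp(-Ea/(R*T)).
    T in K, residence time tau in s; A [1/s] and Ea [J/mol] are
    illustrative assumptions only."""
    k = A * math.exp(-Ea / (R_GAS * T))
    return k * tau / (1.0 + k * tau)
```

Even this surrogate reproduces the qualitative design trade-off: conversion rises steeply with temperature and, more gently, with residence time, which is why both must be optimized together when sizing the chamber.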

Keywords: PAH, PSR, energy recovery, ferroalloy furnace

Procedia PDF Downloads 249
552 Geographic Origin Determination of Greek Rice (Oryza Sativa L.) Using Stable Isotopic Ratio Analysis

Authors: Anna-Akrivi Thomatou, Anastasios Zotos, Eleni C. Mazarakioti, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas

Abstract:

It is well known that accurate determination of geographic origin, to confront the mislabeling and adulteration of foods, is a critical issue worldwide, not only for consumers but also for producers and industry. Among agricultural products, rice (Oryza sativa L.) is the world's third largest crop, providing food for more than half of the world's population; consequently, the quality and safety of rice products play an important role in people's lives and health. Although rice is predominantly produced in Asian countries, rice cultivation in Greece is of significant importance, contributing to national agricultural income: more than 25,000 acres are cultivated, and Greek rice exports constitute 0.5% of the global rice trade. Although several techniques are available to provide information about the geographical origin of rice, little data exist regarding their ability to discriminate rice produced in Greece. Thus, the aim of this study is the comparative evaluation of the stable isotope ratio methodology for the geographical origin determination of rice samples produced in Greece versus three Asian countries, namely Korea, China, and the Philippines. In total, eighty (80) samples were collected from selected fields of Central Macedonia (Greece) during October 2021. The light-element (C, N, S) isotope ratios were measured using isotope ratio mass spectrometry (IRMS), and the results were analyzed using chemometric techniques, including principal component analysis (PCA). Results indicated that the δ15N and δ34S values of rice produced in Greece were more markedly influenced by geographical origin than the δ13C values. In particular, the δ34S value of rice originating from Greece was -1.98 ± 1.71, compared to 2.10 ± 1.87, 4.41 ± 0.88, and 9.02 ± 0.75 for Korea, China, and the Philippines, respectively.
Among the stable isotope ratios studied, δ34S appears to be the most appropriate isotope marker for discriminating rice geographic origin between the studied areas. These results imply a significant capability of the stable isotope ratio methodology for effective geographical origin discrimination of rice, providing valuable insight for the control of improper or fraudulent labeling. Acknowledgement: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call "YPOERGO 3, code 2018SE01300000", project title: 'Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products'.
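
To illustrate how the reported δ34S means alone could separate origins, a nearest-centroid assignment on this single marker can be sketched. The actual study used chemometric techniques such as PCA over all three elements; this one-dimensional toy classifier only uses the group means quoted in the abstract:

```python
# Mean delta-34S values per origin, as reported in the abstract.
MEAN_D34S = {
    "Greece": -1.98,
    "Korea": 2.10,
    "China": 4.41,
    "Philippines": 9.02,
}

def classify_origin(d34s):
    """Assign a sample to the origin with the nearest mean delta-34S."""
    return min(MEAN_D34S, key=lambda c: abs(MEAN_D34S[c] - d34s))
```

Because the Greek mean sits well below the three Asian means (with standard deviations near 1-2), even this crude rule separates Greek samples cleanly, which is the practical point of the δ34S marker.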

Keywords: geographical origin, authenticity, rice, isotope ratio mass spectrometry

Procedia PDF Downloads 65
551 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of unmanned aerial systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments that, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage, after an initial free-fall phase in which the microgravity effect is generated, using computational fluid dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly; moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model comprises a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from the CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized with an unstructured mesh based on tetrahedral elements; the mesh is refined with an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors placed on the wing and on the whole vehicle track the variation of the coefficients during the simulation.
Employing a response surface methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a central composite design (CCD), design points (DP) are generated so that Cd and Fd can be estimated at each DP, and a 2nd-degree polynomial approximation yields the drag coefficient at every AoA. Using these values, the terminal speed at each position is calculated for the corresponding Cd, as is the distance required to reach terminal velocity at each AoA, so that the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be used for several missions, allowing repeatability of microgravity experiments.
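
The terminal-speed step follows from equating drag with weight, v_t = sqrt(2*m*g / (rho*Cd*A)). In the sketch below, Cd = 1.18 is the maximum value reported above, while the vehicle mass and reference area are illustrative assumptions, not figures from the study:

```python
import math

def terminal_speed(mass, Cd, area, rho=1.225, g=9.81):
    """Speed at which drag 0.5*rho*v^2*Cd*A balances weight m*g.
    mass [kg], area [m^2]; rho is sea-level air density."""
    return math.sqrt(2.0 * mass * g / (rho * Cd * area))
```

For an assumed 2 kg vehicle with a 0.5 m^2 reference area at Cd = 1.18, this gives a terminal speed of roughly 7 m/s; repeating the calculation with the polynomial-fitted Cd at each AoA yields the speed profile over the deceleration stage.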

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 153
550 Defining the Immersive Need Level for Optimal Adoption of Virtual Worlds with the BIM Methodology

Authors: Simone Balin, Cecilia M. Bolognesi, Paolo Borin

Abstract:

In the construction industry there is a large amount of data and interconnected information. To manage this information effectively, a transition to the immersive digitization of information processes is required; this transition is important for improving knowledge circulation, product quality, production sustainability, and user satisfaction. However, there is currently no common definition of immersion in the construction industry, leading to misunderstandings and limiting the use of advanced immersive technologies, and the lack of guidelines and of a common vocabulary causes interested actors to abandon the virtual world after the first collaborative steps. This research aims to define the optimal use of immersive technologies in the AEC sector, particularly for collaborative processes based on the BIM methodology. Additionally, the research focuses on creating classes and levels to structure and define guidelines and a vocabulary for the use of the "Immersive Need Level". This concept, matured through recent technological advancements, aims to enable a broader application of state-of-the-art immersive technologies while avoiding misunderstandings, redundancies, and paradoxes. While the concept of the "Informational Need Level" has been well clarified by the recent UNI EN 17412-1:2021 standard, when it comes to immersion, current regulations and literature provide only hints about the technology and related equipment, leaving the procedural approach unexplored and open to the user's free interpretation. Therefore, once the necessary knowledge and information are acquired (the Informational Need Level), it is possible to move to an Immersive Need Level that involves the practical application of the acquired knowledge, exploring scenarios and solutions more thoroughly and in more detail, with user involvement at different immersion scales, in the design, construction, or management process of a building or infrastructure.
The need for information constitutes the basis for acquiring relevant knowledge, while the immersive need can manifest itself later, once a solid information base has been established, engaging the senses and developing immersive awareness. This new approach could solve the problem of inertia among AEC industry players in adopting and experimenting with new immersive technologies, expanding collaborative iterations and the range of available options.

Keywords: AEC industry, immersive technology (IMT), virtual reality, augmented reality, building information modeling (BIM), decision making, collaborative process, information need level, immersive need level

Procedia PDF Downloads 63
549 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online from a mobile phone or computer. Smart agricultural fields and gardens are one of the best ways to optimize agricultural equipment, with a direct effect on the growth of plants and agricultural products. Smart farms, built on the Internet of Things (IoT) and artificial intelligence, are the topic discussed here. Agriculture is becoming smarter every day: from large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allow informed decisions about fertilizing farmland; moisture sensors and accurate irrigation controllers optimize irrigation while reducing the cost of water consumption; drones can apply pesticides precisely at the desired point; and automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs; these sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now securely manage existing devices at scale.
IoT cloud services also provide a set of application enablement services that can easily be used by developers, allowing them to focus on building the application's business logic. These developments have created powerful new Internet of Things applications that can be used in various industries, such as agriculture and the building of smart farms. But the question is, what makes today's farms truly smart farms? To put it another way: when will the technologies associated with smart farms reach the point where the intelligence they provide exceeds that of experienced and professional farmers?
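
The sensor-driven control loops described above can be sketched as a simple hysteresis rule for an irrigation pump, the kind of logic an Arduino-class device would run. The moisture thresholds here are illustrative assumptions, not values from the text:

```python
def irrigation_command(moisture, pump_on, low=30.0, high=45.0):
    """Hysteresis control for an irrigation pump: start below `low` %
    volumetric soil moisture, stop above `high` %, and otherwise keep
    the current state so the pump does not chatter near one threshold.
    Thresholds are illustrative placeholders."""
    if moisture < low:
        return True        # soil too dry: switch the pump on
    if moisture > high:
        return False       # soil wet enough: switch the pump off
    return pump_on         # inside the deadband: hold current state
```

The deadband between the two thresholds is what keeps a noisy, cheap moisture sensor from toggling the pump on and off many times a minute.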

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno

Procedia PDF Downloads 32
548 International Students into the Irish Higher Education System: Supporting the Transition

Authors: Tom Farrelly, Yvonne Kavanagh, Tony Murphy

Abstract:

The sharp rise in the number of international students coming to Ireland has presented colleges with opportunities but also challenges, at both the institutional and the individual lecturer level, and of course for the incoming students themselves. Previously, Ireland's higher education student population was largely homogeneous, drawn mostly from its own shores and thus reflecting the ethnic, cultural, and religious demographics of the day. Over the past twenty years, however, Ireland has witnessed considerable economic growth, downturn, and subsequent growth, all of which has changed the country both culturally and demographically; propelled by Ireland's economic success up to the late 2000s, one of the defining features of this change was an unprecedented rise in the number of migrants, both academic and economic. In 2013, Ireland's National Forum for the Enhancement of Teaching and Learning in Higher Education (hereafter the National Forum) invited proposals for inter-institutional collaborative projects aimed at different student groups transitioning into or out of higher education. Clearly, both as a country and as a higher education sector, we want incoming students to have a productive and enjoyable time in Ireland, and one way the sector can help students make a successful transition is by developing strategies and policies that are well informed and student driven. This abstract outlines the research undertaken by the five colleges of the Southern cluster (the Institutes of Technology of Carlow, Cork, Tralee, and Waterford, and University College Cork) aimed at helping international students transition into the Irish higher education system. The aim of the Southern cluster's project was to develop a series of online learning units that can be accessed by prospective international students before coming to Ireland, and by Irish-based lecturing staff.
However, in order to make the units as relevant and informed as possible, there was a strong research element to the project. As part of the Southern cluster's research strategy, a large-scale online survey using SurveyMonkey was undertaken across the five colleges' international student communities. In total, there were 573 responses from students from over twenty different countries. The results have provided some interesting insights into the way international students interact with and understand the Irish higher education system. The research and results will act as a model for consistent practice across institutional clusters, allowing institutions to minimise costs and focus on the unique aspects of transitioning international students into their own institution.

Keywords: digital, international, support, transitions

Procedia PDF Downloads 266
547 I, Me and the Bot: Forming a Theory of Symbolic Interactivity with a Chatbot

Authors: Felix Liedel

Abstract:

The rise of artificial intelligence has numerous and far-reaching consequences. In addition to the obvious consequences for entire professions, the increasing interaction with chatbots also has a wide range of social consequences and implications. We are already increasingly used to interacting with digital chatbots, be it in virtual consulting situations, creative development processes or even in building personal or intimate virtual relationships. A media-theoretical classification of these phenomena has so far been difficult, partly because the interactive element in the exchange with artificial intelligence has undeniable similarities to human-to-human communication but is not identical to it. The proposed study, therefore, aims to reformulate the concept of symbolic interaction in the tradition of George Herbert Mead as symbolic interactivity in communication with chatbots. In particular, Mead's socio-psychological considerations will be brought into dialog with the specific conditions of digital media, the special dispositive situation of chatbots and the characteristics of artificial intelligence. One example that illustrates this particular communication situation with chatbots is so-called consensus fiction: in face-to-face communication, we use symbols on the assumption that they will be interpreted in the same or a similar way by the other person. When briefing a chatbot, it quickly becomes clear that this is by no means the case: only the bot's response shows whether the initial request corresponds to the sender's actual intention. This makes it clear that chatbots do not just respond to requests. Rather, they function both as projection surfaces for their communication partners and as distillations of generalized social attitudes. The personalities of the chatbot avatars result, on the one hand, from the way we behave towards them and, on the other, from the content they have learned in advance. 
Similarly, we interpret the response behavior of the chatbots and make it the subject of our own actions with them. In conversation with the virtual chatbot, we enter into a dialog with ourselves but also with the content that the chatbot has previously learned. In our exchanges with chatbots, we, therefore, interpret socially influenced signs and behave towards them in an individual way according to the conditions that the medium deems acceptable. This leads to the emergence of situationally determined digital identities that are in exchange with the real self but are not identical to it: In conversation with digital chatbots, we bring our own impulses, which are brought into permanent negotiation with a generalized social attitude by the chatbot. This also leads to numerous media-ethical follow-up questions. The proposed approach is a continuation of my dissertation on moral decision-making in so-called interactive films. In this dissertation, I attempted to develop a concept of symbolic interactivity based on Mead. Current developments in artificial intelligence are now opening up new areas of application.

Keywords: artificial intelligence, chatbot, media theory, symbolic interactivity

Procedia PDF Downloads 30
546 Relative Importance of Different Mitochondrial Components in Maintaining the Barrier Integrity of Retinal Endothelial Cells: Implications for Vascular-associated Retinal Diseases

Authors: Shaimaa Eltanani, Thangal Yumnamcha, Ahmed S. Ibrahim

Abstract:

Purpose: Mitochondrial dysfunction is central to breaking the barrier integrity of retinal endothelial cells (RECs) in various blinding eye diseases such as diabetic retinopathy and retinopathy of prematurity. Therefore, we aimed to dissect the role of different mitochondrial components, specifically those of oxidative phosphorylation (OxPhos), in maintaining the barrier function of RECs. Methods: Electric cell-substrate impedance sensing (ECIS) technology was used to assess in real time the role of different mitochondrial components in the total impedance (Z) of human RECs (HRECs) and its components: the capacitance (C) and the total resistance (R). HRECs were treated with specific mitochondrial inhibitors that target different steps in OxPhos: rotenone for complex I, oligomycin for ATP synthase, and FCCP for uncoupling OxPhos. Furthermore, data were modeled to investigate the effects of these inhibitors on the three parameters that govern the total resistance of cells: cell-cell interactions (Rb), cell-matrix interactions (α), and cell membrane capacitance (Cm). Results: Rotenone (1 µM) produced the greatest reduction in Z, followed by FCCP (1 µM), whereas no reduction in Z was observed after treatment with oligomycin (1 µM). To pursue this further, we deconvoluted the effect of these inhibitors on Rb, α, and Cm. Firstly, rotenone (1 µM) completely abolished the resistance contribution of Rb, as Rb became zero immediately after the treatment. Secondly, FCCP (1 µM) eliminated the resistance contribution of Rb only after 2.5 hours and increased Cm without considerable effect on α. Lastly, oligomycin had the lowest impact among these inhibitors on Rb, which became similar to the control group by the end of the experiment, without noticeable effects on Cm or α. 
Conclusion: These results demonstrate differential roles for complex I, complex V, and the coupling of OxPhos in maintaining the barrier functionality of HRECs, with complex I being the most important component in regulating the barrier functionality and spreading behavior of HRECs. Such differences can be used in investigating gene expression, as well as in screening for selective agents that improve the functionality of complex I, as a therapeutic approach for treating REC-related retinal diseases.

Keywords: human retinal endothelial cells (hrecs), rotenone, oligomycin, fccp, oxidative phosphorylation, oxphos, capacitance, impedance, ecis modeling, rb resistance, α resistance, barrier integrity

Procedia PDF Downloads 79
545 The Relationship between Proximity to Sources of Industrial-Related Outdoor Air Pollution and Children Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010

Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk

Abstract:

Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has important industrial emissions to the air from the Industrial Heartland Alberta (IHA) to the northeast and the coal-fired power plants (CFPP) to the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data using descriptive, hypothesis testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3 per 1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in risk for children's asthma ED visits as distance increases around the CFPP area, and this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in risk for children's asthma ED visits as distance increases around the IHA area, and this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP, but not around the IHA, within the CMAE. 
These results are probably explained by the direction of air pollutant dispersion driven by the predominant and subdominant wind directions at each site. The use of different approaches to detect clusters of disease is valuable for gaining a better understanding of the presence, shape, direction and size of clusters of disease around pollution sources.
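The distance-direction effect described above can be illustrated with a small synthetic sketch (all rates, coefficients, and the sector definition below are hypothetical, and an ordinary least-squares fit on log rates stands in for the study's multivariable models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dissemination areas: distance (km) from a point source and bearing (degrees).
n = 200
distance = rng.uniform(1, 40, n)
bearing = rng.uniform(0, 360, n)

# Assume risk decays with distance except in a SE sector (around 125 degrees),
# where the decay is reversed -- mirroring the effect modification reported above.
se_sector = ((bearing > 95) & (bearing < 155)).astype(float)
log_rate = -2.0 - 0.05 * distance + 0.08 * distance * se_sector

children = rng.integers(50, 500, n)      # children per DA
expected = children * np.exp(log_rate)   # expected ED visit counts

# Fit log(rate) with a distance x sector interaction.
X = np.column_stack([np.ones(n), distance, distance * se_sector])
y = np.log(expected / children)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers approximately [-2.0, -0.05, 0.08]
```

A significant positive interaction term (here 0.08) is the kind of evidence that would indicate the distance effect reverses in a particular directional sector.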

Keywords: air pollution, asthma, disease cluster, industry

Procedia PDF Downloads 263
544 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow

Authors: Sudipto Sarkar, Anamika Paul

Abstract:

Pseudoplastic fluids (n < 1, where n is the power-law index) have great importance in the food, pharmaceutical and chemical process industries and therefore deserve close attention. Unfortunately, owing to their complex flow behavior, little research is available on them, even in the laminar flow regime. In the present work a practical problem is solved by numerical simulation, in which we attempt to control the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate lies on the centerline of the cylinder, at varying distances from it, in order to determine the critical gap-ratio. If the plate is placed inside this critical gap, the vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity of the flow, a is the side of the cylinder and ν is the maximum value of the kinematic viscosity of the fluid). Flow behavior has been studied for three different gap-ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for a fluid with three different flow behavior indices (n = 1, 0.8 and 0.5). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. For G/a = 2, the domain size is taken as 37.5a × 16a, with 316 × 208 grid points in the streamwise and flow-normal directions respectively, after a thorough grid-independence study. Fine, equal grid spacing is used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate. Away from the geometry, the meshes are unequal in size and stretched out. For the other gap-ratios, proportionate domain sizes and total grid points are used with a similar mesh distribution. 
A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower domain boundaries are used for the simulation. A no-slip wall boundary condition (u = v = 0) is applied on both the cylinder and the splitter plate surfaces. Discretized forms of the fully conservative 2-D unsteady Navier–Stokes equations are then solved by Ansys Fluent 14.5. The SIMPLE algorithm, formulated in the finite volume method and available as a default solver in Fluent, is selected for this purpose. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent's usefulness in academic research. A thorough analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that as the value of n decreases, the stretching of the shear layers also decreases, and these layers tend to roll up before reaching the plate. For flow with high pseudoplasticity (n = 0.5) the nature of the vortex shedding changes and the value of the critical gap-ratio decreases. These are notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
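Although the abstract gives no code, the pseudoplastic rheology it simulates follows the power-law (Ostwald–de Waele) model; the short sketch below (the consistency index K is a hypothetical value) shows how the apparent viscosity falls with shear rate for the three indices n studied:

```python
# Power-law (Ostwald-de Waele) model: mu = K * gamma_dot**(n - 1), where
# gamma_dot is the shear rate. For pseudoplastic fluids n < 1, so the
# apparent viscosity falls as the shear rate rises (shear thinning).

def apparent_viscosity(gamma_dot: float, K: float, n: float) -> float:
    """Apparent viscosity (Pa.s) of a power-law fluid at shear rate gamma_dot (1/s)."""
    return K * gamma_dot ** (n - 1.0)

K = 0.5  # consistency index (Pa.s^n), hypothetical value
for n in (1.0, 0.8, 0.5):  # the three flow behavior indices simulated above
    low = apparent_viscosity(10.0, K, n)
    high = apparent_viscosity(100.0, K, n)
    print(f"n={n}: mu(10/s)={low:.4f} Pa.s, mu(100/s)={high:.4f} Pa.s")
```

This shear-rate dependence is why the abstract defines Re with the maximum kinematic viscosity in the domain: for n < 1 the viscosity is largest where the shear rate is smallest.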

Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio

Procedia PDF Downloads 96
543 Numerical Validation of Liquid Nitrogen Phase Change in a Star-Shaped Ambient Vaporizer

Authors: Yusuf Yilmaz, Gamze Gediz Ilis

Abstract:

Nitrogen, which has a boiling point of −195.8°C at atmospheric pressure, is widely used in industry. Nitrogen used in industry must be transported to the plant area in liquid form. Ambient air vaporizers (AAV) are generally used for the vaporization of cryogenic liquids such as liquid nitrogen (LN2), liquid oxygen (LOX), liquefied natural gas (LNG), and liquid argon (LAR). An AAV is a bank of star-shaped finned-pipe vaporizers. The design, and in particular the shape of the fins, is one of the most important criteria for the performance of the vaporizer. In this study, the performance of an AAV working with liquid nitrogen was analyzed numerically for a star-shaped aluminum finned pipe. The numerical analysis is performed in order to determine the heat capacity of the vaporizer per meter of pipe length, so that the vaporizer capacity can be predicted for industrial applications. In order to validate the numerical solution, an experimental setup was constructed. The setup includes a liquid nitrogen tank at a pressure of 9 bar connected to the star-shaped aluminum finned-tube vaporizer. The inlet and outlet pressures and temperatures of the LN2 in the vaporizer are measured, and the mass flow rate of the LN2 is also recorded. The numerical solution is compared against these measured data, and the ambient conditions of the experiment are imposed as boundary conditions on the numerical model. The surface tension and contact angle have a significant effect on the boiling of liquid nitrogen, so an average heat transfer coefficient including both convective and nucleate boiling components should be obtained for saturated flow boiling of liquid nitrogen in the finned tube. The Fluent CFD module is used for the numerical solution. The turbulent k-ε model is adopted to simulate the liquid nitrogen flow, and the phase change is simulated using the evaporation-condensation approach with user-defined functions (UDF). 
The comparison of the numerical and experimental results is presented in this study, and the performance capacity of the star-shaped finned-pipe vaporizer is calculated. Based on this numerical analysis, the performance of the vaporizer per unit length can be predicted for industrial applications, and a suitable vaporizer pipe length can be determined for specific cases.
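As a rough back-of-envelope counterpart to the simulated capacity per metre, the energy balance a vaporizer must satisfy can be sketched as follows (the property values are approximate figures near atmospheric pressure, and the flow rate and per-metre capacity are purely hypothetical):

```python
# Energy needed to vaporize LN2 and superheat the gas toward ambient temperature:
#   Q = m_dot * (h_fg + cp_gas * (T_out - T_sat))
# Property values are rough 1-atm figures, for illustration only; at the tank
# pressure of 9 bar the latent heat would be somewhat lower.

H_FG = 199e3      # latent heat of vaporization of N2, J/kg (approx., 1 atm)
CP_GAS = 1040.0   # specific heat of gaseous N2, J/(kg.K) (approx.)
T_SAT = 77.4      # saturation temperature of N2 at 1 atm, K
T_OUT = 280.0     # target outlet gas temperature, K

def vaporizer_duty(m_dot: float) -> float:
    """Heat duty in watts for a mass flow m_dot (kg/s) of LN2."""
    return m_dot * (H_FG + CP_GAS * (T_OUT - T_SAT))

m_dot = 0.01       # kg/s, hypothetical flow
per_metre = 800.0  # W per metre of finned pipe, hypothetical capacity
duty = vaporizer_duty(m_dot)
print(f"duty = {duty/1e3:.1f} kW -> needs ~{duty/per_metre:.1f} m of pipe")
```

The CFD study in the abstract effectively computes the `per_metre` figure from first principles; once known, the required pipe length follows directly from such a balance.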

Keywords: liquid nitrogen, numerical modeling, two-phase flow, cryogenics

Procedia PDF Downloads 96
542 Music as Source Domain: A Cross-Linguistic Exploration of Conceptual Metaphors

Authors: Eleanor Sweeney, Chunyuan Di

Abstract:

The metaphors people use in everyday discourse do not arise randomly; rather, they develop from our physical experiences in our social and cultural environments. Conceptual Metaphor Theory (CMT) explains that through metaphor, we apply our embodied understanding of the physical world to non-material concepts to understand and express abstract concepts. Our most productive source domains derive from our embodied understanding and allow us to develop primary metaphors, and from primary metaphors, an elaborate, creative world of culturally constructed complex metaphors. Cognitive Linguistics researchers draw upon individual embodied experience for primary metaphors. Socioculturally embodied experience through music has long furnished linguistic expressions in diverse languages, as conceptual metaphors or everyday expressions.  Can a socially embodied experience function in the same way as an individually embodied experience in the creation of conceptual metaphors? The authors argue that since music is inherently social and embodied, musical experiences function as a richly motivated source domain. The focus of this study is socially embodied musical experience which is then reflected and expressed through metaphors. This cross-linguistic study explores music as a source domain for metaphors of social alignment in English, French, and Chinese. The authors explored two public discourse sites, Facebook and Linguée, in order to collect linguistic metaphors from three different languages. By conducting this cross-linguistic study, cross-cultural similarities and differences in metaphors for which music is the source domain can be examined. Different musical elements, such as melody, speed, rhythm and harmony, are analyzed for their possible metaphoric meanings of social alignment. 
Our findings suggest that the general metaphor cooperation is music is a productive metaphor with some subcases, and that correlated social behaviors can be metaphorically expressed with certain elements in music. For example, since performance is a subset of the category behavior, there is a natural mapping from performance in music to behavior in social settings: social alignment is musical performance. Musical performance entails a collective social expectation that exerts control over individual behavior.  When individual behavior does not align with the collective social expectation, music-related expressions are often used to express how the individual is violating social norms. Moreover, when individuals do align their behavior with social norms, similar musical expressions are used. Cooperation is a crucial social value in all cultures, indeed it is a key element of survival, and music provides a coherent, consistent, and rich source domain—one based upon a universal and definitive cultural practice.

Keywords: Chinese, Conceptual Metaphor Theory, cross-linguistic, culturally embodied experience, English, French, metaphor, music

Procedia PDF Downloads 146
541 Thermal Production of Hydroxyapatite from Bone By-Products of the Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One such source could be by-products from the meat industry, which contain a prominent quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bone, is a leading material applied in bone surgery and also in stomatology. It is biocompatible, bioactive and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, which was formed as waste in the deproteinization process of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcining in an electrically heated chamber kiln in an air atmosphere, in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the homogenized material was calcined at three different temperatures (750°C, 850°C and 950°C), holding the material at the maximum temperature for 3.0 hours. Bone sludge: Pork bones coming from the partition of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at 130-135°C under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the solution of protein-fat hydrolysate passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by X-ray diffraction (XRD). Hydroxyapatite was the only crystalline phase observed in all the calcining products, and the XRD investigation showed that the degree of crystallinity of the hydroxyapatite increased with calcining temperature. 
Conclusion: The analyses showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The research on bone-waste calcining at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcining products and that its degree of crystallinity increases with calcining temperature. The contents of calcium and phosphorus distinctly increased with calcining temperature, whereas the content of acid-soluble phosphorus decreased. This may be connected with the higher degree of crystallinity, and the more stable structure, of material obtained at higher temperatures. Acknowledgements: The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.
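As a quick consistency check on the reported elemental contents, the measured mass fractions can be converted to a molar Ca/P ratio and compared with stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2, whose ideal molar Ca/P ratio is 10/6 ≈ 1.67; a short worked example:

```python
# Convert the reported mass fractions (Ca ~28%, P ~12%) to a molar Ca/P ratio
# and compare with stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2 (Ca/P = 10/6).

M_CA = 40.078  # molar mass of calcium, g/mol
M_P = 30.974   # molar mass of phosphorus, g/mol

def molar_ca_p(ca_mass_pct: float, p_mass_pct: float) -> float:
    """Molar Ca/P ratio from mass percentages of Ca and P."""
    return (ca_mass_pct / M_CA) / (p_mass_pct / M_P)

measured = molar_ca_p(28.0, 12.0)
ideal = 10 / 6
print(f"measured Ca/P = {measured:.2f}, ideal = {ideal:.2f}")
# A ratio above 1.67 may hint at additional calcium-rich phases (e.g. CaO or
# carbonates), though within measurement uncertainty this is only suggestive.
```

The comparison is only indicative, since the reported percentages are averages, but it illustrates how the elemental data relate to the XRD finding that hydroxyapatite dominates the product.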

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 267
540 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of legal constraints and the intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also contains an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors: we investigate how generative design can be utilized to optimize building performance, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. 
Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.
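The generate-evaluate loop behind such a framework can be illustrated with a deliberately simplified sketch (the plot dimensions, legal limits, and scoring weights below are all hypothetical and are not the authors' actual constraints):

```python
import random

random.seed(1)

# Hypothetical legal constraints for an urban plot.
PLOT_W, PLOT_D = 30.0, 20.0  # plot size, m
SETBACK = 3.0                # minimum distance to plot boundary, m
MAX_HEIGHT = 21.0            # maximum building height, m
MAX_COVERAGE = 0.4           # maximum footprint / plot-area ratio

def generate_candidate():
    """Randomly propose a rectangular footprint and a height."""
    w = random.uniform(5, PLOT_W - 2 * SETBACK)
    d = random.uniform(5, PLOT_D - 2 * SETBACK)
    h = random.uniform(6, 30)
    return w, d, h

def is_legal(w, d, h):
    """Check the candidate against the height and coverage limits."""
    return h <= MAX_HEIGHT and (w * d) / (PLOT_W * PLOT_D) <= MAX_COVERAGE

def score(w, d, h):
    """Toy objective: maximize floor area, lightly penalize bulk (an aesthetics stand-in)."""
    floors = int(h // 3)
    return w * d * floors - 0.5 * h * max(w, d)

candidates = [generate_candidate() for _ in range(1000)]
legal = [c for c in candidates if is_legal(*c)]
best = max(legal, key=lambda c: score(*c))
print(f"{len(legal)} legal candidates; best footprint "
      f"{best[0]:.1f} x {best[1]:.1f} m, height {best[2]:.1f} m")
```

A production framework would replace the random proposal step with parametric geometry generation and the toy score with multi-criteria aesthetic, economic, and environmental evaluation, but the filter-then-rank structure is the same.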

Keywords: generative design, computational design, parametric design, algorithmic modeling

Procedia PDF Downloads 33
539 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico

Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada

Abstract:

In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. Landslides on the volcano have great potential to impact and damage human settlements and economic activities. People living along the lower valleys of Pico de Orizaba volcano are under continuous hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments; as a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created based on 1) the Stability Index MAPping (SINMAP) model using default geotechnical parameters and 2) the same model using the geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison using a histogram and a contingency table. 
Results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. The results also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
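The deterministic core of SINMAP, the infinite-slope factor of safety combined with a steady-state wetness index, can be sketched as follows (the formula follows the published SINMAP formulation; the parameter values are illustrative only, not the calibrated values from this study):

```python
import math

def wetness(a: float, slope_rad: float, r_over_t: float) -> float:
    """Topographic wetness w = min(R*a / (T*sin(theta)), 1) from SINMAP's
    steady-state hydrologic model; a is the specific catchment area (m)."""
    return min(r_over_t * a / math.sin(slope_rad), 1.0)

def factor_of_safety(slope_deg: float, a: float, c: float, phi_deg: float,
                     r_over_t: float, r_density: float = 0.5) -> float:
    """Infinite-slope factor of safety as used in SINMAP:
    FS = (C + cos(theta) * (1 - w*r) * tan(phi)) / sin(theta),
    with dimensionless cohesion C and water/soil density ratio r (~0.5)."""
    theta = math.radians(slope_deg)
    w = wetness(a, theta, r_over_t)
    return (c + math.cos(theta) * (1.0 - w * r_density) *
            math.tan(math.radians(phi_deg))) / math.sin(theta)

# Illustrative values: C = 0.25, friction angle 35 degrees, R/T = 0.002 /m.
for slope in (15, 30, 45):
    fs = factor_of_safety(slope, a=100.0, c=0.25, phi_deg=35.0, r_over_t=0.002)
    print(f"slope {slope} deg -> FS = {fs:.2f}")  # FS < 1 flags predicted instability
```

Replacing the default cohesion and friction-angle ranges with field-measured values for volcanic soils, as done in the second map of the study, changes FS cell by cell and hence the resulting susceptibility classes.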

Keywords: GIS, landslide, modeling, LOGISNET, SINMAP

Procedia PDF Downloads 288
538 Anti-Obesity Effects of Pteryxin in Peucedanum japonicum Thunb Leaves through Different Pathways of Adipogenesis In Vitro

Authors: Ruwani N. Nugara, Masashi Inafuku, Kensaku Takara, Hironori Iwasaki, Hirosuke Oku

Abstract:

Pteryxin from the partially purified hexane phase (HP) of Peucedanum japonicum Thunb (PJT) was identified as the active compound related to anti-obesity. Thus, in this study, we investigated the mechanisms related to its anti-obesity activity in vitro. The HP was fractionated, and the effect on the triglyceride (TG) content was evaluated in 3T3-L1 and HepG2 cells. Comprehensive spectroscopic analyses were used to identify the structure of the active compound. The dose-dependent effect of the active constituent on the TG content, and the gene expressions related to adipogenesis, fatty acid catabolism, energy expenditure, lipolysis and lipogenesis (20 μg/mL), were examined in vitro. Furthermore, a higher dosage of pteryxin (50 μg/mL) was tested against 20 μg/mL in 3T3-L1 adipocytes. The mRNA was subjected to a SOLiD next-generation sequencer, and the obtained data were analyzed by Ingenuity Pathway Analysis (IPA). The active constituent was identified as pteryxin, a known compound in PJT; however, its biological activities against obesity have not been reported previously. Pteryxin dose-dependently suppressed the TG content in both 3T3-L1 adipocytes and HepG2 hepatocytes (P < 0.05). Sterol regulatory element-binding protein-1c (SREBP-1c), fatty acid synthase (FASN), and acetyl-CoA carboxylase-1 (ACC1) were downregulated in pteryxin-treated adipocytes (by 18.0, 36.1 and 38.2%, respectively; P < 0.05) and hepatocytes (by 72.3, 62.9 and 38.8%, respectively; P < 0.05), indicating its suppressive effects on fatty acid synthesis. Hormone-sensitive lipase (HSL), a lipid-catabolizing gene, was upregulated (by 15.1%; P < 0.05) in pteryxin-treated adipocytes, suggesting improved lipolysis. Concordantly, the adipocyte size marker gene paternally expressed gene 1/mesoderm-specific transcript (MEST) was downregulated (by 42.8%; P < 0.05), further accelerating the lipolytic activity. 
The upward trend of uncoupling protein 2 (UCP2; by 77.5%; P < 0.05) reflected the improved energy expenditure due to pteryxin. The 50 μg/mL dosage of pteryxin completely suppressed PPARγ, MEST, SREBP-1c, HSL, adiponectin, fatty acid binding protein 4 (FABP4), and UCPs in 3T3-L1 adipocytes. The IPA suggested that pteryxin at 20 μg/mL and 50 μg/mL suppresses obesity through two different pathways, with the WNT signaling pathway playing a key role at the higher dose in the preadipocyte stage. Pteryxin in PJT thus plays a key role in regulating the lipid metabolism-related gene network and improving energy expenditure in vitro. The results therefore suggest pteryxin as a new natural compound for use as an anti-obesity drug in the pharmaceutical industry.

Keywords: obesity, peucedanum japonicum thunb, pteryxin, food science

Procedia PDF Downloads 433
537 Digital Literacy Transformation and Implications in Institutions of Higher Learning in Kenya

Authors: Emily Cherono Sawe, Elisha Ondieki Makori

Abstract:

Knowledge and digital economies have brought challenges and potential opportunities for universities to innovate and improve the quality of learning. Disruption technologies and information dynamics continue to transform and change the landscape in teaching, scholarship, and research activities across universities. Digital literacy is a fundamental and imperative element in higher education and training, as witnessed during the new norm. COVID-19 caused unprecedented disruption in universities, where teaching and learning depended on digital innovations and applications. Academic services and activities were provided online, including library information services. Information professionals were forced to adopt various digital platforms in order to provide information services to patrons. University libraries’ roles in fulfilling educational responsibilities continue to evolve in response to changes in pedagogy, technology, economy, society, policies, and strategies of parent institutions. Libraries are currently undergoing considerable transformational change as a result of the inclusion of a digital environment. Academic libraries have been at the forefront of providing online learning resources and online information services, as well as supporting students and staff to develop digital literacy skills via online courses, tutorials, and workshops. Digital literacy transformation and information staff are crucial elements reminiscent of the prioritization of skills and knowledge for lifelong learning. The purpose of this baseline research is to assess the implications of digital literacy transformation in institutions of higher learning in Kenya and share appropriate strategies to leverage and sustain teaching and research. 
Objectives include examining the leverage and preparedness of the digital literacy environment in streamlining learning in the universities, exploring and benchmarking imperative digital competence for information professionals, establishing the perception of information professionals towards digital literacy skills, and determining lessons, best practices, and strategies to accelerate digital literacy transformation for effective research and learning in the universities. The study will adopt a descriptive research design using questionnaires and document analysis as the instruments for data collection. The targeted population is librarians and information professionals, as well as academics in public and private universities teaching information literacy programmes. Data and information are to be collected through an online structured questionnaire and digital face-to-face interviews. Findings and results will provide promising lessons together with best practices and strategies to transform and change digital literacies in university libraries in Kenya.

Keywords: digital literacy, digital innovations, information professionals, librarians, higher education, university libraries, digital information literacy

Procedia PDF Downloads 68
536 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of each component. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS, and JavaScript code as a standalone package, which must be imported to integrate the component into an existing web interface. Integration is then followed by connecting the web component to web services that dynamically populate its content. Since web components are reusable off-the-shelf components, they must be equipped with a mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, a popular technique for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is a new extension of HTML called Service Type-checking Markup Language (STML), which adds type-checking support in HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which are used to populate the content of a web component's HTML elements. Although JSON has several data types (string, number, boolean, null, object, and array), STML supports only string, number, and boolean, because both objects and arrays are rendered as strings when populated into HTML elements. To define the expected data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number, or st-boolean for string, number, and boolean, respectively.
These STML annotations are written by the developer of a web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with that component. Two utilities have been written for developers using STML-based web components. The first performs automated type checking during the development phase: it uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The second is a Gulp-based command-line utility that removes the STML attributes before deployment, ensuring that STML-free web pages are delivered in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results confirm the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended into a complete HTML-only service testing suite, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
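The type-checking workflow described above can be sketched in a few lines. The parser and checker below are an illustrative reconstruction, not the authors' utility; in particular, the convention that each st-* attribute's value names the JSON field to validate is an assumption made for this sketch.

```python
import json
from html.parser import HTMLParser

# Expected Python types for each hypothetical STML attribute.
EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

class STMLParser(HTMLParser):
    """Collect (JSON field name -> expected type) bindings from st-* attributes."""
    def __init__(self):
        super().__init__()
        self.bindings = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for st_attr, py_type in EXPECTED.items():
            if st_attr in attrs:
                # Assumption: the attribute value names the JSON field to check.
                self.bindings[attrs[st_attr]] = py_type

def _matches(value, py_type):
    if py_type is bool:
        return isinstance(value, bool)
    # bool is a subclass of int in Python, so exclude it for st-number checks.
    return isinstance(value, py_type) and not isinstance(value, bool)

def check_response(stml_html, json_body):
    """Return a list of type errors for a JSON response against STML bindings."""
    parser = STMLParser()
    parser.feed(stml_html)
    data = json.loads(json_body)
    errors = []
    for field, py_type in parser.bindings.items():
        if not _matches(data.get(field), py_type):
            errors.append(f"{field}: expected {py_type}, "
                          f"got {type(data.get(field)).__name__}")
    return errors
```

A development-phase utility like the one described could run such a check against a live service and report any returned errors to the console.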

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 233
535 Erosion Influencing Factors Analysis: Case of Isser Watershed (North-West Algeria)

Authors: Chahrazed Salhi, Ayoub Zeroual, Yasmina Hamitouche

Abstract:

Soil water erosion poses a significant threat to watersheds in Algeria today. The degradation of storage capacity in large dams over the past two decades, primarily due to erosion, necessitates a comprehensive understanding of the factors that contribute to soil erosion. The Isser watershed, located in northwestern Algeria, faces additional challenges such as recurrent droughts and the presence of delicate marl and clay outcrops, which amplify its susceptibility to water erosion. This study employs Geographic Information Systems (GIS) and Remote Sensing (RS), in conjunction with the Canonical Correlation Analysis (CCA) method and the Soil and Water Assessment Tool (SWAT) model, to predict specific erosion patterns and analyze the key factors influencing erosion in the Isser basin. To accomplish this, an array of data sources including rainfall, climatic, hydrometric, land use, soil, digital elevation, and satellite data was utilized. The application of the SWAT model to the Isser basin yielded an average annual soil loss of approximately 16 t/ha/year. Particularly high erosion rates, exceeding 12 t/ha/year, were observed in the central and southern parts of the basin, encompassing 41% of the total basin area. Canonical Correlation Analysis determined that vegetation cover and topography exerted the most substantial influence on erosion. The study thus identified significant and spatially heterogeneous erosion throughout the study area: the impact of land topography on soil loss was directly proportional, while vegetation cover exhibited an inversely proportional relationship. Modeling specific erosion for the Ladrat dam sub-basin estimated a rate of around 39 t/ha/year, consistent with the recorded capacity loss of 17.80% relative to the bathymetric survey conducted in 2019.
The findings of this research provide valuable decision-support tools for soil conservation managers, empowering them to make informed decisions regarding soil conservation measures.
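As a rough illustration of the canonical-correlation step, the sketch below computes the first canonical correlation between an erosion variable and two candidate factors (slope and vegetation cover). The data are synthetic and the QR/SVD implementation is a generic textbook approach, not the study's Isser basin dataset or tooling.

```python
import numpy as np

def cca_first_correlation(X, Y):
    """First canonical correlation between the column sets X and Y.

    Canonical correlations are the singular values of Qx^T Qy, where Qx and
    Qy are orthonormal bases of the centered column spaces of X and Y.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]

rng = np.random.default_rng(0)
slope = rng.uniform(0.0, 30.0, 200)   # terrain slope, degrees (synthetic)
veg = rng.uniform(0.0, 1.0, 200)      # vegetation cover fraction (synthetic)
# Erosion rises with slope and falls with vegetation cover, plus noise,
# mirroring the direct/inverse relationships reported above.
erosion = 0.8 * slope - 15.0 * veg + rng.normal(0.0, 2.0, 200)

r = cca_first_correlation(np.column_stack([slope, veg]), erosion[:, None])
```

With strongly related synthetic variables the first canonical correlation comes out close to 1, which is the kind of signal the study attributes to topography and vegetation cover.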

Keywords: Isser watershed, RS, CCA, SWAT, vegetation cover, topography

Procedia PDF Downloads 46
534 Impacts of Present and Future Climate Variability on Forest Ecosystem in Mediterranean Region

Authors: Orkan Ozcan, Nebiye Musaoglu, Murat Turkes

Abstract:

Climate change is largely recognized as one of the real, pressing and significant global problems. The concept of ‘climate change vulnerability’ helps us to better comprehend the cause/effect relationships behind climate change and its impact on human societies, socioeconomic sectors, physiographical and ecological systems. In this study, multifactorial spatial modeling was applied to evaluate the vulnerability of a Mediterranean forest ecosystem to climate change. As a result, the geographical distribution of the final Environmental Vulnerability Areas (EVAs) of the forest ecosystem is based on the estimated final Environmental Vulnerability Index (EVI) values. This revealed that at current levels of environmental degradation, physical, geographical, policy enforcement and socioeconomic conditions, the area with a ‘very low’ vulnerability degree covered mainly the town, its surrounding settlements and the agricultural lands found mainly over the low and flat travertine plateau and the plains at the east and southeast of the district. The spatial magnitude of the EVAs over the forest ecosystem under the current environmental degradation was also determined. This revealed that the EVAs classed as ‘very low’ account for 21% of the total area of the forest ecosystem, those classed as ‘low’ account for 36%, those classed as ‘medium’ account for 20%, and those classed as ‘high’ account for 24%. Based on regionally averaged future climate assessments and projected future climate indicators, both the study site and the western Mediterranean sub-region of Turkey will probably become associated with a drier, hotter, more continental and more water-deficient climate. This analysis holds true for all future scenarios, with the exception of RCP4.5 for the period from 2015 to 2030. 
However, the dry-subhumid climate currently dominating this sub-region and the study area shows a potential to shift towards a drier, semiarid climate in the period between 2031 and 2050 under the RCP8.5 high-emission scenario. All the observed and estimated results and assessments summarized in the study show clearly that the densest forest ecosystem in the southern part of the study site, characterized mainly by Mediterranean coniferous forest, some mixed forest, and maquis vegetation, will very likely be subject to medium and high degrees of vulnerability to future environmental degradation, climate change, and variability.
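The aggregation from indicator layers into an Environmental Vulnerability Index (EVI) and then into Environmental Vulnerability Area (EVA) classes can be sketched as below. The equal weights and class cut points are hypothetical placeholders, not the weights or thresholds used in the study.

```python
def evi(indicators, weights):
    """Weighted mean of indicator scores, each normalized to [0, 1]."""
    return sum(i * w for i, w in zip(indicators, weights)) / sum(weights)

def eva_class(score):
    """Bin an EVI score into a vulnerability class (hypothetical cut points)."""
    if score < 0.25:
        return "very low"
    if score < 0.5:
        return "low"
    if score < 0.75:
        return "medium"
    return "high"

# One map cell: degradation, physical/geographical, policy-enforcement,
# and socioeconomic scores (illustrative values).
cell = evi([0.2, 0.4, 0.6, 0.3], weights=[1, 1, 1, 1])
```

Applying such a classification cell by cell over the study area is what yields the reported percentage breakdown of EVAs (21% very low, 36% low, 20% medium, 24% high).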

Keywords: forest ecosystem, Mediterranean climate, RCP scenarios, vulnerability analysis

Procedia PDF Downloads 335
533 Modeling Acceptability of a Personalized and Contextualized Radio Embedded in Vehicles

Authors: Ludivine Gueho, Sylvain Fleury, Eric Jamet

Abstract:

Driver distraction is known to be a major contributing factor in car accidents. For many years, car manufacturers have been designing embedded technologies to address this problem and reduce distraction. Being able to predict user acceptance would further be helpful in the development process of building appropriate systems. The present research aims at modelling the acceptability of a specific system, an innovative personalized and contextualized embedded radio, through an online survey of 202 people in France that assessed the psychological variables determining intentions to use the system. The questionnaire instantiated the dimensions of the extended UTAUT acceptability model. Because of the specific features of the system assessed, we added four dimensions: perceived security, anxiety, trust, and privacy concerns. Results showed that hedonic motivation, i.e., the fun or pleasure derived from using a technology, and performance expectancy, i.e., the degree to which individuals believe that the characteristics of the system meet their needs, are the most important dimensions in determining behavioral intentions towards the innovative radio. To a lesser extent, social influence, i.e., the degree to which individuals think they can use the system while respecting their social group's norms and giving a positive image of themselves, had an effect on behavioral intentions. Moreover, trust, that is, the positive belief in the perceived reliability of, dependability of, and confidence in a person, object, or process, had a significant effect, mediated by performance expectancy. From an applied perspective, the present research reveals that, to be accepted, new in-car embedded technology has to address individual needs, for instance by facilitating the driving activity or by providing useful information. If it shows hedonic qualities by being entertaining, attractive, or comfortable, this may improve intentions to use it.
Therefore, it is clearly important to include reflection about user experience in the design process. Finally, users have to be reassured about the system's reliability. For example, improving the transparency of the system by providing information about how it functions could improve trust. These results shed light on the determinants of acceptance of an in-vehicle technology and are useful for manufacturers designing acceptable systems.
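The mediated path reported above (trust acting on intentions through performance expectancy) can be illustrated with a simple regression-based mediation check. The data are simulated with hypothetical effect sizes, not the survey responses, and the regression-based approach is a generic illustration rather than the structural equation model the authors fitted.

```python
import numpy as np

def ols_coefs(y, *predictors):
    """OLS coefficients of y on the given predictors (intercept dropped)."""
    A = np.column_stack([np.ones(len(y)), *predictors])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]

rng = np.random.default_rng(42)
trust = rng.normal(0.0, 1.0, 500)
# Path a: trust -> performance expectancy (hypothetical effect size 0.6).
perf_exp = 0.6 * trust + rng.normal(0.0, 0.5, 500)
# Path b: performance expectancy -> intention (hypothetical effect size 0.7);
# trust has no direct effect here, i.e. full mediation by construction.
intention = 0.7 * perf_exp + rng.normal(0.0, 0.5, 500)

(a,) = ols_coefs(perf_exp, trust)
b, c_prime = ols_coefs(intention, perf_exp, trust)
indirect = a * b  # mediated effect of trust on intention
```

In this fully mediated simulation the indirect effect a*b is substantial while the direct effect c_prime is near zero, which is the signature pattern of mediation through performance expectancy.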

Keywords: acceptability, innovative embedded radio, structural equation, user-centric evaluation, UTAUT

Procedia PDF Downloads 254
532 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping operation before it occurs. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that this energy is supplied by stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J. C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy as the sum of two energies: the first term, due to symmetrical compression, does not change, while the second term corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain; Maxwell repeatedly wrote that it is strain, not stress. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. His article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in modern theories of strength and fracture, whose authors still try to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to identify a previously unknown region of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions or caused by electron transitions in an atom. Such photons are released during all processes in the universe, including by plants and organs under natural conditions, and their penetrating power in metal is millions of times greater than that of gamma rays. Nevertheless, they are noninvasive.
This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic emission of photons in time and space; such photons are not coherent, and the energy of a solitary photon is insufficient to break the bond between atoms, one stage of which is ionization. Photographs registered the deformation of a rail by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breakage of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of a structure's reliability is the ratio of the energy dissipation rate to the energy accumulation rate, not strength, which is not a physical parameter, since it can be neither measured nor calculated. Continuous control of this ratio is possible thanks to the spontaneous emission of photons by metastable atoms. The article presents example calculations of fracture energy and photographs attributed to the action of photons emitted during the atomic-proton reaction.
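The statement that a solitary photon's energy is insufficient to break an interatomic bond can be checked with a back-of-the-envelope comparison. The wavelength and bond energy below are generic order-of-magnitude values chosen for illustration, not quantities measured in the article.

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

ir_photon_ev = photon_energy_ev(10e-6)  # a 10-micrometre infrared photon
typical_bond_ev = 3.0                   # typical interatomic bond, order of magnitude
# A single such photon carries roughly 0.12 eV, far below the bond energy.
```

Only coherent or stimulated emission involving many photons, as the text argues, could concentrate enough energy to sever bonds.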

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 75
531 Reasons to Redesign: Teacher Education for a Brighter Tomorrow

Authors: Deborah L. Smith

Abstract:

To review our program and determine the best redesign options, department members gathered feedback and input through focus groups, analysis of data, and a review of the current research to ensure that the proposed changes were not based solely on the state's new professional standards. In designing course assignments and assessments, we listened to a variety of constituents, including students, other institutions of higher learning, MDE webinars, host teachers, literacy clinic personnel, and other disciplinary experts. As a result, we are designing a program that is more inclusive of a variety of field experiences for growth. We have determined ways to improve our program by connecting academic disciplinary knowledge, educational psychology, and community building both inside and outside the classroom through professional learning communities. The state's release of new professional standards led department members to question what is working and what needs improvement in our program. One aspect of our program that continues to be supported by research and data analysis is the function of supervised field experiences with meaningful feedback, and we seek to expand in this area. Other data indicate that we have strengths in modeling a variety of approaches such as cooperative learning, discussions, literacy strategies, and workshops. In the new program, field assignments will be connected to multiple courses, and efforts to scaffold student learning toward evidence-based best practices will be continuous. Despite running a program that meets multiple sets of standards, there are areas of need that we directly address in our redesign proposal. Technology is ever-changing, so improving digital skills is a focus. In addition, scaffolding procedures for English Language Learners (ELLs) and other students who struggle are imperative.
Diversity, equity, and inclusion (DEI) have been an integral part of our curriculum, but the research indicates that more self-reflection and a deeper understanding of culturally relevant practices would help the program improve. Connections with professional learning communities will be expanded, as will leadership components, so that teacher candidates understand their role in changing the face of education. A pilot program will run in the 2022/23 academic year, and additional data will be collected each semester through evaluations and continued program review.

Keywords: DEI, field experiences, program redesign, teacher preparation

Procedia PDF Downloads 144
530 Personalized Climate Change Advertising: The Role of Augmented Reality (A.R.) Technology in Encouraging Users for Climate Change Action

Authors: Mokhlisur Rahman

Abstract:

The growing consensus among scientists and world leaders indicates that immediate action should be taken on the climate change phenomenon. However, climate change is no longer only a global issue but also a personal one, and individual participation is necessary to address such a significant issue. Studies show that individuals who perceive climate change as a personal issue are more likely to act on it. This abstract presents augmented reality (A.R.) technology in Facebook video advertising. The idea involves creating a video advertisement that enables users to interact with the video by navigating its features and experiencing the result in a unique and engaging way. The advertisement uses A.R. to let people make changes in real-life scenarios with simple clicks on the video and hear an instant rewarding fact about their choices. The video shows three options: room, lawn, and driveway. Users select one option and interact with it while holding the camera toward their personal spaces. Suppose users select the first option, room, and point their camera at spots such as windows, the balcony, corners, or even walls. In that case, the A.R. offers users different plants appropriate for those unoccupied spaces in the room. Users can change the plant options and see which space in their house deserves a plant that makes it more natural. When a user adds a natural element to the video, the video content explains how the user is contributing to making the world more livable and why this is necessary. With the help of A.R., if users select the second option, lawn, and point their camera at their lawn, the options are various small trees to make the lawn more environmentally friendly and decorative, and the video again plays a beneficiary explanation. Suppose users select the third option, driveway, and point their camera at their driveway.
In that case, the A.R. video offers unique recycling bin designs, using A.I. measurement of the available space, and plays audio information on the anthropogenic contribution to greenhouse gas emissions. IoT tracking code embedded in the video ad on Facebook stores the exact number of views in the cloud for data analysis, and a short online survey at the end collects qualitative answers. This study helps to understand how many users are involved and willing to change their behavior, and it makes social media advertising personalized. Considering the current state of climate change, the urgency for action is increasing. This ad increases the chance of making direct connections with individuals and fosters a sense of personal responsibility to act on climate change.

Keywords: motivations, climate, IoT, personalized advertising, action

Procedia PDF Downloads 54
529 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review

Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon

Abstract:

The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions as they relate to quality assessment and climate effects. An empirical methodology in modeling and mapping was adopted for understanding lake hydrodynamics and visualizing long-term observational daily, monthly, and yearly mean datasets using geographic information system (GIS) and COMSOL techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce the results. Linear regression analyses were developed to build climate scenarios and a linear trend on lake rainfall data over a long period. The potential evapotranspiration rate was described by MODIS data and the Thornthwaite method. The rainfall effect on lake water level was modeled with partial differential equations (PDEs), and water quality was characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures; rainfall is high during cool years, while high temperatures are associated with below-average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more strongly correlated with temperature, and cloud cover is more strongly correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake surface has affected the lake level. Onshore and offshore nutrient concentrations were compiled from initial literature data.
The study recommends that further work should include full development of the lake bathymetry with flow analysis and its water balance, hydro-meteorological processes, solute transport, wind-driven hydrodynamics, pollution, and eutrophication, as these are crucial for lake water quality, climate impact assessment, and water sustainability.
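The Thornthwaite method mentioned above can be sketched compactly. The monthly mean temperatures below are illustrative tropical values, not Lake Victoria station data, and the standard daylight-length correction factor is omitted (taken as 1) for brevity.

```python
def thornthwaite_pet(monthly_temps_c):
    """Uncorrected monthly PET (mm/month) from mean monthly temperatures (C).

    Uses the classic Thornthwaite formulation: an annual heat index I built
    from the twelve monthly temperatures, an empirical exponent a(I), and
    PET = 16 * (10*T/I)^a for each month.
    """
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a
            for t in monthly_temps_c]

# Illustrative equatorial monthly mean temperatures, January to December.
temps = [22.5, 23.0, 23.2, 22.8, 22.0, 21.5, 21.3, 21.8, 22.4, 22.7, 22.3, 22.4]
pet = thornthwaite_pet(temps)
```

Because the formula is monotonic in temperature, warmer months yield higher PET, which is the mechanism behind the review's point that rising temperatures accelerate evapotranspiration.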

Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration

Procedia PDF Downloads 72