Search results for: simulation optimization
547 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce the cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the generalization of absorption technology, limiting its benefits in the contribution to the reduction in CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates adjacent solution channels from each other. The solution entering the absorber is previously subcooled using ambient air. In this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentration and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. Maximum values of R obtained in this work are higher than those found in optimized horizontal falling film absorbers using the same solution. Results obtained also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low temperature solar heat and avoiding the need for cooling towers.
Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
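For readers wanting to reproduce the along-channel trade-off outside EES, the following is a minimal Python sketch only: absorbed vapour saturates as the solution approaches equilibrium while absorber volume grows linearly with channel length, so R falls off. The geometry, asymptotic flow, and decay length are illustrative assumptions, not the paper's correlations or data.

```python
import numpy as np

# Illustrative assumptions (not the paper's values):
width, height, n_ch = 5e-3, 5e-4, 20   # channel cross-section [m] and count
h_fg = 2.45e6                          # latent heat of water vapour [J/kg]
m_max = 1.0e-4                         # asymptotic absorbed vapour flow [kg/s]
lam = 0.015                            # absorption decay length [m]

x = np.linspace(1e-3, 0.06, 60)            # candidate channel lengths [m]
m_abs = m_max * (1.0 - np.exp(-x / lam))   # saturating absorption along channel
Q = m_abs * h_fg                           # cooling power [W]
V = width * height * x * n_ch              # absorber volume [m^3]
R = Q / V                                  # figure of merit [W/m^3]

for xi, Ri in zip(x[::10], R[::10]):
    print(f"L = {xi*100:4.1f} cm  ->  R = {Ri:.3e} W/m^3")
# R declines with length, which is why a short channel (a few cm) is preferred.
```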
Procedia PDF Downloads 285

546 Improving the Uniformity of Electrostatic Meter’s Spatial Sensitivity
Authors: Mohamed Abdalla, Ruixue Cheng, Jianyong Zhang
Abstract:
In pneumatic conveying, the solids are mixed with air or gas. In industries such as coal-fired power stations, blast furnaces for iron making, and cement and flour processing, the mass flow rate of solids needs to be monitored or controlled. However, the current gas-solids two-phase flow measurement techniques are not as accurate as the flow meters available for single-phase flow. One of the problems that multi-phase flow meters face is that the flow profiles vary with measurement locations and with the conditions of pipe routing, bends, elbows and other restriction devices in the conveying system, as well as with conveying velocity and concentration. To measure solids flow rate or concentration with a non-even distribution of solids in gas, a uniform spatial sensitivity is required for a multi-phase flow meter. However, not many meters inherently have such a property. The circular electrostatic meter is a popular choice for gas-solids flow measurement owing to its high sensitivity to flow, robust construction, low installation cost and non-intrusive nature. However, such meters have an inherently non-uniform spatial sensitivity. This paper first analyses the spatial sensitivity of the circular electrostatic meter in general and then, by combining the effect of the sensitivity to a single particle with the sensing volume for a given electrode geometry, reveals for the first time how a circular electrostatic meter responds to a roping flow stream, which is much more complex than is currently believed. The paper provides the recent research findings on the spatial sensitivity investigation at Teesside University, based on finite element analysis using Ansys Fluent software, including time and frequency domain characteristics and the effect of electrode geometry. The simulation results are compared to the experimental results obtained on a large-scale (14” diameter) rig. The purpose of this research is to pave the way towards a uniform spatial sensitivity for the circular electrostatic sensor by means of compensation, so as to improve the overall accuracy of gas-solids flow measurement.
Keywords: spatial sensitivity, electrostatic sensor, pneumatic conveying, Ansys Fluent software
Procedia PDF Downloads 367

545 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a big experimental effort. As models get more and more complex, like e.g. the so-called “Homogeneous Anisotropic Hardening - HAH” model for description of the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determination of the macroscopic model parameters based on virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation density based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling of the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation density based crystal plasticity model - including an implementation of the backstress - is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated with real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can be used to model anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction of the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction of the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver
Procedia PDF Downloads 315

544 150 KVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter
Authors: Bartosz Kedra, Robert Malkowski
Abstract:
This paper provides a description and presentation of a laboratory test unit built around a 150 kVA power-frequency converter and the Simulink Real-Time platform. Assumptions defining which load and generator types may be simulated with the device are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features used is presented. A list and description of all measurements are provided. The potential for modifications to the laboratory setup is evaluated. For the purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used. The load model Functional Unit Controller is therefore based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, applications were loaded onto a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping process. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks that made it possible to generate real-time applications automatically and to perform interactive or automated runs on a dedicated target computer equipped with a real-time kernel, multicore CPU, and I/O cards. Results of the performed laboratory tests are presented. Different load configurations are described, and experimental results are presented. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of a group of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer
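As an illustration of the under-frequency load shedding test mentioned above, here is a minimal relay-logic sketch in Python. The stage thresholds, delays and shed fractions are illustrative assumptions, not the settings used on the 150 kVA rig.

```python
# Each UFLS stage trips a block of load when frequency stays below its
# threshold for a set delay. All values below are illustrative assumptions.
STAGES = [  # (threshold [Hz], delay [s], fraction of load to shed)
    (49.0, 0.2, 0.10),
    (48.7, 0.2, 0.10),
    (48.4, 0.2, 0.15),
]

def ufls_step(freq, timers, tripped, dt=0.01):
    """Advance the relay one time step; return the total shed fraction."""
    shed = 0.0
    for i, (f_th, delay, frac) in enumerate(STAGES):
        if i in tripped:
            shed += frac
            continue
        timers[i] = timers[i] + dt if freq < f_th else 0.0  # reset above threshold
        if timers[i] >= delay:
            tripped.add(i)
            shed += frac
    return shed

# Example run against a simple frequency dip: ramp from 50 Hz down to 48 Hz
timers, tripped, shed = {i: 0.0 for i in range(len(STAGES))}, set(), 0.0
for step in range(300):
    f = 50.0 - 2.0 * min(step * 0.01, 1.0)
    shed = ufls_step(f, timers, tripped)
print(f"stages tripped: {sorted(tripped)}, load shed: {shed:.0%}")
```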
Procedia PDF Downloads 323

543 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is rather uncomplicated, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know better the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, acquires knowledge, and begins transferring it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions rather than the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations are surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions are noted from the analysis of the responses, and a metric for measuring logic is developed. A graph is plotted with the quantifiable logic on the Y-axis and the dedicated resources for the solutions to various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear, as the required logic will be attained but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher but the resources deployed are comparatively lower. Hence, the difference between consecutively plotted ‘resources’ decreases and, as a result, the slope of the graph gradually increases. In overview, the graph takes a parabolic shape (beginning at the origin), as with each resource investment, ideally, the difference keeps decreasing and the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is being made on a proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point. A sketch of this ideal curve is given below.
Keywords: decision-making, leadership, logic, strategic management
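A minimal Python sketch of the ideal curve described above: each successive solution consumes fewer resources (shrinking X-increments) while attaining more logic (growing Y-increments), so the slope rises point by point and the curve bends upward from the origin. The increment rules are illustrative assumptions, not values derived from the survey.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 12
dx = 10.0 * 0.8 ** np.arange(n)     # resources spent per solution, shrinking
dy = 1.0 * 1.2 ** np.arange(n)      # logic gained per solution, growing

resources = np.concatenate([[0.0], np.cumsum(dx)])
logic = np.concatenate([[0.0], np.cumsum(dy)])

plt.plot(resources, logic, marker="o")
plt.xlabel("Cumulative dedicated resources (also a measure of time)")
plt.ylabel("Quantified logic attained")
plt.title("Ideal logic vs. resource outflow (illustrative)")
plt.show()
```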
Procedia PDF Downloads 108

542 Formation Flying Design Applied for an Aurora Borealis Monitoring Mission
Authors: Thais Cardoso Franco, Caio Nahuel Sousa Fagonde, Willer Gomes dos Santos
Abstract:
Aurora Borealis is an optical phenomenon composed of luminous events observed in the night skies of the polar regions, resulting from disturbances in the magnetosphere due to the impact of solar wind particles with the Earth's upper atmosphere. Channeled by the Earth's magnetic field, these particles cause atmospheric molecules to become excited and emit radiation across the electromagnetic spectrum, leading to the display of lights in the sky. However, different implications of this phenomenon are still under study: high-intensity auroras are often accompanied by geomagnetic storms that cause blackouts on Earth and impair the transmission of signals from Global Navigation Satellite Systems (GNSS). Auroras are also known to occur on other planets and exoplanets, so the activity is an indication of active space weather conditions that can aid in learning about the planetary environment. In order to improve understanding of the phenomenon, this research aims to design a satellite formation flying solution for collecting and transmitting data for monitoring the aurora borealis in the northern hemisphere, an approach that allows studying the event with multipoint data collection in a reduced time interval, so as to allow analysis from the beginning of the phenomenon until its decline. To this end, the ideal number of satellites, the spacing between them, and the ideal topology to be used will be analyzed. In an orbital study, approaches from different altitudes, eccentricities and inclinations will also be considered. Given that controllers tend to fail at large relative distances between satellites in formation, a study on the efficiency of nonlinear adaptive control methods from the point of view of position maintenance and propellant consumption will be carried out. The main orbital perturbations considered in the simulation are terrestrial non-homogeneity, atmospheric drag, the gravitational action of the Sun and the Moon, accelerations due to solar radiation pressure, and relativistic effects.
Keywords: formation flying, nonlinear adaptive control method, aurora borealis, adaptive SDRE method
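As a concrete illustration of the dominant terrestrial non-homogeneity term listed above, the following sketch evaluates the standard J2 (Earth oblateness) acceleration as it would enter an orbit propagator. This is the textbook term only, not the full force model of the study.

```python
import numpy as np

MU = 3.986004418e14      # Earth gravitational parameter [m^3/s^2]
J2 = 1.08262668e-3       # Earth J2 coefficient [-]
RE = 6.378137e6          # Earth equatorial radius [m]

def accel_j2(r):
    """J2 perturbing acceleration [m/s^2] for ECI position vector r [m]."""
    x, y, z = r
    rn = np.linalg.norm(r)
    k = -1.5 * J2 * MU * RE**2 / rn**5
    zr2 = 5.0 * z**2 / rn**2
    return k * np.array([x * (1.0 - zr2),
                         y * (1.0 - zr2),
                         z * (3.0 - zr2)])

# Example: satellite at roughly 700 km altitude over a polar region
r = np.array([0.0, 1.0e6, 7.0e6])
print(accel_j2(r))   # small next to -mu*r/|r|^3, but secular over many orbits
```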
Procedia PDF Downloads 39

541 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Achieving precise radiotherapy through carbon therapy necessitates the accurate monitoring of the radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring the dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly of protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of the carbon ions. However, challenges arose due to the elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct the vertices. The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, secondary proton dose monitoring, interaction vertex imaging
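A minimal sketch of the back-projection step under the straight-line approximation: a track defined by two silicon-detector hits is intersected, in the closest-approach sense, with the beam axis to estimate the vertex. The detector geometry and hit coordinates below are illustrative assumptions, not the study's setup.

```python
import numpy as np

def vertex_from_track(hit1, hit2, beam_point, beam_dir):
    """Closest-approach points between a track line and the beam line."""
    p = np.asarray(hit1, float)
    d = np.asarray(hit2, float) - p          # track direction from the two hits
    q, e = np.asarray(beam_point, float), np.asarray(beam_dir, float)
    d, e = d / np.linalg.norm(d), e / np.linalg.norm(e)
    w = p - q
    a, b, c = d @ d, d @ e, e @ e            # standard two-line closest approach
    s = (b * (e @ w) - c * (d @ w)) / (a * c - b * b)   # parameter along track
    t = (a * (e @ w) - b * (d @ w)) / (a * c - b * b)   # parameter along beam
    return p + s * d, q + t * e              # midpoint approximates the vertex

# Beam along +z through the origin; two hits in Si planes at z = 30 and 35 cm
track_pt, beam_pt = vertex_from_track(hit1=[2.0, 0.5, 30.0],
                                      hit2=[3.0, 0.8, 35.0],
                                      beam_point=[0, 0, 0], beam_dir=[0, 0, 1])
print("estimated vertex:", 0.5 * (track_pt + beam_pt))
```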
Procedia PDF Downloads 78

540 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α) - parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters including specific heat capacity, thermal expansion, and isobaric and isothermal compressibility coefficients in natural seawater systems. The study employs advanced machine learning frameworks - Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost - with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop where reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, Arabian Sea, thermodynamics, machine learning
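A minimal sketch of the stacked-ensemble (SEML) idea, assuming scikit-learn and a synthetic Cp(T, S) surface as a stand-in for the Arabian Sea measurements; the toy coefficients below are not the study's data.

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic heat-capacity surface over the study's T and S ranges (toy values)
rng = np.random.default_rng(0)
T = rng.uniform(293.15, 318.15, 500)          # temperature [K]
S = rng.uniform(0.5, 35.0, 500)               # salinity [ppt]
cp = 4200 - 0.5*(T - 293.15) - 5.0*S + rng.normal(0, 2, 500)  # toy Cp [J/kg/K]
X = np.column_stack([T, S])

# Tree-based base learners combined by a ridge meta-learner
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0)),
                ("ada", AdaBoostRegressor(random_state=0))],
    final_estimator=Ridge())

X_tr, X_te, y_tr, y_te = train_test_split(X, cp, random_state=0)
stack.fit(X_tr, y_tr)
print("R^2 on held-out points:", r2_score(y_te, stack.predict(X_te)))
```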
Procedia PDF Downloads 11

539 Mathematical Modeling and Simulation of Convective Heat Transfer System in Adjustable Flat Collector Orientation for Commercial Solar Dryers
Authors: Adeaga Ibiyemi Iyabo, Adeaga Oyetunde Adeoye
Abstract:
Interestingly, mechanical drying methods have played a major role in the commercialization of the agricultural and agriculture-allied sectors. Overall, drying enhances the storability and preservation of agricultural produce, which in turn promotes its producibility, marketability, salability, and profitability. Recent research has shown that solar drying is easier, more affordable, more controllable, and of course cleaner and purer than other drying methods. It is therefore necessary to persistently appraise solar dryers with a view to improving on the existing advantages. In this paper, mathematical equations were formulated for a solar dryer using the mass conservation law, the material balance law and the least-cost savings method. Computer codes were written in Visual Basic.NET. The developed computer software, which considered Ibadan, a strategic south-western geographical location in Nigeria, was used to investigate the relationship between the variable orientation angle of the flat plate collector and the solar energy trapped, the derived monthly heat load, the available energy supplied by solar, and the fraction supplied by solar energy when 50,000 kg/month of produce was dried over a year. At collector tilt angles of 10°, 13°, 15°, 18°, and 20°, the derived monthly heat load, the available energy supplied by solar, and the solar fraction were 1211224.63 MJ, 102121.34 MJ, 0.111; 3299274.63 MJ, 10121.34 MJ, 0.132; 5999364.706 MJ, 171222.859 MJ, 0.286; 4211224.63 MJ, 132121.34 MJ, 0.121; and 2200224.63 MJ, 112121.34 MJ, 0.104, respectively. These results showed that if the optimum collector angle is not reached, the factors needed for efficient, cost-reducing drying will be difficult to attain. Therefore, the software has revealed that an off-optimum collector angle in commercial solar drying is not worthwhile, hence the importance of the software in decision making as to the optimum collector angle of orientation.
Keywords: energy, Ibadan, heat load, Visual Basic.NET
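For intuition, here is a small Python sketch of the tilt search: it estimates the energy a flat-plate collector traps at each candidate tilt and the resulting solar fraction against a fixed monthly heat load. The sharp response around an assumed optimum tilt, and every numerical value, are illustrative stand-ins for the paper's radiation model and Ibadan data.

```python
import numpy as np

tilts = np.array([10.0, 13.0, 15.0, 18.0, 20.0])  # candidate tilt angles [deg]
tilt_opt = 15.0            # assumed optimum tilt for the site
G_max = 600.0              # assumed peak monthly irradiation on tilt [MJ/m^2]
area, eta = 500.0, 0.45    # assumed collector area [m^2] and efficiency
heat_load = 4.7e5          # assumed monthly drying heat load [MJ]

# Illustrative stand-in for the full radiation model: trapped irradiation
# falls off away from the optimum tilt.
G_tilt = G_max * np.exp(-((tilts - tilt_opt) / 6.0) ** 2)
solar_supplied = area * eta * G_tilt          # energy supplied by solar [MJ]
fraction = solar_supplied / heat_load         # fraction of heat load met

for tilt, f in zip(tilts, fraction):
    print(f"tilt {tilt:4.1f} deg -> solar fraction {f:.3f}")
# Peaks at the optimum tilt, mirroring the 0.286 maximum reported at 15 deg.
```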
Procedia PDF Downloads 411

538 Technical Evaluation of Upgrading a Simple Gas Turbine Fired by Diesel to a Combined Cycle Power Plant in the Kingdom of Saudi Arabia Using WinSim Design II Software
Authors: Salman Obaidoon, Mohamed Hassan, Omer Bakather
Abstract:
As environmental regulations increase, the need for clean and inexpensive energy from available raw materials, with high efficiency and low emissions of toxic gases, is becoming pressing. This paper presents a study on modifying a diesel-fired gas turbine power plant located in Saudi Arabia in order to increase the efficiency and capacity of the station as well as decrease the rate of emissions. The studied power plant consists of 30 units with different capacities, and the total net power is 1470 MW. The study was conducted on unit number 25 (GT-25), which produces 72.3 MW at 29.5% efficiency. First, the unit was modeled and simulated using WinSim Design II software. In this step, actual unit data were used in order to test the validity of the model. The net power and efficiency obtained from the software were 76.4 MW and 32.2%, respectively. A difference of about 6% was found between the simulated power plant and the actual station, which means that the model is valid. After the validation of the model, the simple gas turbine power plant was converted to a combined cycle power plant (CCPP). In this case, the exhaust gas released from the gas turbine was introduced to a heat recovery steam generator (HRSG), which consists of three heat exchangers: an economizer, an evaporator and a superheater. In this proposed model, many scenarios were run in order to find the optimal operating conditions. The net power of the CCPP was increased to 116.4 MW, while the overall efficiency of the unit reached 49.02%, consuming the same amount of fuel as the gas turbine power plant. For the purpose of comparing the rate of carbon dioxide emissions of each model, it was found that the rate of CO₂ emissions was decreased from 15.94 kg/s to 9.22 kg/s by using the combined cycle model, as a result of reducing the amount of diesel needed to produce 76.5 MW from 5.08 kg/s to 2.94 kg/s. The results indicate that the rate of carbon dioxide emissions was decreased by 42.133% in the CCPP compared to the simple gas turbine power plant.
Keywords: combined cycle power plant, efficiency, heat recovery steam generator, simulation, validation, WinSim Design II software
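The emission figures above can be sanity-checked with simple combustion stoichiometry. The sketch below assumes diesel is roughly 86% carbon by mass, a typical value not taken from the paper; each kilogram of carbon burns to 44/12 kilograms of CO₂.

```python
CARBON_FRACTION = 0.86          # assumed carbon mass fraction of diesel fuel
CO2_PER_C = 44.01 / 12.011      # kg CO2 per kg carbon (molar-mass ratio)

for label, fuel in [("simple gas turbine", 5.08), ("combined cycle", 2.94)]:
    co2 = fuel * CARBON_FRACTION * CO2_PER_C
    print(f"{label}: {fuel} kg/s diesel -> {co2:.2f} kg/s CO2")
# ~16.0 and ~9.3 kg/s, consistent with the reported 15.94 and 9.22 kg/s; the
# ~42% emission reduction follows directly from the ~42% cut in fuel flow.
```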
Procedia PDF Downloads 275

537 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: The paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of a Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of the UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in the series and shunt active filters of the UPQC. Data are collected under various control algorithms to analyze UPQC performance. Findings: Results indicate the effectiveness of SRFT and IRPT based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, voltage harmonics, and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data are collected under various control algorithms, using simulation in MATLAB Simulink, with real-time operation executed and experimental results obtained using RT-LAB. Analysis Procedures: A performance analysis of the UPQC under different control algorithms is conducted to evaluate the effectiveness of SRFT and IRPT based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT and IRPT based control techniques compare in improving power quality in distribution systems? What is the impact of using different control configurations on the performance of the UPQC? Conclusion: The study demonstrates the efficacy of SRFT and IRPT based control of the UPQC in mitigating power quality issues in distribution systems, highlighting their potential for enhancing voltage and current quality.
Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
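A minimal sketch of the SRFT extraction step at the heart of these controllers: transform distorted three-phase load currents to the rotating dq frame, low-pass the d-axis to isolate the fundamental, and take the remainder as the compensation reference. The filter bandwidth and test waveform are illustrative choices, not the paper's settings.

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Park transform of instantaneous abc quantities."""
    d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3)
                 + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3)
                  + ic*np.sin(theta + 2*np.pi/3))
    return d, q

fs, f0 = 10_000, 50.0
t = np.arange(0, 0.2, 1/fs)
theta = 2*np.pi*f0*t                      # PLL angle, assumed locked to the grid
# Distorted load current: 10 A fundamental plus 2 A fifth harmonic
ia = 10*np.cos(theta) + 2*np.cos(5*theta)
ib = 10*np.cos(theta - 2*np.pi/3) + 2*np.cos(5*(theta - 2*np.pi/3))
ic = 10*np.cos(theta + 2*np.pi/3) + 2*np.cos(5*(theta + 2*np.pi/3))

d, q = abc_to_dq(ia, ib, ic, theta)
d_dc = np.zeros_like(d)                   # first-order ~10 Hz low-pass on d-axis
alpha = 2*np.pi*10/fs
for k in range(1, len(d)):
    d_dc[k] = d_dc[k-1] + alpha*(d[k] - d_dc[k-1])

# The harmonic content the shunt filter must inject is the residual around d_dc
print("fundamental amplitude estimate:", round(d_dc[-1], 2))   # ~10 A
```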
Procedia PDF Downloads 10

536 Phytochemical Content and Bioactive Properties of Wheat Sprouts
Authors: Jasna Čanadanović-Brunet, Lidija Jevrić, Gordana Ćetković, Vesna Tumbas Šaponjac, Jelena Vulić, Slađana Stajčić
Abstract:
Wheat contains a high amount of nutrients such as dietary fiber, resistant starch, vitamins, minerals and microconstituents, which are building blocks of body tissues but also help in the prevention of diseases such as cardiovascular disease, cancer and diabetes. Sprouting enhances the nutritional value of whole wheat through the biosynthesis of tocopherols, polyphenols and other valuable phytochemicals. Since the nutritional and sensory benefits of germination have been extensively documented, the use of sprouted grains in food formulations is becoming a trend in healthy foods. The present work addressed the possibility of using freeze-dried sprouted wheat powder, obtained from spelt wheat cv. ‘Nirvana’ (Triticum spelta L.) and winter wheat cv. ‘Simonida’ (Triticum aestivum L. ssp. vulgare var. lutescens), as a source of phytochemicals to improve the functional status of the consumer. The phytochemical content (total polyphenols, flavonoids, chlorophylls and carotenoids) and the biological activities (antioxidant activity on DPPH radicals and anti-inflammatory activity) of the sprouted wheat powders were assessed spectrophotometrically. The content of flavonoids (216.52 mg RE/100 g), carotenoids (22.84 mg β-carotene/100 g) and chlorophylls (131.23 mg/100 g), as well as the anti-inflammatory activity (EC50=3.70 mg/ml), was found to be higher in sprouted spelt-wheat powder, while total polyphenols (607.21 mg GAE/100 g) and antioxidant activity on DPPH radicals (EC50=0.27 mmol TE/100 g) were found to be higher in sprouted winter wheat powder. Simulation of the gastro-intestinal digestion of sprouted wheat powders clearly shows that intestinal digestion caused a higher release of polyphenols than gastric digestion for both samples, which indicates their higher bioavailability in the colon. The results of the current study have shown that wheat sprouts can provide a high content of phytochemicals and considerable bioactivities. Moreover, the data reported show that they contain a unique pattern of bioactive molecules, which makes these cereal sprouts attractive functional foods for a health-promoting diet.
Keywords: wheat, sprouts, phytochemicals, bioactivity
Procedia PDF Downloads 466

535 The Impact of Virtual Learning Strategy on Youth Learning Motivation in Malaysian Higher Learning Institutions
Authors: Hafizah Harun, Habibah Harun, Azlina Kamaruddin
Abstract:
Virtual reality has become a powerful and promising tool in education because of its unique technological characteristics, which differentiate it from other ICT applications. Despite the numerous interpretations of its definition, virtual reality can be concisely and precisely described as the integration of computer graphics and various input and display technologies to create the illusion of immersion in a computer-generated reality. Generally, there are two major types based on the level of interaction and immersion: immersive and non-immersive virtual reality. In a study of the role of virtual reality in built environment education, Horne and Thompson were reported as saying that visualization technologies were seen as having the potential to improve and extend the learning process, increase student motivation and awareness, and add to the diversity of teaching methods. Youngblut reported that students enjoy working with virtual worlds and that this experience can be highly motivating. The impact of virtual reality on youth learning in Malaysia is currently not well explored because the technology is still not widely used there. Only a handful of universities, such as University Malaya, MMU, and Unimas, are applying virtual reality strategies in some of their undergraduate programs. From the literature, several virtual reality learning strategies have been identified as currently available. Therefore, this study aims to investigate the impact of virtual reality strategy on youth learning motivation in Malaysian higher learning institutions. We will explore the relationship between virtual reality (gaming, laboratory, simulation) and youth learning motivation. Another aspect we will explore is a framework for virtual reality implementation at higher learning institutions in Malaysia. This study will be carried out quantitatively by distributing questionnaires to respondents from sample universities. Data analyses are descriptive statistics and multiple regression. The researcher will carry out a pilot test prior to distributing the questionnaires to 300 undergraduate students who are undergoing their courses in a virtual reality environment. The respondents come from two universities, MMU Cyberjaya and University Malaya. The expected outcomes of this study are the identification of which virtual reality strategy has the most impact on students’ motivation in learning and a proposed framework for virtual reality implementation at higher learning institutions.
Keywords: virtual reality, learning strategy, youth learning, motivation
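A minimal sketch of the planned multiple-regression analysis, assuming statsmodels and a synthetic data frame in place of the survey responses; the coefficients below are illustrative, not findings.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic Likert-style exposure scores for the three VR strategies
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "gaming": rng.uniform(1, 5, n),
    "laboratory": rng.uniform(1, 5, n),
    "simulation": rng.uniform(1, 5, n),
})
# Toy motivation outcome built from assumed (illustrative) effects plus noise
df["motivation"] = (2.0 + 0.6*df["gaming"] + 0.3*df["laboratory"]
                    + 0.4*df["simulation"] + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["gaming", "laboratory", "simulation"]])
model = sm.OLS(df["motivation"], X).fit()
print(model.summary())   # coefficients estimate each strategy's contribution
```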
Procedia PDF Downloads 389

534 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems
Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani
Abstract:
The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep shop floors at the same efficiency. Most of the classical software solutions that operate the manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The different cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulates to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a Multi-Agent System. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus reduce the cycle time of the entire part, and 2) reduce disturbances such as the vibration generated by the shuttles, which strongly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while the disturbance is systematically avoided for critical stations like the milling machine tool.
Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems
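A minimal sketch of the kind of agent coordination described, assuming a greedy contract-net allocation: shuttle agents bid their rail travel cost for each transport job, with a penalty for shortest paths that pass the milling cell while it is machining. The grid layout, costs and penalty are illustrative assumptions, not the paper's planner.

```python
GRID = {(r, c) for r in range(3) for c in range(3)}   # 3x3 shelf cells
MILL = (1, 1)                                         # vibration-critical cell

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])        # rail travel (Manhattan)

def bid(shuttle_pos, job, mill_busy):
    """A shuttle agent's cost bid for a (source, destination) transport job."""
    src, dst = job
    cost = dist(shuttle_pos, src) + dist(src, dst)
    on_path = dist(src, MILL) + dist(MILL, dst) == dist(src, dst)
    if mill_busy and (src == MILL or dst == MILL or on_path):
        cost += 5                                     # penalize disturbing the mill
    return cost

def allocate(jobs, shuttles, mill_busy):
    """Greedy contract-net: each job goes to the cheapest-bidding shuttle."""
    assignment, free = {}, dict(shuttles)
    for job in jobs:
        cost, winner = min((bid(pos, job, mill_busy), name)
                           for name, pos in free.items())
        assignment[job] = (winner, cost)
        free[winner] = job[1]                         # shuttle ends at destination
    return assignment

jobs = [((0, 0), (2, 2)), ((2, 0), (0, 1))]           # (source, destination) cells
shuttles = {"S1": (0, 2), "S2": (2, 1)}
print(allocate(jobs, shuttles, mill_busy=True))
```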
Procedia PDF Downloads 130

533 Beyond the Tragedy of Absence: Vizenor's Comedy of Native Presence
Authors: Mahdi Sepehrmanesh
Abstract:
This essay explores Gerald Vizenor's innovative concepts of the tragedy of absence and the comedy of presence as frameworks for understanding and challenging dominant narratives about Native American identity and history. Vizenor's work critiques the notion of irrevocable cultural loss and rigid definitions of Indigenous identity based on blood quantum and stereotypical practices. Through subversive humor, trickster figures, and storytelling, Vizenor asserts the active presence and continuance of Native peoples, advocating for a dynamic, self-determined understanding of Native identity. The essay examines Vizenor's use of postmodern techniques, including his engagement with simulation and hyperreality, to disrupt colonial discourses and create new spaces for Indigenous expression. It explores the concept of "crossblood" identities as a means of resisting essentialist notions of Native authenticity and embracing the complexities of contemporary Indigenous experiences. Vizenor's ideas of survivance and transmotion are analyzed as strategies for cultural resilience and adaptation in the face of ongoing colonial pressures. The interplay between absence and presence in Vizenor's work is discussed, particularly through the lens of shadow survivance and the power of storytelling. The essay also delves into Vizenor's critique of terminal creeds and his promotion of natural reason as an alternative epistemology to Western rationalism. While acknowledging the significant influence of Vizenor's work on Native American literature and theory, the essay also addresses critiques of his approach, including concerns about the accessibility of his writing and its political effectiveness. Despite these debates, the essay argues that Vizenor's concepts offer a powerful vision of Indigenous futurity that is rooted in tradition yet open to change, inspiring hope and agency in the face of oppression. By examining Vizenor's multifaceted approach to Native American identity and presence, this essay contributes to ongoing discussions about Indigenous representation, cultural continuity, and resistance to colonial narratives in literature and beyond.
Keywords: Gerald Vizenor, Native American literature, survivance, trickster discourse, identity
Procedia PDF Downloads 35

532 Experimental Study on the Heating Characteristics of Transcritical CO₂ Heat Pumps
Authors: Lingxiao Yang, Xin Wang, Bo Xu, Zhenqian Chen
Abstract:
Due to their outstanding environmental performance, higher heating temperature and excellent low-temperature performance, transcritical carbon dioxide (CO₂) heat pumps are receiving more and more attention. However, improperly set operating parameters have a serious negative impact on the performance of the transcritical CO₂ heat pump because of the properties of CO₂. In this study, the heat transfer characteristics of the gas cooler are studied based on the modified “three-stage” gas cooler, and then the effect of three operating parameters - compressor speed, gas cooler water-inlet flowrate and gas cooler water-inlet temperature - on the heating process of the system is investigated from the perspective of thermal quality and heat capacity. The results show that, in the heat transfer process of the gas cooler, the temperature distribution of CO₂ and water shows a typical “two region” and “three zone” pattern. The rise in the cooling pressure of CO₂ serves to increase the thermal quality on the CO₂ side of the gas cooler, which in turn improves the heating temperature of the system. Nevertheless, the elevated thermal quality on the CO₂ side can exacerbate the mismatch of heat capacity between the two sides of the gas cooler, thereby adversely affecting the system coefficient of performance (COP). Furthermore, increasing the compressor speed mitigates the mismatch in heat capacity caused by elevated thermal quality, whereas decreasing the gas cooler water-inlet flowrate and raising the gas cooler water-inlet temperature exacerbate it. As a representative case, varying the compressor speed results in a 7.1°C increase in heating temperature within the experimental range, accompanied by a 10.01% decrease in COP and an 11.36% increase in heating capacity. This study can not only provide an important reference for the theoretical analysis and control strategy of the transcritical CO₂ heat pump but also guide related simulations and the design of the gas cooler. However, the range of experimental parameters in the current study is small, and the conclusions drawn are not further analysed quantitatively. Therefore, expanding the range of parameters studied and proposing corresponding quantitative conclusions and universally applicable indicators could greatly increase the practical applicability of this study. This is the goal of our next research.
Keywords: transcritical CO₂ heat pump, gas cooler, heat capacity, thermal quality
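A minimal sketch of the gas-cooler energy balance behind the heat-capacity argument above, assuming the CoolProp property library and illustrative operating conditions (not the experimental rig's values).

```python
from CoolProp.CoolProp import PropsSI

# Supercritical CO2 cooled at constant pressure in the gas cooler
P = 10e6                       # assumed gas-cooler pressure [Pa]
T_in, T_out = 373.15, 308.15   # assumed CO2 inlet/outlet temperatures [K]
m_co2 = 0.05                   # assumed CO2 mass flow [kg/s]

h_in = PropsSI("H", "T", T_in, "P", P, "CO2")
h_out = PropsSI("H", "T", T_out, "P", P, "CO2")
Q = m_co2 * (h_in - h_out)     # heat rejected to the water side [W]

m_w, cp_w, Tw_in = 0.10, 4186.0, 293.15   # water flow [kg/s], cp, inlet temp
Tw_out = Tw_in + Q / (m_w * cp_w)         # implied water heating temperature

print(f"gas cooler duty: {Q/1e3:.1f} kW, water outlet: {Tw_out - 273.15:.1f} C")
# Raising P increases the CO2-side enthalpy glide (thermal quality) but can
# worsen the heat-capacity match between the streams, echoing the COP trend.
```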
Procedia PDF Downloads 21

531 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial
Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs
Abstract:
Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of using a cross-over design over a conventional parallel design is its flexibility: every subject becomes their own control, thereby reducing the confounding effect. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model, proposed by Mwangi et al. (in press), for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, in the estimation of the treatment effect in the analysis of a randomized cross-over trial. We estimate two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probabilities were the highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is a better estimator of the treatment effect than its two competing estimators (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data generating mechanism used in this paper captures two time periods for a simple 2-Treatments x 2-Periods cross-over design. Its application is extendable to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and may thus make it difficult or impossible to fit in some cases.
Keywords: evaluation, Grizzle model, Jones & Kenward model, performance measures, simulation
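A minimal sketch of the simulate-and-fit loop for a 2x2 cross-over design, assuming statsmodels; the effect sizes and variance components are illustrative, not the paper's simulation settings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Generate data from a 2-treatment x 2-period cross-over with a random
# subject intercept, then estimate the treatment effect with a mixed model.
rng = np.random.default_rng(42)
n_subj, effect = 20, 1.5                 # subjects; true treatment effect
rows = []
for i in range(n_subj):
    b_i = rng.normal(0, 1.0)             # subject random intercept
    seq = i % 2                          # 0: AB sequence, 1: BA sequence
    for period in (0, 1):
        treat = (seq + period) % 2       # alternate treatments across periods
        y = 10 + effect*treat + 0.5*period + b_i + rng.normal(0, 0.8)
        rows.append({"subject": i, "period": period, "treat": treat, "y": y})
df = pd.DataFrame(rows)

fit = smf.mixedlm("y ~ treat + period", df, groups=df["subject"]).fit()
print(fit.summary())     # the 'treat' coefficient estimates the treatment effect
```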
Procedia PDF Downloads 124

530 Multi-Institutional Report on Toxicities of Concurrent Nivolumab and Radiation Therapy
Authors: Neha P. Amin, Maliha Zainib, Sean Parker, Malcolm Mattes
Abstract:
Purpose/Objectives: Combination immunotherapy (IT) and radiation therapy (RT) is an actively growing field of clinical investigation, due to promising findings of synergistic effects from immune-mediated mechanisms observed in preclinical studies and clinical data from case reports of abscopal effects. While there are many ongoing trials of combined IT-RT, there are still limited data on toxicity and outcome optimization regarding RT dose, fractionation, and the sequencing of RT with IT. Nivolumab (NIVO), an anti-PD-1 monoclonal antibody, has been rapidly adopted in the clinic over the past 2 years, resulting in more patients being considered for concurrent RT-NIVO. Knowledge about the toxicity profile of combined RT-NIVO is important for both the patient and the physician when making educated treatment decisions. The acute toxicity profile of concurrent RT-NIVO was analyzed in this study. Materials/Methods: A retrospective review of all consecutive patients who received NIVO from 1/2015 to 5/2017 at 4 separate centers within two separate institutions was performed. Those patients who completed a course of RT from 1 day prior to the initial NIVO infusion through 1 month after the last NIVO infusion were considered to have received concurrent therapy and were included in the subsequent analysis. Descriptive statistics are reported for patient/tumor/treatment characteristics and observed acute toxicities within 3 months of RT completion. Results: Among 261 patients who received NIVO, 46 (17.6%) received concurrent RT to 67 different sites. The median follow-up was 3.3 (0.1-19.8) months, and 11/46 (24%) were still alive at last analysis. The most common histology, RT prescription, and treatment site were non-small cell lung cancer (23/46, 50%), 30 Gy in 10 fractions (16/67, 24%), and central thorax/abdomen (26/67, 39%), respectively. 79% (53/67) of irradiated sites were treated with a 3D-conformal technique and palliative dose-fractionation. Grade 3, 4, and 5 toxicities were experienced by 11, 1, and 2 patients, respectively. However, all grade 4 and 5 toxicities were outside of the irradiated area and attributed to NIVO alone, and only 4/11 (36%) of the grade 3 toxicities were attributed to RT-NIVO. The irradiated sites in these cases included the brain [2/10 (20%)] and the central thorax/abdomen [2/19 (10.5%)], including one unexpected case of grade 3 pancreatitis following stereotactic body RT to the left adrenal gland. Conclusions: Concurrent RT-NIVO is generally well tolerated, though with potentially increased rates of severe toxicity when irradiating the lung, abdomen, or brain. Pending more definitive data, we recommend counseling patients on the potentially increased rates of side effects from combined immunotherapy and radiotherapy to these locations. Future prospective trials assessing the fractionation and sequencing of RT with IT will help inform combined therapy recommendations.
Keywords: combined immunotherapy and radiation, immunotherapy, Nivolumab, toxicity of concurrent immunotherapy and radiation
Procedia PDF Downloads 392

529 Correlations and Impacts of Optimal Rearing Parameters on Nutritional Value of Mealworm (Tenebrio molitor)
Authors: Fabienne Vozy, Anick Lepage
Abstract:
Insects display high nutritional value, low greenhouse gas emissions, low land use requirements and high food conversion efficiency. They can contribute to the food chain and be one of many solutions to protein shortages. Currently, in North America, nutritional entomology is under-developed, and its benefits need to be better understood in order to convince large-scale producers and consumers (for both human and agricultural needs). As such, large-scale production of mealworms offers a promising alternative to traditional sources of protein and fatty acids. To proceed orderly, more data on the nutritional values of insects are required, in order to: a) evaluate the diets of insects to improve their dietary value; b) test the breeding conditions to optimize yields; and c) evaluate the use of by-products and organic residues as food sources. Among the featured technical parameters, relative humidity (RH) percentage and temperature, optimal substrates and hydration sources are critical elements, thus establishing potential benchmarks to optimize the conversion rates of protein and fatty acids. This research establishes the combination of the most influential rearing parameters with local food residues and correlates the findings with the nutritional value of the larvae harvested. 125 adults of the same monthly age cohort per replicate are randomly selected from the mealworm breeding pool and placed to oviposit in growth chambers preset at 26°C and 65% RH. Adults are removed after 7 days. Larvae are harvested upon the apparition of the first signs of nymphosis, and batches are analyzed for their nutritional values using wet chemistry analysis. The first sample analyses include the total weight of both fresh and dried larvae, residual humidity, crude proteins (CP%), and crude fats (CF%). Further analyses are scheduled to include soluble proteins and fatty acids. Although they are consistent with previously published data, the preliminary results show no significant differences between treatments for any type of analysis. The nutritional properties of each substrate combination have not yet allowed discrimination of the most effective residue recipe. Technical issues, such as the particle size of the various substrate combinations and larvae-screen compatibility, are to be investigated, since they induced a variable percentage of lost larvae upon harvesting. Addressing those methodological issues is key to developing a standardized, efficient procedure. The aim is to provide producers with easily reproducible conditions, without incurring additional excessive expenditure on their part in terms of equipment and workforce.
Keywords: entomophagy, nutritional value, rearing parameters optimization, Tenebrio molitor
Procedia PDF Downloads 113

528 Branched Chain Amino Acid Kinesio PVP Gel Tape from Extract of Pea (Pisum sativum L.) Based on Ultrasound-Assisted Extraction Technology
Authors: Doni Dermawan
Abstract:
Modern sports competition, as a consequence of the increasing business and entertainment value of sport, demands that athletes always have excellent physical endurance. Physical exercise performed over a long time and at high intensity may pose a risk of muscle tissue damage, indicated by an increase in the enzyme creatine kinase. Branched Chain Amino Acids (BCAA) are essential amino acids composed of leucine, isoleucine, and valine, which serve to maintain muscle tissue, support the immune system, and prevent loss of coordination and muscle pain. Pea (Pisum sativum L.) is a leguminous plant rich in Branched Chain Amino Acids (BCAA): every gram of pea protein contains 82.7 mg of leucine, 56.3 mg of isoleucine, and 56.0 mg of valine. This research aims to develop BCAA from pea extract, applied in a PVP gel Kinesio tape dosage form, using ultrasound-assisted extraction. The method used in the writing of this paper is the Cochrane Collaboration Review, which includes literature studies, testing the quality of the studies, the characteristics of the data collection, analysis, interpretation of results, and clinical trials, as well as recommendations for further research. Extraction of the BCAA in pea is done using ultrasound-assisted extraction technology, with optimization variables including the extraction solvent (NaOH 0.1%), temperature (20-25°C), time (15-30 minutes), power (80 watt) and ultrasonic frequency (35 kHz). The advantages of this extraction method are a high level of solvent penetration into the cell membrane and improved mass transfer, making the BCAA separation process more efficient. The BCAA extract is then applied to the polyvinylpyrrolidone (PVP) gel powder, composed of PVP K30 and HPMC K100 dissolved in 10 mL of water-methanol (1:1) v/v. In the PVP gel Kinesio tape preparation, the BCAA in the gel is absorbed into the muscle tissue and joints; through tensile force, the tape then stimulates muscle circulation with variable pressure, so that the muscle can increase its biomechanical movement and the muscle damage indicated by the enzyme creatine kinase is prevented. Analysis and evaluation of the test preparation include interaction, thickness, weight uniformity, humidity, water vapor permeability, levels of the active substance, content uniformity, percentage elongation, stability testing, release profile, in vitro permeation and in vivo skin irritation testing.
Keywords: branched chain amino acid, BCAA, Kinesio tape, pea, PVP gel, ultrasound-assisted extraction
Procedia PDF Downloads 289

527 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation
Authors: Kausar Harun, Ahmad Azmin Mohamad
Abstract:
Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while some model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation proves to be more accurate and time-effective. Thus, integration between these two methods is essential to intimately resemble the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by the sol-gel storage method with zinc acetate dihydrate and methanol as precursor and solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO with the wurtzite hexagonal structure was produced. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a=b=3.2498 Å and c=5.2068 Å, which were later used as the main input in the first-principles calculations. By applying density functional theory (DFT) embedded in the CASTEP computer code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) yielded the structure with the lowest energy and lattice deviations. In this study, emphasis was also given to the modification of the valence electron energy levels to overcome the underestimation in DFT calculations. The Hubbard corrections for the Zn and O valence states were fixed at Ud=8.3 eV and Up=7.3 eV, respectively. Hence, the subsequent electronic and optical properties of the synthesized ZnO were calculated based on the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid, as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing was thus eliminated, as the simulation could be done through computational methods.
Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles
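To illustrate carrying the refined cell into a first-principles workflow, here is a sketch using the ASE library; the wurtzite internal parameter u = 0.382 for the O site is a typical literature value assumed for illustration, not a result from the paper.

```python
from ase.spacegroup import crystal

# Build the wurtzite ZnO cell (space group P6_3mc, no. 186) from the
# Rietveld-refined lattice parameters quoted above.
zno = crystal(
    symbols=["Zn", "O"],
    basis=[(1/3, 2/3, 0.0), (1/3, 2/3, 0.382)],   # u = 0.382 assumed
    spacegroup=186,
    cellpar=[3.2498, 3.2498, 5.2068, 90, 90, 120],
)

print(zno)                      # 4-atom hexagonal cell: 2 Zn + 2 O
zno.write("ZnO_wurtzite.cif")   # exportable as input to a DFT code such as CASTEP
```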
Procedia PDF Downloads 309

526 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations against seismic moment Mo for the modeled rupture area S, as well as the average slip Dave and the slip asperity area Sa, with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large-slip areas, (2) ruptures have a tendency to initiate in small-Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
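A minimal sketch of the scaling-relation check described above: convert seismic moment Mo to moment magnitude Mw via the standard Hanks-Kanamori relation and compare a modeled rupture area against a generic empirical area-moment scaling. The Somerville-style coefficient below is a literature form used for illustration, not the specific regression of this study.

```python
import numpy as np

def mw_from_mo(mo_nm):
    """Moment magnitude from seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return (2.0 / 3.0) * (np.log10(mo_nm) - 9.1)

def area_from_mo(mo_nm, coeff=2.23e-15):
    """Empirical rupture area [km^2] ~ coeff * Mo^(2/3), with Mo in dyne*cm."""
    mo_dyne_cm = mo_nm * 1.0e7
    return coeff * mo_dyne_cm ** (2.0 / 3.0)

mo = 5.0e19                         # example moment for an Mw ~ 7.1 event [N*m]
print(f"Mw = {mw_from_mo(mo):.2f}")
print(f"empirical rupture area ~ {area_from_mo(mo):.0f} km^2")
# A modeled rupture area far off this curve would flag an unphysical source.
```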
Procedia PDF Downloads 144

525 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model
Authors: Mostafa Zandi, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. The morning-glory spillway is one of the common spillways for discharging the overflow water behind dams; these kinds of spillways are constructed at dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function was chosen to model the wall effect, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The lower profile of the water jet is less sensitive than the upper profile of the hydraulic jet. In the pressure test, it was also found that the numerical pressure values at the lower landing number differ greatly from the experimental results. The characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. The grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values. The preferred downstream channel length was 1.5 meters, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function
Procedia PDF Downloads 77
524 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation faces a totally different media analogy. The involvement of narrative images, however, provides more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter, and photosensitive development, recorded and stamped on paper or displayed on a computer screen, and thereby concretely saved. They exist in different forms as files, data, or evidence, the ultimate appearance of events. Through the interface of media and network platforms and the particular visual field of the viewer, a bodily space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet under real, full tension. The physical space of the sebum layer blurs the fact that physical objects exist and must be established under a perceived consensus. As at a live scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes an immersive "topical similarity", leading contemporary communities, groups, and network users into a kind of illusion without presence, i.e. a non-real illusion. From the investigation and discussion of the literature, the variable characteristics of time in digital film editing and production (for example, slicing, rupture, setting, and resetting) are analyzed. The interactive eBook has a unique interactivity of "waiting-greeting" and "expectation-response" that gives the operation of the image narrative structure more interpretive functions. Works combining digital editing and interactive technology are analyzed further in concept and results. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media's handling of them cannot be severed, as shown through film, interactive art, and the discussion and analysis of practical cases. Audiences need more rational thinking about the authenticity of the texts carried by images.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 389
523 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in different ways: supervised, semi-supervised, and unsupervised. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained on the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free-speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data are used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. As expected, in-domain distances are small, while between-domain distances are large. Previous findings show that a pretrained masked language model fine-tuned on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already applied self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
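To make the domain-similarity step concrete, the sketch below estimates a linear-kernel Maximum Mean Discrepancy (MMD) between two sets of sentence embeddings, i.e. the squared distance between the domains' mean feature vectors. The embedding dimension, sample counts, and the Gaussian stand-ins for pooled transformer features are hypothetical.

```python
import numpy as np

def mmd_linear(source: np.ndarray, target: np.ndarray) -> float:
    """Linear-kernel MMD: squared Euclidean distance between the mean
    feature vectors of the source and target domains. Small values
    suggest similar domains; large values suggest a harder transfer."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

# Hypothetical pooled sentence embeddings (e.g. 768-d transformer features)
rng = np.random.default_rng(42)
twitter = rng.normal(0.0, 1.0, size=(1000, 768))  # source domain
gab = rng.normal(0.3, 1.0, size=(1000, 768))      # shifted target domain
print(f"MMD(source, source) = {mmd_linear(twitter[:500], twitter[500:]):.3f}")
print(f"MMD(source, target) = {mmd_linear(twitter, gab):.3f}")
```

As the abstract notes, the in-domain value comes out much smaller than the cross-domain one, which signals how difficult the Twitter-to-Gab transfer is.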
Procedia PDF Downloads 106
522 Analysis of Road Network Vulnerability Due to Merapi Volcano Eruption
Authors: Imam Muthohar, Budi Hartono, Sigit Priyanto, Hardiansyah Hardiansyah
Abstract:
The eruption of Merapi Volcano in Yogyakarta, Indonesia, in 2010 caused many casualties due to minimal preparedness in facing the disaster. Increasing the population's evacuation capacity and moving people to safe places become very important to minimize casualties. The regional government, through the Regional Disaster Management Agency, has divided the disaster-prone area into three parts: ring 1 at a distance of 10 km, ring 2 at a distance of 15 km, and ring 3 at a distance of 20 km from the center of Mount Merapi. The success of an evacuation is fully supported by road network infrastructure as the rescue route in an emergency. This research models the evacuation process based on the rise of refugees in ring 1, expanded to ring 2 and finally to ring 3. The model was developed using the SATURN (Simulation and Assignment of Traffic to Urban Road Networks) program, version 11.3.12W, involving 140 centroids, 449 buffer nodes, and 851 links across the Yogyakarta Special Region, and was aimed at a preliminary identification of road networks considered vulnerable to the disaster. Vulnerability was identified from changes in road network performance, in the form of flows and travel times, over ring 1, ring 2, ring 3, Sleman outside the rings, Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul. The results indicated a performance increase in the road networks in ring 2, ring 3, and Sleman outside the rings. Performance in ring 1 started to increase when the evacuation was expanded to ring 2 and ring 3. Meanwhile, the performance of the road networks in Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul decreased as the evacuation areas were expanded. The preliminary identification determined that the road networks in ring 1, ring 2, ring 3, and Sleman outside the rings are vulnerable during an evacuation following a Mount Merapi eruption. Therefore, a great deal of attention must be paid to these networks in order to face the disasters that may occur at any time.
Keywords: model, evacuation, SATURN, vulnerability
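SATURN's internal assignment formulation is proprietary, so as a generic illustration of the flow-and-travel-time performance measures the study tracks, the sketch below evaluates the standard BPR volume-delay function used in many traffic assignment models. The free-flow time, capacity, and flows are hypothetical evacuation-link values, not figures from the Yogyakarta network.

```python
def bpr_travel_time(flow: float, capacity: float, t0: float,
                    alpha: float = 0.15, beta: float = 4.0) -> float:
    """Standard BPR volume-delay function: congested travel time grows
    with the flow-to-capacity ratio, t = t0 * (1 + alpha * (v/c)^beta)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

# Hypothetical evacuation link: 10 min free-flow time, 1800 veh/h capacity
for v in (900, 1800, 2700):  # half, at, and 1.5x capacity
    t = bpr_travel_time(v, 1800, 10.0)
    print(f"flow {v:4d} veh/h -> travel time {t:5.1f} min")
```

The steep growth beyond capacity is why links absorbing expanded evacuation demand register as vulnerable.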
Procedia PDF Downloads 170
521 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine
Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are valued for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high speed diesel engines for commercial use, where the objective is to maximize acceleration while reducing exhaust emissions to meet international standards. In high torque low speed engines, the requirements are altogether different. These engines are mostly used in the maritime industry, in agriculture, in static compressor installations, etc. High torque low speed engines, by contrast, are quite often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and the atomization process in a high torque low speed diesel engine is of great importance. Evaporation in the combustion chamber has a strong effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque low speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results show a considerable effect on the efficiency of the engine. The evaporation rate of the fuel droplets increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases, because the droplet size is reduced and the vapor pressure is increased in the engine cylinder.
Keywords: diesel fuel, CFD, evaporation, multiphase
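The O'Rourke spray model tracks droplet evaporation in detail within the CFD solver; as a back-of-the-envelope illustration of how droplet size shrinks during evaporation, the sketch below uses the classical d²-law for a single droplet. The initial diameter and evaporation constant are assumed order-of-magnitude values for a burning fuel droplet, not parameters from the paper.

```python
import numpy as np

def d2_law_diameter(d0: float, k_evap: float, t: np.ndarray) -> np.ndarray:
    """Classical d^2 law: the squared droplet diameter decreases linearly
    in time, d(t)^2 = d0^2 - K*t, until the droplet is fully evaporated."""
    d_squared = d0**2 - k_evap * np.asarray(t, dtype=float)
    return np.sqrt(np.clip(d_squared, 0.0, None))

d0 = 50e-6                         # assumed initial diameter: 50 microns
k_evap = 1.0e-6                    # assumed evaporation constant, m^2/s
t = np.linspace(0.0, 2.5e-3, 6)    # first 2.5 ms after injection
print(d2_law_diameter(d0, k_evap, t))  # diameters in metres, shrinking to 0
```

A larger K (e.g. from higher vapor pressure or convective heating) shortens the droplet lifetime d0²/K, which is the qualitative trend the abstract reports.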
Procedia PDF Downloads 343
520 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure
Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff
Abstract:
Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in studies in mice, which has led to the hypothesis that interstitial fluid (ISF) in the basement membranes in the walls of cerebral arteries provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall compared to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism producing net reverse flow has not yet been discovered. Here, we address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and the displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF, and therefore solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics. Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics
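The basement-membrane layer in this kind of model is thin enough that pressure-driven ISF flow is usually treated in the lubrication limit. The sketch below evaluates the classical thin-film (plane Poiseuille) flux formula q = -h³/(12µ)·dp/dx; the layer thickness, viscosity, and pressure gradient are assumed, order-of-magnitude values rather than figures from the study.

```python
def thin_film_flux(h: float, mu: float, dpdx: float) -> float:
    """Plane Poiseuille flux per unit width in the lubrication limit:
    q = -h^3 / (12 * mu) * dp/dx. The sign of the effective pressure
    gradient sets the flow direction, which is the crux of the
    reverse-flow question raised in the abstract."""
    return -(h ** 3) / (12.0 * mu) * dpdx

h = 100e-9     # assumed basement-membrane thickness, ~100 nm
mu = 1.0e-3    # assumed ISF viscosity, close to water (Pa*s)
dpdx = -1.0e4  # assumed axial pressure gradient (Pa/m)
print(f"ISF flux per unit width: {thin_film_flux(h, mu, dpdx):.3e} m^2/s")
```

The cubic dependence on h explains why small changes in wall structure, such as age-related stiffening, can strongly affect drainage.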
Procedia PDF Downloads 526
519 Approaches to Integrating Entrepreneurial Education in School Curriculum
Authors: Kofi Nkonkonya Mpuangnan, Samantha Govender, Hlengiwe Romualda Mhlongo
Abstract:
In recent years, a noticeable and worrisome pattern has emerged in numerous developing nations: a steady and persistent rise in unemployment rates. This escalation of economic struggles has become a cause of great concern for parents who, having invested significant resources in their children's education, harboured hopes of achieving economic prosperity and stability for their families through secure employment. To effectively tackle this pressing unemployment issue, it is imperative to adopt a holistic approach, and a pivotal aspect of that approach involves incorporating entrepreneurial education seamlessly into the entire educational system. In this light, the authors explored approaches to integrating entrepreneurial education into the school curriculum, focusing on the following questions: How can an entrepreneurial mindset be promoted among learners in school? And how far have pedagogical approaches improved entrepreneurship in schools? To answer these questions, a systematic literature review underpinned by Human Capital Theory was adopted, following the three-stage guidelines of planning, conducting, and reporting. The data were sought from publishers with expansive coverage of scholarly literature, such as Sage, Taylor & Francis, Emerald, and Springer, covering publications from 1965 to 2023. The search was guided by two broad terms: promoting an entrepreneurial mindset in learners, and pedagogical strategies for enhancing entrepreneurship. It was found that learners acquire an entrepreneurial mindset through an innovative classroom environment, resilience building, and exposure to guest speakers and industry experts. Teachers can also promote entrepreneurial education through pedagogical approaches such as hands-on learning and experiential activities, role-playing, business simulation games, and creative and innovative teaching. It was recommended that the Ministry of Education develop tailored training programs and workshops aimed at empowering educators with the essential competencies and insights to deliver impactful entrepreneurial education.
Keywords: education, entrepreneurship, school curriculum, pedagogical approaches, integration
Procedia PDF Downloads 97
518 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater recession curve analysis, and a decision tree. The developed methodology is then applied to the aquifer identification of two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the time series of groundwater level observations and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater recession curve, the average rate of groundwater recession, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behavior. The decision tree combines the information obtained from the above analytical tools and optimizes the estimation of the hydrogeological structure. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is a powerful tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
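As a minimal sketch of the frequency-analysis step, the code below estimates the amplitude of the one-cycle-per-day component of an hourly groundwater level series with a discrete Fourier transform; a pronounced daily peak is the pumping signature the method looks for. The synthetic record and its 5 cm daily swing are hypothetical, not data from the two study areas.

```python
import numpy as np

def daily_amplitude(levels: np.ndarray, dt_hours: float = 1.0) -> float:
    """Amplitude of the 1 cycle/day component of a groundwater level
    series, estimated with a real FFT. A strong daily peak points to
    artificial extraction acting on the aquifer."""
    levels = np.asarray(levels, dtype=float)
    spectrum = np.fft.rfft(levels - levels.mean())
    freqs = np.fft.rfftfreq(levels.size, d=dt_hours / 24.0)  # cycles/day
    k = int(np.argmin(np.abs(freqs - 1.0)))                  # nearest 1 cpd bin
    return 2.0 * np.abs(spectrum[k]) / levels.size

# Synthetic 30-day hourly record: 5 cm pumping cycle plus noise
rng = np.random.default_rng(1)
t = np.arange(30 * 24)
levels = 10.0 + 0.05 * np.sin(2.0 * np.pi * t / 24.0) + rng.normal(0.0, 0.01, t.size)
print(f"daily amplitude ~ {daily_amplitude(levels):.3f} m")  # ~0.05 expected
```

A confined aquifer stressed by pumping would show a strong daily amplitude, while an unconfined layer dominated by rainfall recharge would not, which is the kind of evidence the decision tree then combines.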
Procedia PDF Downloads 157