Search results for: modeling and optimization
1192 Integration of Hybrid PV-Wind in Three Phase Grid System Using Fuzzy MPPT without Battery Storage for Remote Area
Authors: Thohaku Abdul Hadi, Hadyan Perdana Putra, Nugroho Wicaksono, Adhika Prajna Nandiwardhana, Onang Surya Nugroho, Heri Suryoatmojo, Soedibjo
Abstract:
Access to electricity is now a basic requirement of mankind. Unfortunately, there are still many places around the world which have no access to electricity, such as small islands, where there could potentially be a factory, a plantation, a residential area, or resorts. Many of these places might have substantial potential for energy generation, such as photovoltaic (PV) and wind turbine (WT) systems, which can be used to generate electricity independently. Solar energy and wind power are renewable energy sources that are abundant in nature and are kinds of alternative energy still developing at a rapid pace to help meet the demand for electricity. The power output of PV and WT depends on solar irradiation and wind speed, which vary with the geography of these areas. This paper presents a control methodology for a hybrid small-scale PV/Wind energy system that uses a fuzzy logic controller (FLC) for maximum power point tracking (MPPT) under different solar irradiation and wind speed conditions. This paper discusses the simulation and analysis of the generation process of the hybrid resources at the MPP and the power conditioning unit (PCU) of the photovoltaic (PV) and wind turbine (WT) connected to the three-phase low-voltage electricity grid system (380 V) without battery storage. The capacity of the sources used is 2.2 kWp PV and a 2.5 kW PMSG (Permanent Magnet Synchronous Generator) WT power rating. The modeling of the hybrid PV/Wind system, as well as the integrated power electronics components in the grid-connected system, is simulated using MATLAB/Simulink.
Keywords: fuzzy MPPT, grid-connected inverter, photovoltaic (PV), PMSG wind turbine
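To make the MPPT step above concrete, the following is a minimal sketch of a fuzzy-rule duty-cycle update for the PV side; the triangular membership functions, rule base, step size, and boost-converter sign convention are illustrative assumptions rather than the authors' tuned FLC.

```python
# Minimal sketch of a fuzzy-logic MPPT duty-cycle update (assumptions: a boost
# converter where a larger duty cycle lowers the PV operating voltage, and an
# illustrative three-set rule base rather than the authors' tuned controller).

def tri(x, a, b, c):
    """Triangular membership function peaking at b (wide a/c act as shoulders)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_mppt_step(p, v, p_prev, v_prev, d_prev, d_step=0.01):
    dp, dv = p - p_prev, v - v_prev
    slope = dp / dv if abs(dv) > 1e-6 else 0.0      # slope of the P-V curve
    # Fuzzify the slope into Negative / Zero / Positive sets.
    mu_neg = tri(slope, -1e9, -50.0, 0.0)
    mu_zero = tri(slope, -50.0, 0.0, 50.0)
    mu_pos = tri(slope, 0.0, 50.0, 1e9)
    # Rules: positive slope (left of MPP) -> decrease duty to raise voltage;
    # negative slope -> increase duty; near zero -> hold. Weighted-average
    # defuzzification over singleton consequents.
    num = mu_neg * (+d_step) + mu_zero * 0.0 + mu_pos * (-d_step)
    den = mu_neg + mu_zero + mu_pos
    d_new = d_prev + (num / den if den > 0 else 0.0)
    return min(max(d_new, 0.05), 0.95)              # keep duty cycle in bounds

# Example update from two consecutive PV measurements.
print(fuzzy_mppt_step(p=410.0, v=31.0, p_prev=400.0, v_prev=30.5, d_prev=0.45))
```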
Procedia PDF Downloads 355
1191 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management
Authors: Thewodros K. Geberemariam
Abstract:
The rapid physical expansion of urbanization coupled with aging infrastructure presents unique decision-making and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain the existing aging urban drainage infrastructure systems to keep up with the demands. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering and technical practices. This robust decision-making shall rely on sound, complete, current and relevant data that enable asset valuation, impairment testing, lifecycle modeling, and forecasting across the multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract the operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight on the current and future states of the network. This paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each alternative, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near real-time data for urban drainage infrastructure assets that have previously not benefited from computational intelligence and machine learning advancements.
Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management
Procedia PDF Downloads 152
1190 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle
Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu
Abstract:
Yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of the yaw angle has been extensively studied in the existing literature, it remains a major challenge to simultaneously achieve a reliable and cost-effective solution in complex urban environments. This paper proposes a multi-stage learning framework to estimate the yaw angle with a monocular camera, which can deal with the challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions, which is particularly suitable for modeling the changing patterns of yaw angle under different driving maneuvers and can inherently enhance the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which can further improve the estimation accuracy because changes in the deflection angle are relatively easier to recognize across consecutive frames. Afterward, the yaw angle can be obtained by combining the estimated deflection angle and the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework presents high reliability while maintaining better accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.
Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle
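As a concrete illustration of the last stage only, the sketch below maps a sequence of learned road-region latent codes to a deflection-angle estimate with a GRU; the latent dimension, hidden size, and tensor shapes are assumptions, and the road detection network and VAE stages are not reproduced.

```python
# A GRU regression head over per-frame latent codes (e.g., VAE encodings of the
# detected road region). Dimensions and shapes below are illustrative assumptions.
import torch
import torch.nn as nn

class DeflectionGRU(nn.Module):
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        self.gru = nn.GRU(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)             # deflection angle (rad)

    def forward(self, z_seq):                        # z_seq: (batch, frames, latent_dim)
        out, _ = self.gru(z_seq)
        return self.head(out[:, -1, :]).squeeze(-1)  # use the last time step

model = DeflectionGRU()
z = torch.randn(8, 10, 32)                           # 8 clips of 10 frames of latent codes
print(model(z).shape)                                # torch.Size([8])
```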
Procedia PDF Downloads 144
1189 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case Study in Lima
Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano
Abstract:
In big urbanized cities of Latin America, motor vehicles have priority over non-motor vehicles and pedestrians. This is an important problem that affects people's health and quality of life: the lack of inclusion of pedestrians makes it difficult for them to move smoothly and safely, since the city has been planned for the transit of motor vehicles. Faced with the new trend towards sustainable and economical transport, the city is forced to develop infrastructure in order to incorporate pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and the optimization of the traffic light cycle based on simulation in Synchro software, in order to improve the flow of non-motorized vehicles. The evaluation is of the microscopic type; for this reason, field data were collected, such as vehicular, pedestrian, and non-motorized vehicle user demand. The values of speed and travel time are used to represent the current scenario, which contains the existing problem. These data allow the creation of a microsimulation model in Vissim software, which is later calibrated and validated so that its behavior is similar to reality. The results of this model are compared with the efficiency parameters of the proposed model; these parameters are the queue length, the travel speed, and mainly the travel times of the users at this intersection. The results reflect a reduction of 27% in travel time, that is, an improvement of the proposed model over the current one for this major avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in the quality of life of users.
Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time
Procedia PDF Downloads 176
1188 Reducing the Imbalance Penalty Through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey
Authors: Hayriye Anıl, Görkem Kar
Abstract:
In addition to being rich in renewable energy resources, Turkey is one of the countries that promise potential in geothermal energy production with its high installed power, cheapness, and sustainability. Increasing imbalance penalties become an economic burden for organizations since geothermal generation plants cannot maintain the balance of supply and demand due to the inadequacy of the production forecasts given in the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed capacity in terms of geothermal, was estimated for the first and second weeks of March; the imbalance penalties were then calculated with these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and the dataset created by extracting new features from it with the feature engineering method. According to the results, Support Vector Regression from the traditional machine learning models outperformed the other models and exhibited the best performance. In addition, the estimation results on the feature engineering dataset showed lower error rates than those on the basic dataset. It is concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, which is optimal and profitable for the organization.
Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting
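A minimal sketch of the best-performing approach reported above (Support Vector Regression for day-ahead generation forecasting) is shown below; the file name, column names, and lag features are assumptions standing in for the plant data and the authors' feature engineering.

```python
# Minimal SVR forecasting sketch; "geothermal_generation.csv" and its columns
# are assumed placeholders, not the Zorlu Natural Electricity Generation data.
import pandas as pd
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("geothermal_generation.csv", parse_dates=["timestamp"])
# Simple feature engineering: calendar features and lagged generation.
df["hour"] = df["timestamp"].dt.hour
df["dayofweek"] = df["timestamp"].dt.dayofweek
for lag in (24, 48, 168):                            # 1 day, 2 days, 1 week
    df[f"gen_lag_{lag}"] = df["generation_mwh"].shift(lag)
df = df.dropna()

features = ["hour", "dayofweek", "gen_lag_24", "gen_lag_48", "gen_lag_168"]
train = df[df["timestamp"] < "2022-03-01"]
test = df[df["timestamp"] >= "2022-03-01"]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(train[features], train["generation_mwh"])
pred = model.predict(test[features])
print("MAE (MWh):", mean_absolute_error(test["generation_mwh"], pred))
```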
Procedia PDF Downloads 110
1187 Analysis of Cycling Accessibility on Chengdu Tianfu Greenway Based on Improved Two-Step Floating Catchment Area Method: A Case Study of Jincheng Greenway
Authors: Qin Zhu
Abstract:
Against the background of Chengdu's accelerating construction of a beautiful and livable park city, the Tianfu greenway system serves as an important support system for park construction across the whole region, and its accessibility is one of the key indicators for measuring the effectiveness of greenway construction. In recent years, cycling has become an important transportation mode for residents travelling to the greenways because of its low-carbon, healthy and convenient characteristics, and the study of greenway accessibility under the cycling mode can provide reference suggestions for the optimization and improvement of greenways. Taking the Jincheng Greenway in Chengdu City as an example, the Baidu Maps Application Programming Interface (API) and a questionnaire survey were used to improve the two-step floating catchment area (2SFCA) method along the three dimensions of search threshold, supply side and demand side, in order to calculate the cycling accessibility of the greenway and to explore its spatial matching relationship with population density, the number of entrances and comprehensive attractiveness. The results show that: 1) the distribution of greenway accessibility in Jincheng shows a pattern of "high in the south and low in the north, high in the west and low in the east"; 2) the spatial match between greenway accessibility and the population density of residential areas is imbalanced, and there is a significant positive correlation, with a high degree of match, between accessibility and both the number of selectable greenway access points in residential areas and the overall attractiveness of greenways. On this basis, it is proposed to give priority to the mismatch areas to alleviate the contradiction between supply and demand, optimize the greenway access points to improve traffic connection, and enhance the comprehensive quality and service capacity of the greenway, in order to further improve the cycling accessibility of the Jincheng Greenway and improve the spatial allocation of greenway resources.
Keywords: accessibility, Baidu Maps API, cycling, greenway, 2SFCA
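For reference, a minimal sketch of the basic 2SFCA computation that the study improves upon is given below; the travel-time matrix, supply scores, and the 15-minute cycling threshold are illustrative assumptions.

```python
# Basic two-step floating catchment area (2SFCA) sketch with made-up inputs.
import numpy as np

def two_step_fca(travel_time, supply, demand, threshold):
    """travel_time[i, j]: cycling time from residential area i to greenway
    access point j; supply[j]: capacity/attractiveness of access point j;
    demand[i]: population of residential area i."""
    within = travel_time <= threshold            # catchment membership
    # Step 1: supply-to-demand ratio of each access point.
    demand_served = within.T @ demand            # population inside each catchment
    ratio = np.divide(supply, demand_served,
                      out=np.zeros_like(supply, dtype=float),
                      where=demand_served > 0)
    # Step 2: sum the ratios of all access points reachable from each area.
    return within @ ratio

rng = np.random.default_rng(0)
tt = rng.uniform(2, 40, size=(5, 3))             # minutes by bicycle
acc = two_step_fca(tt, supply=np.array([10.0, 6.0, 8.0]),
                   demand=np.array([1200.0, 800.0, 1500.0, 900.0, 600.0]),
                   threshold=15.0)               # 15-minute cycling catchment
print(acc)
```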
Procedia PDF Downloads 86
1186 PitMod: The Lorax Pit Lake Hydrodynamic and Water Quality Model
Authors: Silvano Salvador, Maryam Zarrinderakht, Alan Martin
Abstract:
Open pits, which are the result of mining, fill with water over time until the water reaches the elevation of the local water table, generating mine pit lakes. There are several specific regulations on the water quality of pit lakes, and mining operations should keep the quality of groundwater above pre-defined standards. Therefore, an accurate, acceptable numerical model predicting pit lakes' water balance and water quality is needed in advance of mine excavation. We continue analyzing and developing the model introduced by Crusius, Dunbar, et al. (2002) for pit lakes. This model, called "PitMod", simulates the physical and geochemical evolution of pit lakes over time scales ranging from a few months up to a century or more. Here, a lake is approximated as one-dimensional, horizontally averaged vertical layers. PitMod calculates the time-dependent vertical distribution of physical and geochemical pit lake properties, like temperature, salinity, conductivity, pH, trace metals, and dissolved oxygen, within each model layer. This model considers the effects of pit morphology, climate data, multiple surface and subsurface (groundwater) inflows/outflows, precipitation/evaporation, surface ice formation/melting, vertical mixing due to surface wind stress, convection, background turbulence and equilibrium geochemistry, using PHREEQC and linking it to the geochemical reactions. PitMod, which has been used and validated in over 50 mine projects since 2002, incorporates physical processes like those found in other lake models such as DYRESM (Imerito 2007). However, unlike DYRESM, PitMod also includes geochemical processes, pit wall runoff, and other effects. In addition, PitMod is actively under development and can be customized as required for a particular site.
Keywords: pit lakes, mining, modeling, hydrology
Procedia PDF Downloads 158
1185 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders
Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai
Abstract:
Due to the numerous advantages of steel corrugated web girders, their application field is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly larger than that of conventional I-girders with flat webs; thus, the amount of structural steel material can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for the determination of the lateral torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors made a thorough investigation of the LTB resistance of corrugated web girders. Finite element (FE) simulations have been performed to develop new design formulas for the determination of the LTB resistance of trapezoidally corrugated web girders. The FE model is developed considering geometrical and material nonlinear analysis using equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections involve the initial geometric imperfections and residual stresses coming from rolling, welding and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary magnitudes regarding only the first eigenmode shape imperfections. With the help of the validated FE model, an extended parametric study is carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated by the FE model. The critical moments from the FE calculations are compared to previous analytical calculation proposals. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals are developed for the determination of the LTB resistance of trapezoidally corrugated web girders through a modification factor on the design method related to conventional flat web girders.
Keywords: corrugated web, lateral torsional buckling, critical moment, FE modeling
Procedia PDF Downloads 283
1184 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design
Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi
Abstract:
The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski’s Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.
Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening
Procedia PDF Downloads 23
1183 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in Euphrates-Tigris Basin
Authors: Goksel Ezgi Guzey, Bihrat Onoz
Abstract:
The scarcity of streamflow gauging stations and the increasing effects of global warming make designing water management systems very difficult. This study is a significant contribution to assessing regional regression models for estimating streamflow. In this study, simulated meteorological data were related to the observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 with the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used with 0.11 and 0.22 grids, respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, where the observed streamflow was the dependent variable. The variability of streamflow was then calculated with 5-6 meteorological variables and watershed characteristics such as area and elevation prior to the application. Following the regression analysis of 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped the stations into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.
Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower
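A minimal sketch of the ordinary least squares step described above is given below; the variable names and file layout are assumptions, and the study fits separate equations per station cluster and per domain/regional-climate-model combination.

```python
# OLS regression of observed streamflow on simulated meteorological variables
# plus watershed characteristics; column names are assumed placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cluster1_stations.csv")            # one row per station-month
predictors = ["precip", "temp_mean", "evap", "soil_moisture",
              "catchment_area_km2", "mean_elevation_m"]
X = sm.add_constant(df[predictors])
y = df["observed_flow_m3s"]

ols = sm.OLS(y, X).fit()
print(ols.summary())

# The fitted equation can then be driven with CORDEX-simulated predictors
# to project 2020-2100 streamflow for the stations in this cluster.
future = pd.read_csv("cluster1_rca4_projection.csv")
projected_flow = ols.predict(sm.add_constant(future[predictors]))
```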
Procedia PDF Downloads 129
1182 La₀.₈Ba₀.₂FeO₃ Perovskite as an Additive in the Three-Way Catalyst (TWCs) for Reduction of PGMs Loading
Authors: Mahshid Davoodpoor, Zahra Shamohammadi Ghahsareh, Saeid Razfar, Alaleh Dabbaghi
Abstract:
Nowadays, air pollution has become a topic of great concern all over the world. One of the main sources of air pollution is automobile exhaust gas, which introduces large amounts of toxic gases, including CO, unburned hydrocarbons (HCs), NOx, and non-methane hydrocarbons (NMHCs), into the air. The application of three-way catalysts (TWCs) is still the most effective strategy to mitigate the emission of these pollutants. Due to the stringent environmental regulations, which continuously become stricter, studies on TWCs are ongoing despite several years of research and development. This arises from the washcoat complexity and the large number of parameters involved in the redox reactions. The main objectives of these studies are the optimization of the washcoat formulation and the investigation of different coating modes. Perovskite (ABO₃), as a promising class of materials, has unique features that make it versatile as an alternative to the mixed oxides commonly used in washcoats. High catalytic activity for oxidation reactions and a relatively high oxygen storage capacity are important properties of perovskites in catalytic applications. Herein, La₀.₈Ba₀.₂FeO₃ perovskite material was synthesized using the co-precipitation method and characterized by XRD, ICP, and BET analysis. The effects of the synthesis conditions, including the B-site metal (Fe and Co), metal precursor concentration, and dopant (Ba), on the phase purity of the products were examined. The selected perovskite sample was used as one of the components in the TWC formulation to evaluate its catalytic performance through light-off, oxygen storage capacity, and emission analysis. Results showed a remarkable increase in oxygen storage capacity and also revealed that T50 and the emissions of CO, HC, and NOx were reduced in the presence of the perovskite structure, which confirms the enhancement of catalytic performance for the new washcoat formulation. This study shows the bright future of advanced oxide structures in TWCs.
Keywords: perovskite, three-way catalyst, PGMs, PGMs reduction
Procedia PDF Downloads 67
1181 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race
Authors: Joonas Pääkkönen
Abstract:
In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.
Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling
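A minimal sketch of the Fenton-Wilkinson step is given below: the sum of independent log-normal leg-times is approximated by a single log-normal whose first two moments match, and the expected place then follows a scaled log-normal CDF of the changeover time; the parameter values are illustrative, not the Jukola 2019 estimates.

```python
# Fenton-Wilkinson approximation of a changeover time and the resulting
# sigmoidal place prediction; leg-time parameters and team count are assumed.
import numpy as np
from scipy.stats import lognorm

def fenton_wilkinson(mus, sigmas):
    """Log-normal (mu, sigma) matching the first two moments of a sum of
    independent log-normal leg-times log N(mu_k, sigma_k^2)."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    mean_sum = np.sum(np.exp(mus + 0.5 * sigmas**2))
    var_sum = np.sum((np.exp(sigmas**2) - 1.0) * np.exp(2.0 * mus + sigmas**2))
    sigma2 = np.log(1.0 + var_sum / mean_sum**2)
    mu = np.log(mean_sum) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

# Changeover time after three legs (illustrative leg-time parameters, minutes).
mu, sigma = fenton_wilkinson(mus=[4.1, 4.3, 4.0], sigmas=[0.25, 0.30, 0.20])

# Sigmoidal place prediction: expected place ~ N_teams * CDF of changeover time.
n_teams, t = 1700, 200.0                       # assumed team count, changeover time
place = n_teams * lognorm.cdf(t, s=sigma, scale=np.exp(mu))
print(round(place))
```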
Procedia PDF Downloads 125
1180 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices
Authors: Amani Abdallah, Isam Shahrour
Abstract:
The quality of drinking water is a major concern for public health. The control of this quality is generally performed in the laboratory, which requires a long time. This type of control is not adapted to accidental pollution from sudden events, which can have serious consequences for population health. Therefore, it is of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for 'Smart and Sustainable Cities' with a particular focus on the supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille1) including innovative equipment for real-time detection of abnormal events, such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and their effectiveness on a network prototype of 50 m length. This part included the evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integral with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological contamination (Escherichia coli). All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.
Keywords: distribution system, drinking water, refraction index, sensor, real-time
Procedia PDF Downloads 355
1179 Efficient Energy Extraction Circuit for Impact Harvesting from High Impedance Sources
Authors: Sherif Keddis, Mohamed Azzam, Norbert Schwesinger
Abstract:
Harvesting mechanical energy from footsteps or other impacts is one possibility for enabling wireless autonomous sensor nodes. These can be used for highly efficient control of connected devices such as lights, security systems, air conditioning systems or other smart home applications. They can also be used for accurate location or occupancy monitoring. Converting the mechanical energy into useful electrical energy can be achieved using the piezoelectric effect, which offers simple harvesting setups and low deflections. The challenge facing piezoelectric transducers is the achievable amount of energy per impact, in the lower mJ range, and the management of such low energies. Simple setups for energy extraction, such as a full-wave bridge connected directly to a capacitor, are problematic due to the mismatch between high-impedance sources and low-impedance storage elements. Efficient energy circuits for piezoelectric harvesters are commonly designed for vibration harvesters and require periodic input energies with predictable frequencies. Due to the sporadic nature of impact harvesters, such circuits are not well suited. This paper presents a self-powered circuit that avoids the impedance mismatch during energy extraction by disconnecting the load until the source reaches its charge peak. The switch is implemented with passive components and works independently of the input frequency. Therefore, this circuit is suited for impact harvesting and sporadic inputs. For the same input energy, this circuit stores 150% of the energy compared with a capacitor connected directly to a bridge rectifier. The total efficiency, defined as the ratio of the energy stored on a capacitor to the available energy measured across a matched resistive load, is 63%. Although the resulting energy is already sufficient to power certain autonomous applications, further optimization of the circuit is still under investigation in order to improve the overall efficiency.
Keywords: autonomous sensors, circuit design, energy harvesting, energy management, impact harvester, piezoelectricity
Procedia PDF Downloads 154
1178 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data
Authors: Arjun G. Koppad
Abstract:
The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest above-ground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point center quadrate (PCQ) method was used to select the trees and collect the tree growth parameters, viz., tree height, diameter at breast height (DBH), and diameter at the tree base. The tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, the LULC classification was done using the Freeman-Durden, Yamaguchi and Pauli polarimetric decompositions. It was observed that the Freeman-Durden decomposition gave the best LULC classification, with an accuracy of 88 percent. An attempt was made to estimate the above-ground biomass using SAR backscatter. The ALOS-2 PALSAR-2 L-band fully polarimetric quad-pol SAR data (HH, HV, VV & VH) were used. A SAR backscatter-based regression model was implemented to retrieve the forest above-ground biomass of the study area. Cross-polarization (HV) showed a good correlation with forest above-ground biomass. Multiple linear regression analysis was done to estimate the above-ground biomass of the natural forest areas of Joida taluk. Among the different polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV polarization shows a good correlation between field and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB for dense, moderately dense, and sparse forests.
Keywords: forest, biomass, LULC, backscatter, SAR, regression
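A minimal sketch of the multiple linear regression relating L-band backscatter to plot-level above-ground biomass is shown below; the file and column names and the dB values are assumptions standing in for the ALOS-2 PALSAR-2 extraction over the 30 field plots.

```python
# Multiple linear regression of plot AGB on HH and HV backscatter; the CSV
# and its columns are assumed placeholders for the field/SAR data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

plots = pd.read_csv("field_plots_backscatter.csv")   # 30 sampled plots
X = plots[["sigma0_HH_db", "sigma0_HV_db"]]          # HH & HV combination
y = plots["agb_t_ha"]

reg = LinearRegression().fit(X, y)
pred = reg.predict(X)
rmse = np.sqrt(mean_squared_error(y, pred))
print(f"R2 = {r2_score(y, pred):.3f}, RMSE = {rmse:.1f} t/ha")
```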
Procedia PDF Downloads 26
1177 Study of University Course Scheduling for Crowd Gathering Risk Prevention and Control in the Context of Routine Epidemic Prevention
Authors: Yuzhen Hu, Sirui Wang
Abstract:
As a training base for intellectual talent, universities have large numbers of students. Teaching is a primary activity in universities, and during the teaching process, large numbers of people gather both inside and outside the teaching buildings, posing a strong risk of close contact. The class schedule is the fundamental basis for teaching activities in universities and plays a crucial role in the management of teaching order. Different class schedules can lead to varying degrees of indoor gathering and different trajectories of class attendees. In recent years, highly contagious diseases have frequently occurred worldwide, and how to reduce the risk of infection has always been a hot issue related to public safety. 'Reducing gatherings' is one of the core measures in epidemic prevention and control, and it can be pursued through scientific scheduling in specific environments. Therefore, the scientific prevention and control goal can be achieved by considering the reduction of the risk of excessive gathering of people when arranging the course schedule. First, we address the issue of crowd gathering along various pathways on campus and, with the goals of minimizing congestion and maximizing teaching effectiveness, establish a nonlinear mathematical model. Next, we design an improved genetic algorithm that incorporates real-time evacuation operations based on tracking search and multidimensional positive gradient cross-mutation operations, considering the characteristics of outdoor crowd evacuation. Finally, we apply undergraduate course data from a university in Harbin to conduct a case study, which compares and analyzes the effects of the algorithm improvement on the optimization of gathering situations and explores the impact of path blocking on the degree of gathering on other pathways.
Keywords: the university timetabling problem, risk prevention, genetic algorithm, risk control
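For illustration, a minimal generic genetic-algorithm sketch for assigning course sections to timeslots so as to reduce simultaneous gatherings follows; the toy fitness function and simple operators stand in for the paper's improved tracking-search and multidimensional positive gradient cross-mutation operators, which are not reproduced here.

```python
# Toy GA for timetabling: chromosomes assign each course to a timeslot, and the
# fitness penalizes the peak number of students gathered in any single timeslot.
import random

N_COURSES, N_SLOTS = 40, 10
ENROLL = [random.randint(20, 120) for _ in range(N_COURSES)]   # students per course

def fitness(schedule):
    load = [0] * N_SLOTS
    for course, slot in enumerate(schedule):
        load[slot] += ENROLL[course]
    return -max(load)                                  # smaller peak is better

def crossover(a, b):
    cut = random.randrange(1, N_COURSES)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return [random.randrange(N_SLOTS) if random.random() < rate else g for g in s]

pop = [[random.randrange(N_SLOTS) for _ in range(N_COURSES)] for _ in range(60)]
for _ in range(200):                                   # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(50)]

print("Peak gathering after optimization:", -fitness(max(pop, key=fitness)))
```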
Procedia PDF Downloads 91
1176 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City
Authors: Genevaive Melendres
Abstract:
School administrators face the reality of teachers losing their engagement, or schools losing the teachers. This study was therefore conducted to identify a structural model that best predicts the work engagement of private secondary teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools were involved in the completion of four survey instruments, namely the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scales. Data were analyzed using frequency distribution, mean, standard deviation, the t-test for independent samples, Pearson r, stepwise multiple regression analysis, and structural equation modeling. Results show that schools have high levels of the organizational climate dimensions; leaders often show work-oriented and people-oriented behavior; teachers have high normative commitment, and they are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to and predict work engagement, whereas commitment did not show any relationship. This study underscores the relative effects of three variables on the work engagement of teachers. After testing the network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. The noteworthy findings suggest that principals pay attention to and consistently evaluate their behavior, for this best predicts the work engagement of the teachers. The study provides value to administrators who take decisions and create conditions in which teachers derive fulfillment.
Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement
Procedia PDF Downloads 273
1175 Screening and Optimization of Conditions for Pectinase Production by Aspergillus Flavus
Authors: Rumaisa Shahid, Saad Aziz Durrani, Shameel Pervez, Ibatsam Khokhar
Abstract:
Food waste is a prevalent issue in Pakistan, with over 40 percent of food discarded annually. Despite their decay, rotting fruits retain residual nutritional value that is consumed by microorganisms, notably fungi and bacteria. Fungi, preferred for their extracellular enzyme release, are gaining prominence, particularly for pectinase production. This enzyme offers several advantages, including clarifying juices by breaking down pectic compounds. In this study, three Aspergillus flavus isolates derived from decomposed fruits and manure were selected for pectinase production. The primary aim was to isolate fungi from diverse waste sources, identify the isolates, and assess their capacity for pectinase production. Identification was done through morphological characteristics with the help of light microscopy and scanning electron microscopy (SEM). Pectinolytic potential was screened using pectin minimal salt agar (PMSA) medium, comparing clear zone diameters among isolates. Optimization of substrate (lemon and orange peel powder) concentrations, pH, temperature, and incubation period aimed to enhance pectinase yield. Spectrophotometry enabled quantitative analysis. The temperature was set at room temperature (28 ºC). The optimal conditions for Aspergillus flavus strain AF1 (isolated from mango) included a pH of 5, an incubation period of 120 hours, and substrate concentrations of 3.3% for orange peels and 6.6% for lemon peels. For AF2 and AF3 (both isolated from soil), the ideal pH and incubation period were the same as for AF1, i.e., pH 5 and 120 hours. However, their optimized substrate concentrations varied, with AF2 showing maximum activity at 3.3% for orange peels and 6.6% for lemon peels, while AF3 exhibited its peak activity at 6.6% for orange peels and 8.3% for lemon peels. Among the isolates, AF1 demonstrated superior performance under these conditions.
Keywords: pectinase, lemon peel, orange peel, Aspergillus flavus
Procedia PDF Downloads 72
1174 Meanings and Concepts of Standardization in Systems Medicine
Authors: Imme Petersen, Wiebke Sick, Regine Kollek
Abstract:
In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges regarding data handling and processing. Tools based on bioinformatics have been developed to resolve the emerging problems of systematizing, standardizing and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been devoted to data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate on standardization in science and technology studies (STS), which rather emphasizes the dynamics, contexts and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medicine approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain a deeper insight into standard operating procedures not only in systems medicine, but also beyond.
Keywords: data, science and technology studies (STS), standardization, systems medicine
Procedia PDF Downloads 341
1173 Analysis of the Detachment of Water Droplets from a Porous Fibrous Surface
Authors: Ibrahim Rassoul, E-K. Si Ahmed
Abstract:
The growth, deformation, and detachment of fluid droplets adherent to solid substrates is a problem of fundamental interest with numerous practical applications. Of specific interest in this proposal is the problem of a droplet on a fibrous, hydrophobic substrate subjected to body or external forces (gravity, convection). The past decade has seen tremendous advances in proton exchange membrane fuel cell (PEMFC) technology. However, there remain many challenges to bringing commercially viable stationary PEMFC products to the market. PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On the one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause 'flooding' (that is, the pore space is filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The aim of this work is to investigate the stability of a liquid water droplet emerging from a GDL pore, to gain fundamental insight into the instability process leading to detachment. The approach will combine analytical and numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, water droplet, gas diffusion layer, contact angle, surface tension
Procedia PDF Downloads 252
1172 Ontology-Based Fault Detection and Diagnosis System Querying and Reasoning Examples
Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes
Abstract:
One of the strongholds of the ubiquitous efforts related to energy conservation and energy efficiency improvement is the retrofit of high energy consumers in buildings. In general, HVAC systems are the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various Fault Detection and Diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to application at a single device/unit level. In the case of more complex systems, where multiple devices are operating in the context of the same building, significant energy efficiency improvements can only be achieved through the application of comprehensive FDD systems relying on additional higher-level knowledge, such as their geographical location, served area, and their intra- and inter-system dependencies. This paper presents a comprehensive FDD system that relies on the utilization of a common knowledge repository that stores all critical information. The discussed system is deployed as a test-bed platform at the Fiumicino and Malpensa airports in Italy. This paper aims at presenting the advantages of implementing the knowledge base through the utilization of an ontology and offers improved functionalities of such a system through examples of typical queries and reasoning that enable the derivation of high-level energy conservation measures (ECM). Therefore, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high-level irregularities in the operation of airport heating/cooling plants is discussed, and an estimation of energy savings is reported.
Keywords: airport ontology, knowledge management, ontology modeling, reasoning
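As an example of the kind of query discussed above, a minimal sketch of querying an airport HVAC ontology with rdflib is shown below; the class and property names (ex:Chiller, ex:servesZone, ex:hasPowerReading, ex:outsideTemperature) are hypothetical and do not come from the actual Fiumicino/Malpensa ontologies.

```python
# Query a (hypothetical) airport HVAC ontology for chillers running at high
# power while the outside temperature is low, a simple high-level irregularity.
from rdflib import Graph

g = Graph()
g.parse("airport_hvac.ttl", format="turtle")     # assumed ontology instance file

query = """
PREFIX ex: <http://example.org/airport#>
SELECT ?chiller ?zone ?power
WHERE {
    ?chiller a ex:Chiller ;
             ex:servesZone ?zone ;
             ex:hasPowerReading ?power .
    ?zone ex:outsideTemperature ?tout .
    FILTER (?power > 50 && ?tout < 10)           # cooling hard in cold weather
}
"""
for chiller, zone, power in g.query(query):
    print(f"Possible mal-operation: {chiller} serving {zone} at {power} kW")
```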
Procedia PDF Downloads 538
1171 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in educational data mining techniques to find new hidden information in students' learning behavior, particularly to uncover the early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying the features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. The hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
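A minimal sketch of the majority-vote ensemble found best above is given below; the merged feature file, the target column, and the choice of base learners are assumptions for illustration.

```python
# Hard-voting (majority vote) ensemble over merged e-learning + student
# information system features; file and column names are assumed placeholders.
import pandas as pd
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("students_merged.csv")
X, y = df.drop(columns=["at_risk"]), df["at_risk"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("svm", SVC(kernel="rbf")),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="hard")                                   # majority vote
ensemble.fit(X_tr, y_tr)
print("Accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```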
Procedia PDF Downloads 108
1170 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading
Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth
Abstract:
The ultimate load analysis of RC pile groups has assumed a lot of significance under liquefying soil conditions, especially following the post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations of pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation, in turn, is related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased amounts of lateral loading, a precise modeling of the inelastic behavior of the pile cross-section is done, considering the complete stress-strain behavior of concrete, with and without confinement, and of the reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.
Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group
Procedia PDF Downloads 162
1169 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
A blast furnace is a counter-current process in which burden descends from the top and hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. This situation becomes more adverse when blast furnaces are operated at a low coke rate and high coal injection rate with adverse raw materials like high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, which is close to world benchmarks, and expand profitability. To sustain this regime, the elimination of blast furnace irregularities such as hanging, channeling, and scaffolding is essential. In this paper, the sustaining of a zero-hanging spell for three consecutive years with low coke rate operation, achieved through improvements in burden characteristics, burden distribution, changes in the slag regime, casting practices and adequate automation of the furnace operation, is illustrated. Models have been created to improve understanding of the blast furnace process. A model has been developed to predict and maintain the slag viscosity in the desired range to attain proper burden permeability. A channeling prediction model has also been developed to recognize channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at the H Blast Furnace of Tata Steel, Jamshedpur, and thus in achieving process stability for the last three years.
Keywords: hanging, channelling, blast furnace, coke
Procedia PDF Downloads 196
1168 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3-DOF Parallel Manipulator
Authors: Di Yao, Gunther Prokop, Kay Buttner
Abstract:
Dynamic parameters, including the center of gravity, mass and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision tests and the real-time control of vehicle active systems. To identify the important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig), which possesses three rotational degrees of freedom, is proposed. To realize the kinematic characteristics of the conceptual parallel manipulator, a kinematic analysis consisting of inverse kinematics and singularity architecture is carried out. Based on Euler's rotation equations for rigid-body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, a parameter optimization process that searches for the optimal exciting trajectory of the parallel manipulator is conducted in the following section. For this purpose, 321-Euler angles defined by a parameterized finite Fourier series are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thereby achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by employing an iterative algorithm based on MATLAB®. Meanwhile, the iterative algorithm ensures that the parallel manipulator remains in an achievable working state during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory
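A minimal sketch of the trajectory-optimization idea is given below: Fourier-series coefficients are searched to minimize the condition number of the identification matrix. The regressor built here from angles, velocities and accelerations is a placeholder, since the actual measurement matrix follows from the Euler rigid-body dynamics of the manipulator, and the optimizer differs from the authors' MATLAB® iterative algorithm.

```python
# Search finite-Fourier-series coefficients that minimize the condition number
# of a (placeholder) identification matrix built along the exciting trajectory.
import numpy as np
from scipy.optimize import minimize

T, N_H = 10.0, 3                                 # trajectory period, harmonics
W0 = 2 * np.pi / T                               # base angular frequency
t = np.linspace(0.0, T, 200)

def trajectory(coeffs):
    """321-Euler angles as finite Fourier series; coeffs has shape (3, 2*N_H)."""
    c = coeffs.reshape(3, 2 * N_H)
    ang = np.zeros((3, t.size))
    vel = np.zeros_like(ang)
    acc = np.zeros_like(ang)
    for k in range(1, N_H + 1):
        a, b = c[:, 2*k-2:2*k-1], c[:, 2*k-1:2*k]
        ang += a * np.sin(k*W0*t) + b * np.cos(k*W0*t)
        vel += k*W0 * (a * np.cos(k*W0*t) - b * np.sin(k*W0*t))
        acc += (k*W0)**2 * (-a * np.sin(k*W0*t) - b * np.cos(k*W0*t))
    return ang, vel, acc

def condition_number(coeffs):
    ang, vel, acc = trajectory(coeffs)
    # Placeholder regressor: one block row per time sample (NOT the real
    # rigid-body measurement matrix, which depends on the manipulator dynamics).
    W = np.column_stack([acc.T, vel.T * np.abs(vel.T), np.sin(ang.T)])
    return np.linalg.cond(W)

x0 = 0.1 * np.random.default_rng(1).standard_normal(3 * 2 * N_H)
res = minimize(condition_number, x0, method="Nelder-Mead",
               options={"maxiter": 2000})
print("Optimized condition number:", res.fun)
```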
Procedia PDF Downloads 266
1167 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers
Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan
Abstract:
Aptamers are recently discovered artificial oligonucleotides, ~80-100 bp long, that have not only demonstrated applications in therapeutics but are also used tremendously in diagnostic and sensing applications to detect different biomarkers and drugs. Synthesizing aptamers against proteins or genomic templates is comparatively feasible in the laboratory, but aptamers against drugs or other chemical targets require major specification and proper optimization and validation. One has to optimize all selection, amplification, and characterization steps of the end product, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) for random oligonucleotide pool synthesis and further used the pool in Systematic Evolution of Ligands by Exponential Enrichment (SELEX) for the synthesis of anti-psychotic drug-based aptamers. Anti-psychotic drugs are major tranquilizers used to control psychosis and maintain proper cognitive function. Despite their low medical use, their misuse may lead to severe medical conditions such as addiction and can promote crime, with social and economic impact. In this work, we have approached the in-vitro SELEX method for the synthesis of ssDNA aptamers against anti-psychotic drugs (in this case, the 'target'). The study was performed in three stages. The first stage included the synthesis of a random oligonucleotide pool via asymmetric PCR, where the end product was analyzed with electrophoresis and purified for further stages. The purified oligonucleotide pool was incubated in SELEX buffer, and partition was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partition, and significant results were obtained. After the repetitive partition and amplification steps of the target-specific oligonucleotides, the final stage included sequencing of the end product. We can confirm the specific sequences for the anti-psychotic drugs, which will be further used in diagnostic applications in clinical and forensic settings.
Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX
Procedia PDF Downloads 135
1166 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point-of-sale (POS) systems make it possible to record the daily purchasing behaviors of customers as an identification point-of-sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and other nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain the detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. During the modeling process, three primary problems existed: the incomparability of customer values, multicollinearity among the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable for evaluating the influence of the other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
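For reference, a minimal sketch of Huff's gravity model used to weight competing stores is given below; the store attractiveness values, distances, and decay exponent are illustrative assumptions.

```python
# Huff's gravity model: the probability that a customer at home location i
# visits store j is proportional to attractiveness over distance decay.
import numpy as np

def huff_probabilities(distance, attractiveness, beta=2.0):
    """distance[i, j]: home i to store j (km); attractiveness[j]: e.g. floor area."""
    utility = attractiveness / np.power(distance, beta)
    return utility / utility.sum(axis=1, keepdims=True)

dist = np.array([[0.8, 1.5, 2.4],
                 [2.0, 0.6, 1.1]])
attr = np.array([2500.0, 1200.0, 1800.0])        # sales-floor area in m^2
print(huff_probabilities(dist, attr))            # rows sum to 1
```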
Procedia PDF Downloads 123
1165 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment process. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing in light of the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. This issue has been overlooked in selecting dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city's wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target parameter to determine water quality. Model A used Principal Component Analysis (PCA) for input selection as a linear, variance-based clustering method. Model B used the variables identified by the mutual information (MI) measure. With the optimal ANN structure, the results of model B compared with model A showed up to a 15% increase in the determination coefficient (DC). Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
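A minimal sketch contrasting the two input-selection routes (model A: PCA components; model B: variables ranked by mutual information with effluent BOD), each feeding a small neural network, is shown below; the file and column names stand in for the Tabriz plant's sensor parameters and are assumptions.

```python
# Compare PCA-based inputs (model A) with mutual-information-selected inputs
# (model B) for an MLP predicting effluent BOD; data file is a placeholder.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("tabriz_wwtp.csv")
X, y = df.drop(columns=["effluent_bod"]), df["effluent_bod"]

# Model A: linear, variance-based input clustering via PCA.
X_pca = PCA(n_components=5).fit_transform(X)

# Model B: keep the five inputs with the highest mutual information with BOD.
mi = mutual_info_regression(X, y, random_state=0)
X_mi = X[X.columns[mi.argsort()[::-1][:5]]]

for name, Xs in [("PCA inputs", X_pca), ("MI inputs", X_mi)]:
    score = cross_val_score(MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000),
                            Xs, y, cv=5, scoring="r2").mean()
    print(name, "mean R2 (DC):", round(score, 3))
```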
Procedia PDF Downloads 128
1164 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists of intensifying the use of an asset, while the latter is based on the use of waste and fatal energy (and utilities) as alternatives to standard inputs. Both modes, in fact, rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate how those cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing (following the economic engineering literature), the cost function of capacity induces economies of scale, so that demand pooling reduces global expenses. The grassroots investment sizing decision and the ex-post pricing strongly depend on the design optimization phase for capacity sizing, whereas ex-post operational cost-sharing budget minimization is less dependent upon production rates. Value is then mainly design-driven. For resource substitution, the synergy value stems from availability and is at risk with regard to both supplier and user load profiles and the market prices of the standard input. Baseline input purchasing cost reduction is thus more driven by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow it to produce non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.
Keywords: business model, capacity, sourcing, synergies
Procedia PDF Downloads 175
1163 VR in the Middle School Classroom-An Experimental Study on Spatial Relations and Immersive Virtual Reality
Authors: Danielle Schneider, Ying Xie
Abstract:
Middle school science, technology, engineering, and math (STEM) teachers face an exceptional challenge in the expectation to incorporate curricula that build strong spatial reasoning skills on rudimentary geometry concepts. Because spatial ability is so closely tied to STEM students' success, researchers are tasked with determining effective instructional practices that create an authentic learning environment within the immersive virtual reality learning environment (IVRLE). This study investigated the effect of the IVRLE on middle school STEM students' spatial reasoning skills as a methodology to benefit those skills. The experimental study comprised thirty 7th-grade STEM students divided into a treatment group, which engaged in an immersive VR platform and built an object in the virtual realm by applying spatial processing and visualizing its dimensions, and a control group, which built the identical object using a desktop computer-based, computer-aided design (CAD) program. Before and after the students participated in the respective '3D modeling' environment, their spatial reasoning abilities were assessed using the Middle Grades Mathematics Project Spatial Visualization Test (MGMP-SVT). Additionally, both groups created a physical 3D model as a secondary measure of the effectiveness of the IVRLE. The results of a one-way ANOVA in this study identified a negative effect on those in the IVRLE. These findings suggest that, for middle school students, virtual reality (VR) proved an inadequate tool for improving spatial relations skills as compared to desktop-based CAD.
Keywords: virtual reality, spatial reasoning, CAD, middle school STEM
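For reference, a minimal sketch of the one-way ANOVA comparison between the two groups is given below; the gain scores are illustrative placeholders, not the MGMP-SVT data.

```python
# One-way ANOVA comparing spatial-reasoning gain scores between the IVR and
# desktop-CAD groups; the arrays below are made-up placeholder values.
from scipy import stats

ivr_gains = [2, 1, 0, -1, 1, 0, 2, -2, 1, 0, 1, -1, 0, 1, 0]
cad_gains = [3, 2, 1, 2, 4, 1, 2, 3, 1, 2, 3, 2, 1, 2, 3]

f_stat, p_value = stats.f_oneway(ivr_gains, cad_gains)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```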
Procedia PDF Downloads 86