Search results for: autoregressive moving average model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20776

20116 All-or-None Principle and Weakness of Hodgkin-Huxley Mathematical Model

Authors: S. A. Sadegh Zadeh, C. Kambhampati

Abstract:

Mathematical and computational modelling are necessary tools for reviewing, analysing, and predicting processes and events across a wide spectrum of scientific fields. In a field as rapidly developing as neuroscience, combining the two can therefore play a significant role in guiding the direction the field takes. This paper combines mathematical and computational modelling to demonstrate a weakness in a highly influential model in neuroscience: it analyses the all-or-none principle in the Hodgkin-Huxley mathematical model. By implementing a computational version of the Hodgkin-Huxley model and applying the concept of the all-or-none principle, an investigation of this mathematical model was performed. The results clearly showed that the Hodgkin-Huxley model does not obey this fundamental law of neurophysiology when generating action potentials. This study shows that further mathematical work on the Hodgkin-Huxley model is needed in order to create a model without this weakness.
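The kind of test the abstract describes can be sketched numerically: integrate the standard squid-axon Hodgkin-Huxley equations under brief current pulses of varying amplitude and record the peak membrane voltage. The parameter values are the textbook 1952 constants; the pulse timing and amplitudes are illustrative, not the authors' protocol.

```python
import numpy as np

def hh_peak(I_amp, pulse_ms=1.0, T=25.0, dt=0.01):
    """Euler-integrate the Hodgkin-Huxley membrane equation (standard
    squid-axon parameters) under a brief current pulse starting at t = 1 ms;
    return the peak membrane voltage reached (mV)."""
    gNa, gK, gL = 120.0, 36.0, 0.3              # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.387          # mV
    Cm = 1.0                                    # uF/cm^2
    V, m, h, n = -65.0, 0.0529, 0.5961, 0.3177  # resting steady state
    peak = V
    for step in range(int(T / dt)):
        t = step * dt
        I = I_amp if 1.0 <= t < 1.0 + pulse_ms else 0.0
        # Voltage-dependent rate constants (1/ms).
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (I - I_ion) / Cm
        peak = max(peak, V)
    return peak

# Sweep stimulus amplitudes (uA/cm^2): the peak response varies with the
# stimulus rather than switching between "none" and one fixed spike height,
# which is the kind of graded behaviour the abstract's argument turns on.
peaks = {I: hh_peak(I) for I in (2.0, 6.0, 7.0, 20.0)}
```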

Keywords: all-or-none, computational modelling, mathematical model, transmembrane voltage, action potential

Procedia PDF Downloads 596
20115 Comparison of Visual Field Tests in Glaucoma Patients with a Central Visual Field Defect

Authors: Hye-Young Shin, Hae-Young Lopilly Park, Chan Kee Park

Abstract:

We compared the 24-2 and 10-2 visual fields (VFs) and investigated the degree of discrepancy between the two tests in glaucomatous eyes with central VF defects. In all, 99 eyes of 99 glaucoma patients who underwent both the 24-2 and 10-2 VF tests within 6 months were enrolled retrospectively. Glaucomatous eyes involving a central VF defect were divided into three groups based on the average total deviation (TD) of the 12 central points in the 24-2 VF test (n = 33 in each group): group 1 (tercile with the highest TD), group 2 (intermediate TD), and group 3 (lowest TD). The TD difference was calculated by subtracting the average TD of the 10-2 VF test from the average TD of the 12 central points in the 24-2 VF test. The absolute central TD difference in each quadrant was defined as the absolute value of the TD obtained by subtracting the average TD of the four central points in the 10-2 VF test from the innermost TD in the 24-2 VF test in that quadrant. The TD differences differed significantly between group 3 and groups 1 and 2 (P < 0.001). In the superonasal quadrant, the absolute central TD difference was significantly greater in group 2 than in group 1 (P < 0.05). In the superotemporal quadrant, the absolute central TD difference was significantly greater in group 3 than in groups 1 and 2 (P < 0.001). Our results indicate that the results of VF tests covering different fields can be inconsistent, depending on the degree of central defect and the VF quadrant.
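The TD-difference definition above reduces to a simple subtraction of averaged total-deviation values; a minimal sketch with made-up TD values (in dB, not patient data) makes the computation explicit.

```python
import numpy as np

# Hypothetical total-deviation (TD) values in dB for one eye: the 12 central
# points of a 24-2 test, and the 68 points of a 10-2 test (illustrative only).
td_24_2_central = np.array([-2.1, -3.5, -1.8, -4.2, -6.0, -2.9,
                            -3.3, -5.1, -2.4, -4.8, -3.0, -2.2])
td_10_2 = np.random.default_rng(0).normal(-4.0, 2.0, size=68)

# TD difference as defined in the abstract: average central TD of the 24-2
# test minus average TD of the 10-2 test.
td_difference = td_24_2_central.mean() - td_10_2.mean()

# Absolute central TD difference for one quadrant: |innermost 24-2 point
# minus the average of the four central 10-2 points in that quadrant|.
abs_central_diff = abs(td_24_2_central[0] - td_10_2[:4].mean())
```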

Keywords: central visual field defect, glaucoma, 10-2 visual field, 24-2 visual field

Procedia PDF Downloads 158
20114 Characterization of the Microbial Induced Carbonate Precipitation Technique as a Biological Cementing Agent for Sand Deposits

Authors: Sameh Abu El-Soud, Zahra Zayed, Safwan Khedr, Adel M. Belal

Abstract:

The population increase in Egypt is driving horizontal land development, which has become necessary to benefit from natural resources and to expand beyond the narrow Nile valley. However, this development faces obstacles. Desertification and moving sand dunes in the western sector of Egypt are the major obstacles blocking ideal land use and development. In the proposed research, sandy soil is treated biologically using Bacillus pasteurii bacteria, which have the ability to bond sand particles, converting loose sand into cemented sand and thereby reducing the mobility of sand dunes. The procedure for implementing the Microbial Induced Carbonate Precipitation (MICP) technique is examined, and the different factors affecting the process, such as the medium used for bacterial sample preparation, the optical density (OD600), the reactant concentration, and the injection rates and intervals, are highlighted. Based on the findings of the MICP treatment of sandy soil, conclusions and recommendations for future work are presented.

Keywords: soil stabilization, biological treatment, microbial induced carbonate precipitation (MICP), sand cementation

Procedia PDF Downloads 227
20113 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal rise in water level caused by a storm, and its accurate prediction is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for example, is based on multiple linear regression and needs a long period of training data. To overcome these shortcomings, researchers have proposed more advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that produces a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method, in turn, is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we must identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles, compare them with several existing models from the literature for forecasting storm surge level, and then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; we therefore develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared with the actual peak and its timing. In this work, we analyze four hurricanes: Irene and Lee in 2011, Sandy in 2012, and Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
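The weighting schemes described above (correlation weights, inverse-standard-deviation weights, and a simple-average benchmark, all scored by RMSE) can be sketched with synthetic data; the observation series, the three "model" forecasts, and their noise levels are all stand-ins, not NYHOPS data.

```python
import numpy as np

rng = np.random.default_rng(42)
obs = 2.0 * np.sin(np.linspace(0, 6, 200))           # synthetic "observed" surge (m)
# Three hypothetical individual forecasts with different error levels.
models = [obs + rng.normal(0, s, obs.size) for s in (0.2, 0.4, 0.8)]

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

# Benchmark: simple average of the members.
avg = np.mean(models, axis=0)

# Correlation-weighted ensemble: weight each member by its correlation with
# the observations over a training window (here, the first half).
half = obs.size // 2
w = np.array([np.corrcoef(m[:half], obs[:half])[0, 1] for m in models])
w /= w.sum()
corr_ens = sum(wi * m for wi, m in zip(w, models))

# Inverse-standard-deviation weighting of the training-window errors.
s = np.array([np.std(m[:half] - obs[:half]) for m in models])
w2 = (1 / s) / (1 / s).sum()
std_ens = sum(wi * m for wi, m in zip(w2, models))
```

On this toy data the inverse-standard-deviation ensemble beats the simple average, because it down-weights the noisiest member.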

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 292
20112 Development of Vapor Absorption Refrigeration System for Mini-Bus Car’s Air Conditioning: A Two-Fluid Model

Authors: Yoftahe Nigussie

Abstract:

This research explores the implementation of a vapor absorption refrigeration system (VARS) in mini-bus cars to enhance air conditioning efficiency. The conventional vapor compression refrigeration system (VCRS) in vehicles relies on mechanical work from the engine, leading to increased fuel consumption. The proposed VARS aims to utilize waste heat and exhaust gas from the internal combustion engine to cool the mini-bus cabin, thereby reducing fuel consumption and atmospheric pollution. The project involves two models: Model 1, a two-fluid vapor absorption system (VAS), and Model 2, a three-fluid VAS. Model 1 uses ammonia (NH₃) and water (H₂O) as refrigerants, where water absorbs ammonia rapidly, producing a cooling effect. The absorption cycle operates on the principle that absorbing ammonia in water decreases vapor pressure. The ammonia-water solution undergoes cycles of desorption, condensation, expansion, and absorption, facilitated by a generator, condenser, expansion valve, and absorber. The objectives of this research include reducing atmospheric pollution, minimizing air conditioning maintenance costs, lowering capital costs, enhancing fuel economy, and eliminating the need for a compressor. The comparison between vapor absorption and compression systems reveals advantages such as smoother operation, fewer moving parts, and the ability to work at lower evaporator pressures without affecting the Coefficient of Performance (COP). The proposed VARS demonstrates potential benefits for mini-bus air conditioning systems, providing a sustainable and energy-efficient alternative. By utilizing waste heat and exhaust gas, this system contributes to environmental preservation while addressing economic considerations for vehicle owners. Further research and development in this area could lead to the widespread adoption of vapor absorption technology in automotive air conditioning systems.
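The claimed benefit of driving the cycle with engine waste heat can be bounded with a textbook reversible-cycle estimate: an ideal absorption machine behaves like a Carnot engine between the generator and the heat-rejection sink driving a Carnot refrigerator between the evaporator and the same sink. The temperatures below are illustrative guesses for a mini-bus installation, not the paper's operating points.

```python
def ideal_absorption_cop(t_gen_c, t_sink_c, t_evap_c):
    """Carnot-limited COP of an absorption refrigeration cycle:
    (1 - Ta/Tg) * Te / (Ta - Te), with all temperatures in kelvin."""
    tg, ta, te = (t + 273.15 for t in (t_gen_c, t_sink_c, t_evap_c))
    return (1 - ta / tg) * te / (ta - te)

# Exhaust-heated generator at 150 C, ambient sink at 35 C, cabin
# evaporator at 5 C (assumed values for illustration).
cop = ideal_absorption_cop(150.0, 35.0, 5.0)
```

Real NH₃-H₂O systems achieve only a fraction of this upper bound, but the estimate shows why a "free" high-temperature heat source makes even a modest COP attractive.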

Keywords: room, zone, space, thermal resistance

Procedia PDF Downloads 53
20111 Assessment of Educational Service Quality at Master's Level in an Iranian University Based on the HEdPERF Model

Authors: Faranak Omidian

Abstract:

The aim of this research was to examine the quality of educational services at master's level in the Islamic Azad University of Dezful. In terms of objective, this is applied research; in terms of methodology, it is descriptive-analytical research. The statistical population included all master's students of the Islamic Azad University of Dezful. The sample size was determined using stratified random sampling across different fields of study. The research questionnaire was a translated version of Abdullah's standardized 41-item HEdPERF scale, based on a 5-point Likert scale. To determine validity, the translated questionnaire was given to professors of educational sciences; the correlation among all items was 0.644. The results showed that the quality of educational services at master's level in this university, based on a chi-square goodness-of-fit test (χ² = 73.36, df = 2, p < 0.001), was of low desirability. According to the Friedman test, academic responsiveness ranked higher than the other dimensions, with an average rank of 3.94, while accessibility, with an average rank of 2.15, ranked lowest from the master's students' viewpoint.
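The two tests named above are standard and available in SciPy; a minimal sketch with invented Likert-scale counts and ratings (not the study's data) shows how each would be run.

```python
import numpy as np
from scipy import stats

# Chi-square goodness-of-fit: hypothetical 5-point Likert response counts
# for one service-quality item, tested against a uniform expectation.
observed = np.array([40, 55, 80, 45, 20])
expected = np.full(5, observed.sum() / 5)
chi2, p = stats.chisquare(observed, expected)

# Friedman test across four service dimensions rated by the same students
# (rows = students, columns = dimensions; illustrative random ratings).
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(30, 4))
fried = stats.friedmanchisquare(*(ratings[:, j] for j in range(4)))
```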

Keywords: educational service quality, master's level, Iranian university

Procedia PDF Downloads 264
20110 Geochemical Investigation of Weathering and Sorting for Tepeköy Sandstones

Authors: M. Yavuz Hüseyinca, Şuayip Küpeli

Abstract:

The Chemical Index of Alteration (CIA) values of Late Eocene-Oligocene sandstones exposed on the eastern edge of Tuz Lake (Central Anatolia, Turkey) range from 49 to 59, with an average of 51. The A-CN-K diagram indicates that the sandstones underwent post-depositional K-metasomatism. The original average CIA value before K-metasomatism is calculated as 55. This value is lower than that of Post-Archean Australian Shale (PAAS) and indicates low-intensity chemical weathering in the source area. Extrapolation of the sandstones back to the plagioclase-alkali feldspar line in the A-CN-K diagram suggests a high average plagioclase to alkali feldspar ratio in the provenance and a composition close to granodiorite. The Zr/Sc and Th/Sc ratios, together with the Al₂O₃-Zr-TiO₂ space, show no zircon addition, which rules out both recycling of sediments and a sorting effect. All these data suggest direct and rapid transport from the source, due to topographic uplift, and probably arid to semi-arid climate conditions for the sandstones.
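The CIA is conventionally computed from major-oxide weight percentages converted to molar proportions (CIA = 100 · Al₂O₃ / (Al₂O₃ + CaO* + Na₂O + K₂O), after Nesbitt & Young). The sample composition below is illustrative, and all CaO is assumed to be silicate-bound (a simplification, since CaO* should exclude carbonate and phosphate Ca).

```python
# Molar masses (g/mol) of the oxides entering the CIA.
MW = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_percent):
    """Chemical Index of Alteration from oxide wt%, on a molar basis."""
    m = {ox: wt_percent[ox] / MW[ox] for ox in MW}   # molar proportions
    return 100 * m["Al2O3"] / (m["Al2O3"] + m["CaO"] + m["Na2O"] + m["K2O"])

# Illustrative sandstone composition (wt%), not a measured Tepeköy sample.
sample = {"Al2O3": 14.0, "CaO": 3.0, "Na2O": 3.2, "K2O": 2.8}
value = cia(sample)
```

Values near 50 indicate little chemical weathering of feldspar, consistent with the range the abstract reports.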

Keywords: central Anatolia, sandstone, sorting, weathering

Procedia PDF Downloads 357
20109 Effect of the Average Kit Birth Weight and of the Number Born Alive per Litter on the Milk Production of Algerian Rabbits Raised in the Aures Area

Authors: S. Moumen, M. Melizi

Abstract:

In order to characterize rabbit does of an Aures local population raised in Algeria, a study of their milk yield was carried out in the experimental rabbitry of El Hadj Lakhdhar University. Milk production of the does was measured every day during the days following 215 parturitions. It was estimated by weighing the female before and after the single daily suckling (10-15 min between the two weighings). The parameters calculated were the quantity of milk produced per day, per week, and in total over 21 days, as well as the milk intake of the young rabbits. The analysis examined the effects of the number of successive litters (3 classes: 1, 2, and 3 or more) and of the average number of young rabbits suckled per litter (6 classes: from 1-2 kits to more than 6). During the 21 days of controlled lactation, the average litter size was 6±3. The rabbits of the Aures area produced on average 2544.34±747 g of milk in 21 days, i.e., 121 g of milk/day or 21 g of milk/kit/day. The milk yield ranged from 526, 1035, 1240, and 2801 g to 760, 1365, 1715, and 3840 g for weeks 1, 2, 3 and the total lactation period, respectively. Nevertheless, the milk available per kit and per day decreased linearly with the number of kits in the litter for each of the 3 weeks considered. On the other hand, milk yield was not affected by the birth weight of the kits.

Keywords: milk production, litter size, rabbit, Aures area, Algeria

Procedia PDF Downloads 499
20108 Bioavailability of Zinc to Wheat Grown in the Calcareous Soils of Iraqi Kurdistan

Authors: Muhammed Saeed Rasheed

Abstract:

Knowledge of the zinc and phytic acid (PA) concentrations of staple cereal crops is essential when evaluating the nutritional health of national and regional populations. In the present study, a total of 120 farmers' fields in Iraqi Kurdistan were surveyed for zinc status in soil and wheat grain samples; wheat is the staple carbohydrate source in the region. Soils were analysed for total concentrations of phosphorus (PT) and zinc (ZnT), available P (POlsen) and Zn (ZnDTPA), and for pH. Average values (mg kg⁻¹) ranged between 403-3740 (PT), 42.0-203 (ZnT), 2.13-28.1 (POlsen) and 0.14-5.23 (ZnDTPA); pH was in the range 7.46-8.67. The concentrations of Zn, the PA/Zn molar ratio, and estimated Zn bioavailability were also determined in wheat grain. The ranges of Zn and PA concentrations (mg kg⁻¹) were 12.3-63.2 and 5400-9300, respectively, giving a PA/Zn molar ratio of 15.7-30.6. A trivariate model was used to estimate the intake of bioaccessible Zn, employing the following parameter values: (i) maximum Zn absorption = 0.09 (AMAX), (ii) equilibrium dissociation constant of the zinc-receptor binding reaction = 0.680 (KP), and (iii) equilibrium dissociation constant of the Zn-PA binding reaction = 0.033 (KR). In the model, total daily absorbed Zn (TAZ) (mg d⁻¹) as a function of total daily nutritional PA (mmol d⁻¹) and total daily nutritional Zn (mmol d⁻¹) was estimated assuming an average wheat flour consumption of 300 g d⁻¹ in the region. Consideration of the PA and Zn intake suggests only 21.5±2.9% of grain Zn is bioavailable, so that the effective Zn intake from wheat is only 1.84-2.63 mg d⁻¹ for the local population. Overall, the results suggest available dietary Zn is below recommended levels (11 mg d⁻¹), partly due to low uptake by wheat but also due to the presence of large concentrations of PA in wheat grains. A crop breeding program combined with enhanced agronomic management methods is needed to enhance both Zn uptake and bioavailability in grains of cultivated wheat types.
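The trivariate model referred to above is usually written (after Miller et al.) as TAZ = ½[AMAX + TDZ + KR(1 + TDP/KP) − √((AMAX + TDZ + KR(1 + TDP/KP))² − 4·AMAX·TDZ)], with all quantities in mmol d⁻¹. The sketch below assumes that closed form together with the abstract's parameter values; the grain composition plugged in is a mid-range value from the survey, not a specific sample.

```python
import math

# Parameter values from the abstract (mmol-based units).
AMAX, KP, KR = 0.09, 0.680, 0.033
MW_ZN, MW_PA = 65.38, 660.04       # molar masses (g/mol) of Zn and phytic acid

def absorbed_zn_mg(zn_mg_per_kg, pa_mg_per_kg, flour_g=300.0):
    """Daily absorbed Zn (mg) from the trivariate saturation model."""
    tdz = zn_mg_per_kg * flour_g / 1000 / MW_ZN    # mmol Zn per day
    tdp = pa_mg_per_kg * flour_g / 1000 / MW_PA    # mmol PA per day
    s = AMAX + tdz + KR * (1 + tdp / KP)
    taz = 0.5 * (s - math.sqrt(s * s - 4 * AMAX * tdz))  # mmol absorbed
    return taz * MW_ZN                             # convert back to mg/day

# Mid-range grain composition from the survey: 30 mg/kg Zn, 7000 mg/kg PA.
intake = absorbed_zn_mg(30.0, 7000.0)
```

With these inputs the estimate falls inside the 1.84-2.63 mg d⁻¹ range reported in the abstract, which is a useful sanity check on the assumed formula.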

Keywords: phosphorus, zinc, phytic acid, phytic acid to zinc molar ratio, zinc bioavailability

Procedia PDF Downloads 111
20107 Artificial Intelligence Based Online Monitoring System for Cardiac Patient

Authors: Syed Qasim Gilani, Muhammad Umair, Muhammad Noman, Syed Bilawal Shah, Aqib Abbasi, Muhammad Waheed

Abstract:

Cardiovascular diseases (CVDs) are the major cause of death in the world. The main reason for these deaths is the unavailability of first aid for heart failure; in many cases, patients die before reaching the hospital. In this paper, we present an innovative online health service for cardiac patients. The proposed online health system has two ends. Through a device developed by us, users can communicate with their doctor via a mobile application. This interface provides them with first aid and gives them an easy channel for obtaining medical advice from their doctors. As part of the proposed system, we developed a portable device called Cardiac Care, which a patient can use at home to monitor heart condition. When a patient checks his/her heart condition, the electrocardiogram (ECG), blood pressure (BP), and temperature are sent to a central database, where the severity of the patient's condition is assessed using an artificial intelligence algorithm. If the patient is suffering from a minor problem, the algorithm suggests a prescription. If the patient's condition is severe, the patient's record is sent to a doctor through the Android mobile application; after reviewing the patient's condition, the doctor suggests the next step. If the doctor identifies the condition as critical, a message is sent to the central database to dispatch an ambulance to bring the patient to hospital. We have implemented this model at the prototype level. This model could be life-saving for millions of people around the globe, since under it patients are in contact with their doctors at all times.
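The minor/severe/critical decision flow described above can be sketched as simple threshold rules. This is a toy stand-in for the paper's AI algorithm: the function name, thresholds, and labels are all illustrative and carry no clinical meaning.

```python
def triage(ecg_hr_bpm, systolic_mmhg, temp_c):
    """Toy rule-based severity triage mirroring the decision flow in the
    abstract (thresholds are illustrative, not clinical guidance)."""
    if ecg_hr_bpm > 150 or ecg_hr_bpm < 40 or systolic_mmhg < 80:
        return "critical: notify doctor and dispatch ambulance"
    if ecg_hr_bpm > 100 or systolic_mmhg > 160 or temp_c > 38.0:
        return "moderate: forward record to doctor"
    return "minor: suggest prescription"

# A normal reading routes to the lightest-weight response.
status = triage(ecg_hr_bpm=72, systolic_mmhg=120, temp_c=36.8)
```

In the described system, the real classifier would run server-side on the ECG/BP/temperature stream; the point here is only the three-tier routing.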

Keywords: cardiovascular disease, classification, electrocardiogram, blood pressure

Procedia PDF Downloads 171
20106 [Keynote Talk]: Heavy Metals in Marine Sediments of Gulf of Izmir

Authors: E. Kam, Z. U. Yümün, D. Kurt

Abstract:

In this study, sediment samples were collected from four sampling sites located on the shores of the Gulf of İzmir. In the samples, Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn concentrations were determined using inductively coupled plasma optical emission spectrometry (ICP-OES). The average heavy metal concentrations were: Cd < LOD (limit of detection); Co 14.145 ± 0.13 μg g⁻¹; Cr 112.868 ± 0.89 μg g⁻¹; Cu 34.045 ± 0.53 μg g⁻¹; Mn 481.43 ± 7.65 μg g⁻¹; Ni 76.538 ± 3.81 μg g⁻¹; Pb 11.059 ± 0.53 μg g⁻¹ and Zn 140.133 ± 1.37 μg g⁻¹. The results were compared with the average abundances of these elements in the Earth's crust. The measured heavy metal concentrations can serve as reference values for further studies carried out on the shores of the Aegean Sea.

Keywords: heavy metal, Aegean Sea, ICP-OES, sediment

Procedia PDF Downloads 176
20105 Risk Assessment of Contamination by Heavy Metals in Sarcheshmeh Copper Complex of Iran Using the TOPSIS Method

Authors: Hossein Hassani, Ali Rezaei

Abstract:

In recent years, the study of soil contamination around mines and smelting plants has attracted serious attention from environmental experts. Because they do not degrade chemically, heavy metals are counted among the stable and durable environmental contaminants. The variability of these contaminants in soil, together with the time and financial limits on remediation, makes a sound ranking of contaminants necessary for the success of risk management processes aimed at reducing the risk of irreparable environmental consequences. In this study, we use risk indices including the contamination factor, average concentration, enrichment factor, and geoaccumulation index to evaluate the metal contaminants Pb, Ni, Se, Mo, and Zn in the soil of the Sarcheshmeh copper mine area. For this purpose, 120 surface soil samples down to a depth of 30 cm were collected from the study area, and the metals were analyzed using the ICP-MS method. Comparison of the heavy and potentially toxic element concentrations in the soil samples with the world average value for uncontaminated soil and with the shale average indicates that Zn, Pb, Ni, Se, and Mo exceed the world average value, and only Ni is below the shale average. Expert opinions on the relative importance of each indicator were used to assign final weights to the metals, and the heavy metals were ranked using the TOPSIS approach. This allows efficient environmental action, leading to a reduction of the environmental risks posed by the contaminants. According to the results, Ni, Pb, Mo, Zn, and Se, in that order, show the highest contamination risk in the soil samples of the study area.
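TOPSIS itself is a standard procedure (normalise, weight, measure distances to the ideal and anti-ideal alternatives, rank by closeness). The sketch below applies it to an invented decision matrix; the index scores and expert weights are placeholders, not the study's values.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by the TOPSIS closeness coefficient.
    benefit[j] is True if a higher value of criterion j means higher risk."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]

# Hypothetical scores of five metals on three indices (contamination factor,
# enrichment factor, geoaccumulation index); weights stand in for the
# expert-judgement weighting described in the abstract.
metals = ["Ni", "Pb", "Mo", "Zn", "Se"]
X = np.array([[5.1, 4.0, 2.9],
              [4.6, 3.8, 2.5],
              [3.9, 3.1, 2.2],
              [3.2, 2.7, 1.8],
              [2.5, 2.0, 1.4]])
w = np.array([0.5, 0.3, 0.2])
scores = topsis(X, w, benefit=np.array([True, True, True]))
ranking = [metals[i] for i in np.argsort(scores)[::-1]]
```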

Keywords: contamination coefficient, geoaccumulation factor, TOPSIS techniques, Sarcheshmeh copper complex

Procedia PDF Downloads 258
20104 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases

Authors: Hsin Lee, Hsuan Lee

Abstract:

Legal artificial intelligence (LegalAI) has seen increasing application within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures, and most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments among Taiwanese legal documents. The model is trained on a self-constructed dataset, which this research has independently labeled with judgment similarities, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which exploits the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate the model's significant advancements in handling similarity comparisons within extensive legal judgments.
By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
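The retrieval step behind "finding similar judgments" reduces to nearest-neighbour search over document embeddings. In the paper those embeddings would come from the Longformer encoder; the sketch below uses random stand-in vectors purely to show the cosine-similarity ranking, and all names and dimensions are illustrative.

```python
import numpy as np

def retrieve_similar(query_vec, judgment_vecs, k=3):
    """Rank judgment embeddings by cosine similarity to a query embedding
    and return the top-k (index, similarity) pairs."""
    q = query_vec / np.linalg.norm(query_vec)
    m = judgment_vecs / np.linalg.norm(judgment_vecs, axis=1, keepdims=True)
    sims = m @ q                                  # cosine similarities
    top = np.argsort(sims)[::-1][:k]
    return list(zip(top.tolist(), sims[top].tolist()))

rng = np.random.default_rng(7)
corpus = rng.normal(size=(100, 768))             # 100 stand-in judgment embeddings
query = corpus[42] + rng.normal(0, 0.1, 768)     # near-duplicate of document 42
hits = retrieve_similar(query, corpus)
```

A near-duplicate query recovers its source document at rank 1, which is the behaviour a similarity-labeled training set is meant to teach the encoder.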

Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases

Procedia PDF Downloads 60
20103 Detection of Micro-Unmanned Aerial Vehicles Using a Multiple-Input Multiple-Output Digital Array Radar

Authors: Tareq AlNuaim, Mubashir Alam, Abdulrazaq Aldowesh

Abstract:

The usage of micro-Unmanned Aerial Vehicles (UAVs) has witnessed an enormous increase recently, and detecting such drones has become a necessity to prevent harmful activities. Typically, such targets have low velocity and low Radar Cross Section (RCS), making them hard to distinguish from clutter and phase noise. Multiple-Input Multiple-Output (MIMO) radars have great potential here: they increase the degrees of freedom on both the transmit and receive ends. Such an architecture allows for flexibility in operation, through direct access to every element in the transmit/receive array. MIMO systems permit several array processing techniques, allowing the system to stare at targets for longer times, which improves the Doppler resolution. In this paper, a 2×2 MIMO radar prototype is developed using Software Defined Radio (SDR) technology, and its performance is evaluated against a slow-moving, low-RCS micro-UAV of the kind used by hobbyists. Radar cross section simulations were carried out using the FEKO simulator, yielding an average of -14.42 dBsm at S-band. The developed prototype was experimentally evaluated, achieving more than 300 meters of detection range for a DJI Mavic Pro drone.
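The degrees-of-freedom gain of a 2×2 MIMO array can be sketched with the virtual-array idea: with orthogonal transmit waveforms, 2 Tx × 2 Rx elements behave like a 4-element receive array. The geometry, target angle, and SNR below are all illustrative, not the prototype's parameters.

```python
import numpy as np

# Element positions in wavelengths: Rx at half-wavelength spacing, Tx spaced
# at twice the Rx aperture, so the virtual array is a uniform 4-element ULA.
rx = np.array([0, 1]) * 0.5
tx = np.array([0, 1]) * 1.0
virtual = (tx[:, None] + rx[None, :]).ravel()    # 4 virtual element positions

def steering(theta_deg):
    """Virtual-array steering vector for a far-field source at theta."""
    return np.exp(2j * np.pi * virtual * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(0)
true_angle = 12.0
snap = steering(true_angle)[:, None] * np.ones((1, 64))     # 64 snapshots
snap += 0.1 * (rng.normal(size=snap.shape) + 1j * rng.normal(size=snap.shape))

# Conventional (delay-and-sum) beamformer scan over angle.
grid = np.arange(-60, 60.5, 0.5)
power = np.array([np.abs(steering(a).conj() @ snap).sum() for a in grid])
estimate = grid[np.argmax(power)]
```

The scan peaks at the target bearing; with real orthogonal waveforms the same virtual aperture is what buys the longer coherent staring time mentioned above.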

Keywords: digital beamforming, drone detection, micro-UAV, MIMO, phased array

Procedia PDF Downloads 121
20102 Seasonal Short-Term Effect of Air Pollution on Cardiovascular Mortality in Belgium

Authors: Natalia Bustos Sierra, Katrien Tersago

Abstract:

It is well established that both extremes of temperature are associated with increased mortality and that air pollution is associated with temperature. This relationship is complex, and in countries with important seasonal variations in weather, such as Belgium, some effects can appear non-significant when the analysis is done over the entire year. We therefore analyzed the effect of short-term outdoor air pollution exposure on cardiovascular (CV) mortality during the warmer and colder months separately. We used daily cardiovascular deaths from acute cardiovascular diagnoses according to the International Classification of Diseases, 10th Revision (ICD-10: I20-I24, I44-I49, I50, I60-I66) during the period 2008-2013. The environmental data were population-weighted concentrations of particulates with an aerodynamic diameter of less than 10 µm (PM₁₀) and less than 2.5 µm (PM₂.₅) (daily average), nitrogen dioxide (NO₂) (daily maximum of the hourly average), and ozone (O₃) (daily maximum of the 8-hour running mean). A generalized linear model was applied, adjusting for the confounding effects of season, temperature, dew point temperature, day of the week, public holidays, and the incidence of influenza-like illness (ILI) per 100,000 inhabitants. The relative risks (RR) were calculated for an increase of one interquartile range (IQR) of the air pollutant (μg/m³) and are presented for the four hottest months (June, July, August, September) and the four coldest months (November, December, January, February) in Belgium. We applied both individual lag model and unconstrained distributed lag model methods. The cumulative effect of a four-day exposure (the day of exposure and the three following days) was calculated from the unconstrained distributed lag model. The IQRs for PM₁₀, PM₂.₅, NO₂, and O₃ were respectively 8.2, 6.9, 12.9 and 25.5 µg/m³ during warm months and 18.8, 17.6, 18.4 and 27.8 µg/m³ during cold months.
The association with CV mortality was statistically significant for all four pollutants during warm months and only for NO₂ during cold months. During the warm months, the cumulative effect of an IQR increase of ozone for the age groups 25-64, 65-84 and 85+ was 1.066 (95% CI: 1.002-1.135), 1.041 (1.008-1.075) and 1.036 (1.013-1.058), respectively. The cumulative effect of an IQR increase of NO₂ for the age group 65-84 was 1.066 (1.020-1.114) during warm months and 1.096 (1.030-1.166) during cold months. The cumulative effect of an IQR increase of PM₁₀ during warm months reached 1.046 (1.011-1.082) and 1.038 (1.015-1.063) for the age groups 65-84 and 85+, respectively. Similar results were observed for PM₂.₅. The short-term effect of air pollution on cardiovascular mortality is thus greater during warm months, at lower pollutant concentrations, than during cold months. Spending more time outside during warm months increases population exposure to air pollution and can therefore be a confounding factor in this association. Age can also affect the length of time spent outdoors and the type of physical activity exercised. This study supports the deleterious effect of air pollution on CV mortality, which varies by season and age group in Belgium. Public health measures should therefore be adapted to seasonality.
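An RR "per IQR increase" comes from exponentiating the log-linear regression coefficient scaled by the IQR, with the 95% CI built the same way from the standard error. The coefficient and standard error below are invented for illustration; only the O₃ warm-month IQR (25.5 µg/m³) is taken from the abstract.

```python
import math

def rr_per_iqr(beta, se, iqr):
    """Relative risk and 95% CI for a one-IQR pollutant increase, from a
    log-linear (e.g. Poisson GLM) coefficient expressed per ug/m3."""
    rr = math.exp(beta * iqr)
    lo = math.exp((beta - 1.96 * se) * iqr)
    hi = math.exp((beta + 1.96 * se) * iqr)
    return rr, lo, hi

# Hypothetical coefficient for O3 in warm months, IQR = 25.5 ug/m3.
rr, lo, hi = rr_per_iqr(beta=0.0016, se=0.0004, iqr=25.5)
```

Reading the abstract's 1.036-1.066 cumulative effects this way shows why the larger cold-month IQRs matter: the same coefficient yields a larger RR when scaled by a wider IQR.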

Keywords: air pollution, cardiovascular, mortality, season

Procedia PDF Downloads 146
20101 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDMs) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived using a Markov model for a given bridge element group, although such functions are of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM output, namely the Modified Weibull approach, which consists of a set of appropriate functions describing the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at network level accurately but also to capture the model uncertainties with a given confidence interval.
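The uncertainty-quantification machinery named above (random-walk Metropolis-Hastings MCMC over a posterior) can be shown on a toy 1-D target. The target density below is an invented stand-in for a Weibull-parameter posterior, not the paper's model.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n=20000, step=0.3, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step), accept
    with probability min(1, p(x')/p(x)) computed in log space."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(n)
    for i in range(n):
        cand = x + rng.normal(0, step)
        lp_c = log_post(cand)
        if rng.random() < np.exp(min(0.0, lp_c - lp)):
            x, lp = cand, lp_c                    # accept the proposal
        chain[i] = x
    return chain

# Toy target: Normal(1.5, 0.2^2) posterior for a hypothetical shape parameter.
log_post = lambda x: -0.5 * ((x - 1.5) / 0.2) ** 2
chain = metropolis_hastings(log_post, x0=1.0)
posterior_mean = chain[5000:].mean()              # discard burn-in
```

The retained samples give the posterior mean and credible intervals directly, which is how the approach attaches confidence intervals to the condition-state predictions.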

Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 711
20100 Estimating Heavy Metal Leakage and Environmental Damage from Cigarette Butt Disposal in Urban Areas through CBPI Evaluation

Authors: Muhammad Faisal, Zai-Jin You, Muhammad Naeem

Abstract:

The world produces around 6 trillion cigarettes annually, raising environmental, public health, and economic concerns. Arguably the most pervasive form of environmental litter, this hazardous waste must be eliminated. The researchers aimed to estimate how much pollution seeps out of cigarette butts in metropolitan areas by studying their distribution and concentration. To this end, the Cigarette Butt Pollution Index (CBPI) was applied in 29 different areas. The locations were monitored monthly for a full calendar year, under the same survey conditions on weekends and weekdays. By averaging the metal leakage ratio in various climates and the average weight of cigarette butts, we estimated the total heavy metal leakage. The findings revealed that the annual average value of the index for the investigated areas ranged from 1.38 to 10.4. According to these numbers, just 27.5% of the areas had a low pollution rating, while 43.5% had severe pollution status or worse. Weekends saw the largest fall (31% on average) in all locations' indices, while spring and summer saw the largest increase (26% on average) compared with autumn and winter. The average amounts of heavy metals such as Cr, Cu, Cd, Zn, and Pb seeping into the environment from discarded cigarette butts were estimated at 0.25 µg/m², 0.078 µg/m², and 0.18 µg/m² in commercial, residential, and park areas, respectively. Cigarette butts are among the most prevalent forms of litter in the examined area and are the origin of a wide variety of contaminants, including heavy metals; this toxic waste poses a significant risk to the city.
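The leakage estimate described above multiplies a surveyed butt density by area, average butt mass, and a per-gram metal leakage ratio. Every number below is an assumed placeholder (the abstract does not report densities or masses); the point is only the arithmetic.

```python
# Rough heavy-metal leakage estimate from butt counts (illustrative values):
# leakage = butt density x area x average butt mass x metal leakage ratio.
butts_per_m2 = 0.8           # hypothetical surveyed density (butts/m^2)
area_m2 = 50_000             # one commercial district (assumed)
butt_mass_g = 0.2            # average smoked-butt mass (assumed)
leak_ratio_ug_per_g = 1.6    # assumed Cr+Cu+Cd+Zn+Pb leached per gram of butt

total_leak_ug = butts_per_m2 * area_m2 * butt_mass_g * leak_ratio_ug_per_g
per_m2 = total_leak_ug / area_m2   # comparable to the 0.078-0.25 ug/m2 figures
```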

Keywords: heavy metal, hazardous waste, waste management, litter

Procedia PDF Downloads 61
20099 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Finally, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; even so, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. Initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
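
The core 1D-to-2D transformation can be illustrated with a short sketch. The window length, hop size, and derivative operator below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of the two parallel 2D views described above: a frequency-domain
# spectrogram and a time-domain derivative heatmap of the same windowed series.

def to_2d(x, win=32, hop=16):
    """Return (spectrogram, derivative heatmap) for a 1D series x."""
    frames = np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, hop)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))  # frequency view: periodicity
    deriv = np.gradient(frames, axis=1)                           # time view: sharp fluctuations
    return spec, deriv

t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + 0.1 * np.random.default_rng(0).standard_normal(256)
spec, deriv = to_2d(x)
print(spec.shape, deriv.shape)              # (15, 17) (15, 32)
print(int(np.median(spec.argmax(axis=1))))  # 2: the bin of the 16-sample period
```

Each row of the two arrays is one window, so standard image-style (computer vision) models can consume them directly.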

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 24
20098 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and especially minor hysteresis loops is one of the most challenging topics in computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous (NARX) neural network model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
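
For orientation, a NARX model predicts the next output from lagged outputs and lagged exogenous inputs through a nonlinear (neural) map. The sketch below fits only the linear ARX skeleton that NARX generalizes, on an invented system, to show the lagged-regressor structure; it is not the authors' network:

```python
import numpy as np

# Hedged sketch: least-squares fit of a linear ARX model
# y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j]; NARX replaces this linear map
# with a neural network. The system and its coefficients are invented.

def fit_arx(y, u, na=2, nb=2):
    start = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
    return theta  # [a_1, a_2, b_1, b_2]

rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):  # true system: y[t] = 0.5*y[t-1] + 0.3*u[t-1] + noise
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1] + 0.05 * rng.standard_normal()
theta = fit_arx(y, u)
print(np.round(theta, 2))  # close to [0.5, 0.0, 0.3, 0.0]
```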

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 114
20097 Build-Own-Lease-Transfer (BOLT): An Alternative Model to Subsidy Schemes in Public Private Partnership Projects

Authors: Nirali Shukla, Neel Shah

Abstract:

The World Bank Institute (WBI) is undertaking a review of government interventions aimed at facilitating sustainable investment in public private partnerships (PPPs) in various underdeveloped countries. The study presents best practices for applying a financial model to make PPPs financially viable. The lessons presented here, if properly implemented, can help countries use limited funds to attract more private investment, get more infrastructure built and, as a result, achieve greater economic growth. Four countries (Brazil, Colombia, Mexico, and India) together develop an average of nearly US$50 billion in PPPs per year. Governments use a range of policies and institutional arrangements to provide subsidies to PPPs. For example, some countries have created dedicated agencies, or 'funds', capitalized with money from the national budget to manage and allocate subsidies. Other countries have established well-defined policies for appropriating subsidies on an ad hoc basis through an annual budget process. In this context, subsidies are direct fiscal contributions or grants paid by the government to a project when revenues from user fees are insufficient to cover all capital and operating costs while still providing private investors with a reasonable rate of return. Without subsidies, some infrastructure projects that would provide economic or social gains, but are not financially viable, would go undeveloped. The financial model of the BOLT (PPP) scheme described in this study, however, suggests that it is a more feasible option than subsidy schemes for making infrastructure projects financially viable. Its major advantages are that government money is saved and can be used for other projects, and that private investors obtain a better rate of return than under subsidized schemes.

Keywords: PPP, BOLT, subsidy schemes, financial model

Procedia PDF Downloads 730
20096 Scale Effects on the Wake Airflow of a Heavy Truck

Authors: Aude Pérard Lecomte, Georges Fokoua, Amine Mehel, Anne Tanière

Abstract:

Air quality in urban areas is deteriorated by pollution, mainly due to the constant increase in traffic of different types of ground vehicles. In particular, particulate matter pollution, with important concentrations in urban areas, can cause serious health issues. Characterizing and understanding particle dynamics is therefore essential to establish recommendations to improve air quality in urban areas. To analyze the effects of turbulence on particulate pollutant dispersion, the first step is to focus on the single-phase flow structure and turbulence characteristics in the wake of a heavy truck model. To achieve this, Computational Fluid Dynamics (CFD) simulations were conducted with the aim of modeling the wake airflow of a full- and a reduced-scale heavy truck. The Reynolds-Averaged Navier-Stokes (RANS) approach with the Reynolds Stress Model (RSM) as the turbulence closure was used. The simulations highlight the appearance of a large vortex originating from beneath the trailer. This vortex belongs to the recirculation region located in the near-wake of the heavy truck. These vortical structures are expected to have a strong influence on the dynamics of particles emitted by the truck.

Keywords: CFD, heavy truck, recirculation region, reduced scale

Procedia PDF Downloads 201
20095 Artificial Neural Network and Statistical Method

Authors: Tomas Berhanu Bekele

Abstract:

Traffic congestion is one of the main transportation-related problems in developed as well as developing countries. Traffic control systems are based on the idea of avoiding traffic instabilities and homogenizing traffic flow in such a way that the risk of accidents is minimized and traffic flow is maximized. Lately, Intelligent Transport Systems (ITS) have become an important area of research for solving such road traffic issues through smart decision-making. ITS links people, roads, and vehicles together using communication technologies to increase safety and mobility. Moreover, accurate prediction of road traffic is important for managing traffic congestion. The aim of this study is to develop an ANN model for the prediction of traffic flow and to compare it with a linear regression model of traffic flow prediction. Video of mixed traffic flow was recorded during office hours, and vehicles were counted from the video in 15-minute intervals to determine the traffic volume. Vehicles were classified into six categories: car, motorcycle, minibus, mid-bus, bus, and truck. The average time taken by each vehicle type to travel the trap length was measured using the time displayed on the video screen.
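
A minimal sketch of the comparison described, with synthetic 15-minute volumes standing in for the video counts; the data, lag structure, and network size are all invented:

```python
import numpy as np

# Hedged sketch: linear-regression baseline vs. a tiny one-hidden-layer ANN,
# both predicting the next 15-minute traffic volume from the last four volumes.

rng = np.random.default_rng(0)
t = np.arange(400)
volume = 200 + 80 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 5, 400)  # 96 slots/day

def lagged(v, lags=4):
    X = np.stack([v[i:len(v) - lags + i] for i in range(lags)], axis=1)
    return X, v[lags:]

X, y = lagged(volume)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

# linear regression with intercept, via least squares
w, *_ = np.linalg.lstsq(np.c_[Xtr, np.ones(len(Xtr))], ytr, rcond=None)
lin_pred = np.c_[Xte, np.ones(len(Xte))] @ w

# one-hidden-layer network trained by full-batch gradient descent on scaled data
W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, 8); b2 = 0.0
mu, sd, ym, ys = Xtr.mean(), Xtr.std(), ytr.mean(), ytr.std()
Xn, yn = (Xtr - mu) / sd, (ytr - ym) / ys
for _ in range(2000):
    h = np.tanh(Xn @ W1 + b1)
    g = 2 * (h @ W2 + b2 - yn) / len(yn)          # dLoss/dprediction
    gh = np.outer(g, W2) * (1 - h ** 2)           # backprop through tanh
    W2 -= 0.1 * h.T @ g;  b2 -= 0.1 * g.sum()
    W1 -= 0.1 * Xn.T @ gh; b1 -= 0.1 * gh.sum(axis=0)
h = np.tanh(((Xte - mu) / sd) @ W1 + b1)
ann_pred = (h @ W2 + b2) * ys + ym

rmse = lambda p: float(np.sqrt(np.mean((p - yte) ** 2)))
print(round(rmse(lin_pred), 1), round(rmse(ann_pred), 1))  # both near the noise floor
```

Both models are fit on the first 300 intervals and evaluated on the remaining ones; with a strong daily cycle, the two errors are typically comparable.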

Keywords: intelligent transport system (ITS), traffic flow prediction, artificial neural network (ANN), linear regression

Procedia PDF Downloads 47
20094 Statistical and Analytical Comparison of GIS Overlay Modelings: An Appraisal on Groundwater Prospecting in Precambrian Metamorphics

Authors: Tapas Acharya, Monalisa Mitra

Abstract:

Overlay modeling is the most widely used conventional analysis for spatial decision support systems. Overlay modeling requires a set of themes with different weightages, computed in varied manners, whose combination gives a resultant input for further integrated analysis. Despite the technique's popularity, however, different GIS overlay methods give inconsistent and even erroneous results for the same inputs. This study compares and analyses the differences in the outputs of different overlay methods on a GIS platform, using the same set of themes, for groundwater prospecting in Precambrian metamorphic rocks. The objective of the study is to identify the most suitable overlay method for groundwater prospecting in older Precambrian metamorphics. Seven input thematic layers (slope, Digital Elevation Model (DEM), soil thickness, lineament intersection density, average groundwater table fluctuation, stream density, and lithology) were used in the fuzzy overlay, weighted overlay, and weighted sum overlay methods to yield suitable groundwater prospective zones. Spatial concurrence analysis with high-yielding wells of the study area and comparative statistical analysis of the outputs of the various overlay models using RStudio reveal that the weighted overlay model is the most efficient GIS overlay model for delineating groundwater prospecting zones in Precambrian metamorphic rocks.
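
The inconsistency between overlay methods can be shown on a toy example. The two 3x3 suitability rasters, the weights, and the reclassification below are invented for illustration, not the study's data:

```python
import numpy as np

# Hedged sketch of why overlay methods can disagree on identical inputs:
# an additive (weighted sum) score and a multiplicative (fuzzy AND) score
# rank the cells differently.

slope   = np.array([[5, 4, 1], [3, 2, 2], [1, 3, 3]], dtype=float)
density = np.array([[1, 4, 5], [2, 2, 4], [5, 1, 3]], dtype=float)
w_slope, w_density = 0.8, 0.2

weighted_sum = w_slope * slope + w_density * density   # continuous additive score
weighted_overlay = np.rint(weighted_sum)               # additive score reclassified to classes
fuzzy = (slope / 5.0) * (density / 5.0)                # fuzzy AND: product of memberships

# the top-ranked cell differs between the additive and the fuzzy model
best_sum = tuple(map(int, np.unravel_index(weighted_sum.argmax(), weighted_sum.shape)))
best_fuzzy = tuple(map(int, np.unravel_index(fuzzy.argmax(), fuzzy.shape)))
print(best_sum, best_fuzzy)  # (0, 0) (0, 1)
```

The additive score rewards one dominant layer, while the fuzzy product rewards balance across layers, which is one source of the divergent prospect maps the abstract describes.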

Keywords: fuzzy overlay, GIS overlay model, groundwater prospecting, Precambrian metamorphics, weighted overlay, weighted sum overlay

Procedia PDF Downloads 109
20093 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time, and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it through a general-purpose solver is viable only for small instances, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we performed extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%.
On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement over the objective function value obtained by the MIP model; this improvement ranges between 18% and 66%.
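
A fast constructive procedure of this general kind can be sketched as a greedy pass that activates teaching units by a quality-per-hour score while respecting budget, hour, and prerequisite constraints. The instance data and the scoring rule are invented; the authors' actual procedure is not specified here:

```python
# Hedged sketch of a greedy constructive heuristic for the training-offer
# selection step; repeated passes let prerequisites unlock dependent units.

units = {  # name: (quality, hours, cost, prerequisites) -- all values invented
    "algebra1": (8, 10, 400, []),
    "algebra2": (9, 10, 450, ["algebra1"]),
    "grammar":  (6,  8, 300, []),
    "essay":    (7,  8, 350, ["grammar"]),
}

def greedy_offer(units, budget, max_hours):
    chosen, cost, hours = [], 0, 0
    order = sorted(units, key=lambda u: units[u][0] / units[u][1], reverse=True)
    added = True
    while added:  # repeat passes so newly chosen units can unlock their dependents
        added = False
        for u in order:
            quality, h, c, prereqs = units[u]
            if (u not in chosen and cost + c <= budget and hours + h <= max_hours
                    and all(p in chosen for p in prereqs)):
                chosen.append(u)
                cost += c
                hours += h
                added = True
    return chosen, cost

print(greedy_offer(units, budget=1200, max_hours=30))
# (['algebra1', 'grammar', 'algebra2'], 1150); essay would exceed the budget
```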

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 588
20092 Proposal for a Generic Context Meta-Model

Authors: Jaouadi Imen, Ben Djemaa Raoudha, Ben Abdallah Hanene

Abstract:

Access to relevant information adapted to users' needs, preferences, and environment is a challenge in many running applications, and this challenge has led to the emergence of context-aware systems. To facilitate the development of this class of applications, it is necessary that they share a common context meta-model. In this article, we present our context meta-model, defined using the OMG Meta Object Facility (MOF). This meta-model is based on the analysis and synthesis of context concepts proposed in the literature.

Keywords: context, meta-model, MOF, awareness system

Procedia PDF Downloads 542
20091 Developing Norms for Sit and Reach Test in the Local Environment of Khyber Pakhtunkhwa, Pakistan

Authors: Hazratullah Khattak, Abdul Waheed Mughal, Inamullah Khattak

Abstract:

This study is envisaged as a vital contribution, as it develops norms for the Sit and Reach Test in the local environment of Khyber Pakhtunkhwa, Pakistan, for the 12-14-year age group, which will be used to measure the flexibility level of early adolescents. The Sit and Reach test was applied to 2,000 volunteers, 400 subjects from each of five selected districts (Peshawar, Nowshera, Karak, Dera Ismail Khan, and Swat; 20% of the total 25 districts), using a convenience sampling technique. The population for this study comprises all early adolescents aged 12-14 years (age 13 ± 0.63 years, height 154 ± 046, weight 46 ± 7.17 kg, BMI 19 ± 1.45) representing various public and private sector educational institutions of Khyber Pakhtunkhwa. As for the norms developed for the Sit and Reach test, a score below 6.8 inches falls in the poor category, 6.9 to 9.6 inches is below average, 9.7 to 10.8 inches is average, 10.9 to 13 inches is above average, and a score above 13 inches is considered excellent.
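
The reported cut-offs amount to a simple lookup, sketched below. The handling of a score of exactly 6.8 inches is an assumption, since the bands as stated leave a small gap between "below 6.8" and "6.9":

```python
# Direct encoding of the norm table reported above (scores in inches, ages 12-14).

def sit_and_reach_norm(score_in):
    if score_in <= 6.8:
        return "Poor"           # boundary at exactly 6.8 assumed to be Poor
    if score_in <= 9.6:
        return "Below average"
    if score_in <= 10.8:
        return "Average"
    if score_in <= 13:
        return "Above average"
    return "Excellent"

print(sit_and_reach_norm(10.0))  # Average
print(sit_and_reach_norm(13.5))  # Excellent
```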

Keywords: fitness, flexibility, norms, sit and reach

Procedia PDF Downloads 258
20090 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important considerations in the design of slender structures is Human-Structure Interaction (HSI): how moving people interact with structures, and the effect this has on their dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. Several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of this phenomenon by providing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a two-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model, which was applied to a prototype footbridge. Numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst) at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center-of-mass acceleration through successive numerical simulations, the model parameters were estimated.
In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model, through comparison of the measured and simulated structural responses at mid-span. It was found that the pedestrian model adequately reproduced the ground reaction force and the center-of-mass acceleration for normal and slow walking speeds, being less accurate for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Moreover, the interaction model estimated the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduces the center-of-mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework for estimating the dynamic response of structures under loads induced by walking pedestrians.
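
The parameter-fitting step can be illustrated with a toy least-squares fit of a periodic ground-reaction-force model to a noisy "measured" force. The one-harmonic model form, the grid search, and all numbers are invented for illustration and are much simpler than the bipedal model described above:

```python
import numpy as np

# Hedged sketch of fitting model parameters to a measured step force:
# F(t) = W * (1 + a1 * sin(2*pi*f*t)), fit by grid-search least squares.

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 200)
measured = 700 * (1 + 0.4 * np.sin(2 * np.pi * 2.0 * t)) + rng.normal(0, 10, 200)

def fit(t, force, weight=700):
    best = None
    for a1 in np.arange(0.1, 0.9, 0.05):      # dynamic load factor grid
        for f in np.arange(1.5, 2.5, 0.05):   # step frequency grid (Hz)
            model = weight * (1 + a1 * np.sin(2 * np.pi * f * t))
            err = float(np.sum((model - force) ** 2))
            if best is None or err < best[0]:
                best = (err, round(float(a1), 2), round(float(f), 2))
    return best[1:]

print(fit(t, measured))  # approximately (0.4, 2.0)
```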

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 115
20089 Developing Leadership and Teamwork Skills of Pre-Service Teachers through Learning Camp

Authors: Sirimanee Banjong

Abstract:

This study aimed to 1) develop pre-service teachers' leadership skills through camp-based learning, and 2) develop pre-service teachers' teamwork skills through camp-based learning. An applied research methodology was used, with a target group derived by purposive selection: 32 fourth-year students in the Early Childhood Education Program enrolled in a course entitled Seminar in Early Childhood Education during the second semester of the 2013 academic year. The treatment was camp-based learning activities that applied a PDCA process with four stages: 1) plan, 2) do, 3) check, and 4) act. The research instruments were a learning camp program, a camp-based learning management plan, a 5-level assessment form for leadership skills, and a 5-level assessment form for teamwork skills. Data were analyzed using descriptive statistics. The results were as follows: 1) pre-service teachers' leadership skills yielded a before-treatment average score of x̄ = 3.40 (S.D. = 0.62) and an after-treatment average of x̄ = 4.29 (S.D. = 0.66); 2) teamwork skills yielded a before-treatment average of x̄ = 3.31 (S.D. = 0.60) and an after-treatment average of x̄ = 4.42 (S.D. = 0.66). Both differences were statistically significant at the .05 level. Thus, the pre-service teachers' leadership and teamwork skills were significantly improved through the camp-based learning approach.

Keywords: learning camp, leadership skills, teamwork skills, pre-service teachers

Procedia PDF Downloads 342
20088 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty

Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos

Abstract:

Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. A cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and the development threat index from the NASA Socioeconomic Data and Applications Center; this simulation is used to model uncertainty in the problem. The research leverages state-of-the-art techniques from the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers such as Gurobi or CPLEX. Numerical results based on real data on the jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN that are used in practice for biodiversity conservation. Our method may better guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.
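
A cellular-automata development simulation of the kind used to generate uncertainty scenarios can be sketched in a few lines. The update rule, the uniform threat index, and the toroidal neighbourhood below are simplifying assumptions, not the paper's calibrated model:

```python
import numpy as np

# Hedged sketch: a cell develops with probability given by its (invented)
# threat index once it has at least one developed neighbour. Neighbour counts
# use toroidal (wrap-around) shifts for brevity.

def simulate(threat, steps, seed=0):
    rng = np.random.default_rng(seed)
    developed = np.zeros_like(threat, dtype=bool)
    developed[0, 0] = True  # seed: one initially developed cell
    for _ in range(steps):
        neighbours = sum(np.roll(developed, s, axis=a)
                         for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        grow = (neighbours > 0) & (rng.random(threat.shape) < threat)
        developed |= grow
    return developed

threat = np.full((10, 10), 0.3)  # uniform development threat, for illustration
out = simulate(threat, steps=20)
print(int(out.sum()))            # developed area spreads out from the seed cell
```

Re-running with different seeds yields the ensemble of development scenarios that a robust plan must hedge against.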

Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning

Procedia PDF Downloads 186
20087 Economic Valuation of Forest Landscape Function Using a Conditional Logit Model

Authors: A. J. Julius, E. Imoagene, O. A. Ganiyu

Abstract:

The purpose of this study is to estimate the economic value of the services and functions rendered by the forest landscape using a conditional logit model. For this study, attributes and levels of the forest landscape were chosen; specifically, the attributes include topographical forest type, forest type, forest density, recreational factors (side trip, accessibility of valley), and willingness to pay (WTP). Based on these factors, 48 choice sets in a balanced and orthogonal design were generated using Statistical Analysis System (SAS) 9.1. The efficiency of the questionnaire was 6.02 (D-error 0.1), and the choice sets and socio-economic variables were analyzed. To reduce the cognitive load on respondents, the 48 choice sets were divided into 4 blocks in the questionnaire, so that each respondent answered 12 choice sets. The study population consisted of citizens from seven metropolitan cities, including Ibadan, Ilorin, and Osogbo; annual WTP per household was elicited through an interview questionnaire, and a total of 267 completed copies were recovered. As a result, Osogbo had 0.45, and statistical similarities could not be found except for urban forests, forest density, the recreational factor, and the level of WTP. The average annual WTP per household for the forest landscape was 104,758 Naira (the Nigerian currency); based on the outcome of this model, the total economic value of the services and functions enjoyed from the Nigerian forest landscape reaches approximately 1.6 trillion Naira.
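
The conditional-logit machinery behind such a WTP estimate can be sketched briefly: choice probabilities are a softmax over alternative utilities, and marginal WTP is the ratio of an attribute coefficient to the cost coefficient. All coefficients and attribute values below are invented, not the study's estimates:

```python
import numpy as np

# Hedged sketch of conditional-logit choice probabilities and the WTP ratio.

def choice_probs(X, beta):
    """X: (alternatives x attributes) for one choice set; returns P(choose alt)."""
    u = X @ beta
    e = np.exp(u - u.max())  # numerically stabilized softmax over utilities
    return e / e.sum()

# attributes: [forest density, accessibility, annual cost in 1000 Naira]
beta = np.array([0.8, 0.5, -0.02])
X = np.array([[3.0, 1.0, 100.0],
              [2.0, 2.0, 50.0],
              [1.0, 1.0, 0.0]])  # third row: a cheap, low-quality opt-out
probs = choice_probs(X, beta)
print(np.round(probs, 3))  # [0.222 0.447 0.331]

wtp = -beta[0] / beta[2] * 1000  # marginal WTP per unit of forest density, in Naira
print(f"{wtp:.0f} Naira")        # 40000 Naira
```

In practice the coefficients are estimated by maximum likelihood over all respondents' choice sets before the WTP ratio is computed.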

Keywords: economic valuation, urban cities, services, forest landscape, logit model, Nigeria

Procedia PDF Downloads 108