Search results for: panel stochastic frontier models
6655 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, A. Elazim Negm
Abstract:
Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures that keep the built environment more sustainable. The most common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and on human health and the natural environment in particular. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. Green building design involves multiple parameters which may be interrelated, contradictory, vague, and of a qualitative/quantitative nature. This paper presents a comprehensive, critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture/buildings can be improved using the technologies employed for analysis, seeking optimal green solution strategies and models to assist in making the best possible decision among different alternatives.
Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models
Procedia PDF Downloads 475
6654 Parametric Estimation of U-Turn Vehicles
Authors: Yonas Masresha Aymeku
Abstract:
The purpose of capacity modelling at U-turns is to develop a relationship between capacity and its geometric characteristics. In fact, the few models available for the estimation of capacity at different transportation facilities do not provide specific guidelines for median openings. For this reason, an effort is made to estimate the capacity by collecting data sets from median openings on roads with different numbers of lanes in Hyderabad City, India. The wide difference (43%-59%) among the capacity values estimated by the existing models shows their limitations under mixed traffic situations. Thus, a distinct model is proposed for the estimation of the capacity of U-turn vehicles at median openings considering mixed traffic conditions, which would further prompt investigation of the different factors that might affect the capacity.
Keywords: geometric, guidelines, median, vehicles
Procedia PDF Downloads 68
6653 Frontier Dynamic Tracking in the Field of Urban Plant and Habitat Research: Data Visualization and Analysis Based on Journal Literature
Authors: Shao Qi
Abstract:
The article uses the CiteSpace knowledge graph analysis tool to sort and visualize the journal literature on urban plants and habitats in the Web of Science and China National Knowledge Infrastructure databases. Based on a comprehensive interpretation of the visualization results from the various data sources and a knowledge-mapping description of the intrinsic relationships between high-frequency keywords, the research hotspots, processes, and evolution trends in this field are analyzed. Relevant case studies are also conducted on the hotspot contents to explore means of landscape intervention and synthesize the understanding of research theories. The results show that (1) from 1999 to 2022, the research direction of urban plants and habitats gradually shifted from plant and animal extinction and biological invasion to human urban habitat creation, ecological restoration, and ecosystem services; (2) keyword emergence and keyword growth trend analyses show that habitat creation research has exhibited rapid and stable growth since 2017, while ecological restoration has received sustained attention since 2004. Future research hotspots on urban plants and habitats in China may focus on habitat creation and ecological restoration.
Keywords: research trends, visual analysis, habitat creation, ecological restoration
Procedia PDF Downloads 61
6652 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups
Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc
Abstract:
Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority's (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives for the purpose of re-evaluating food additives that were previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported through regular public calls for data by the food industry and European countries. These are combined, at the individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys from 19 European countries and considers six different population groups (infants, toddlers, children, adolescents, adults, and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with the corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed based on two different sets of data: (a) maximum permitted levels (MPLs) of use set down in EU legislation (defined as the regulatory maximum level exposure assessment scenario) and (b) usage levels and/or analytical occurrence data (defined as the refined exposure assessment scenario). The refined exposure assessment scenario is sub-divided into the brand-loyal consumer scenario and the non-brand-loyal consumer scenario. For the brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the highest reported usage/analytical level for one food group, and at the mean level for the remaining food groups. For the non-brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the mean reported usage/analytical level for all food groups. Additional exposure from sources other than direct addition of food additives (i.e., natural presence, contaminants, and carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers resulted in exposures of 8.1 and 20.7 mg/kg body weight/day, respectively. For the regulatory maximum level exposure assessment scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and food consumption data enables the assessment of chronic dietary exposure to food additives at more realistic levels.
Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data
Procedia PDF Downloads 326
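The following minimal Python sketch illustrates the type of per-individual exposure calculation the abstract describes (mean level for all food groups vs. maximum level for one group under brand loyalty). All column names, data values, and the choice of brand-loyal food group are invented for illustration and are not the EFSA database schema.

```python
import pandas as pd

# consumption: one row per individual x food group, in g per kg body weight per day
consumption = pd.DataFrame({
    "survey": ["S1", "S1", "S1", "S1"],
    "individual": [1, 1, 2, 2],
    "food_group": ["A", "B", "A", "B"],
    "g_per_kg_bw_day": [2.0, 1.5, 3.0, 0.5],
})
# occurrence: mean and maximum reported use/analytical level, in mg per g of food
occurrence = pd.DataFrame({
    "food_group": ["A", "B"],
    "mean_mg_per_g": [0.10, 0.05],
    "max_mg_per_g": [0.30, 0.20],
})

merged = consumption.merge(occurrence, on="food_group")

def exposure(df, brand_loyal_group=None):
    """mg additive / kg bw / day per individual; one group at max level if brand-loyal."""
    level = df["mean_mg_per_g"].copy()
    if brand_loyal_group is not None:
        level[df["food_group"] == brand_loyal_group] = df["max_mg_per_g"]
    return (df["g_per_kg_bw_day"] * level).groupby([df["survey"], df["individual"]]).sum()

non_brand_loyal = exposure(merged)                     # mean level for all groups
brand_loyal = exposure(merged, brand_loyal_group="A")  # max level for one group

for name, expo in [("non-brand-loyal", non_brand_loyal), ("brand-loyal", brand_loyal)]:
    stats = expo.groupby("survey").agg(["mean", lambda s: s.quantile(0.95)])
    print(name, stats.to_string(), sep="\n")
```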
6651 Effects of Level Densities and Those of the a-Parameter in the Framework of the Preequilibrium Model for 63,65Cu(n,xp) Reactions with Neutrons at 9 to 15 MeV
Authors: L. Yettou
Abstract:
In this study, calculations of the proton emission spectra produced by 63Cu(n,xp) and 65Cu(n,xp) reactions are performed in the framework of preequilibrium models using the EMPIRE and TALYS codes. Exciton model predictions combined with the Kalbach angular-distribution systematics and the Hybrid Monte Carlo Simulation (HMS) were used. The effects of level densities and those of the a-parameter on the calculations have been investigated. The comparison with experimental data shows a clear improvement in the Exciton Model and HMS calculations.
Keywords: preequilibrium models, level density, level density a-parameter, EMPIRE code, TALYS code
Procedia PDF Downloads 134
6650 Use of Multistage Transition Regression Models for Credit Card Income Prediction
Authors: Denys Osipenko, Jonathan Crook
Abstract:
Because of the variety of card holders' behaviour types and income sources, each consumer account can move among a variety of states. Each consumer account can be inactive, transactor, revolver, delinquent, or defaulted, and each state requires an individual model for income prediction. Estimating transition probabilities between states at the account level helps to avoid the memorylessness of the Markov chain approach. This paper investigates transition probability estimation approaches for credit card income prediction at the account level. The key question of the empirical research is which approach gives more accurate results: multinomial logistic regression or multistage conditional logistic regression with a binary target. Both models have shown moderate predictive power. Prediction accuracy for conditional logistic regression depends on the order of stages in the conditional binary logistic regression. On the other hand, multinomial logistic regression is easier to use and gives integrated estimates for all states without priorities. Thus, further investigations can concentrate on alternative modelling approaches such as discrete choice models.
Keywords: multinomial regression, conditional logistic regression, credit account state, transition probability
Procedia PDF Downloads 487
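A minimal sketch of the multinomial alternative discussed above: predicting the next account state from current behaviour, then weighting state-level incomes by the predicted probabilities. The feature names, state incomes, and data are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
STATES = ["inactive", "transactor", "revolver", "delinquent", "defaulted"]

# toy behavioural features: balance ratio, payment ratio, months on book
X = rng.random((1000, 3))
y = rng.choice(len(STATES), size=1000)  # observed next-month state (dummy labels)

# lbfgs fits a true multinomial (softmax) model by default
model = LogisticRegression(max_iter=1000).fit(X, y)

# transition probabilities for one account: P(next state | features), all states at once
p = model.predict_proba(X[:1])[0]
print(dict(zip(STATES, p.round(3))))

# expected income = sum over states of P(state) * income earned in that state
income_per_state = np.array([0.0, 1.0, 5.0, -2.0, -20.0])  # assumed values
print("expected income:", p @ income_per_state)
```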
6649 Delineation of Oil-Polluted Sites in Ibeno LGA, Nigeria, Using Microbiological and Physicochemical Characterization
Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John
Abstract:
Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil and the largest crude oil and condensate producer in Nigeria, has its operational base and an oil terminal, the Qua Iboe Terminal (QIT), located at Ibeno, Nigeria. Other oil companies, such as Network Exploration and Production Nigeria Ltd, Frontier Oil Ltd, Shell Petroleum Development Company Ltd, Elf Petroleum Nigeria Ltd, and Nigerian Agip Energy, a subsidiary of the Italian ENI E&P, operate onshore, on the continental shelf, and in the deep offshore of the Atlantic Ocean, respectively, with the coastal waters of Ibeno, Nigeria as the nearest shoreline. This study was designed to delineate the oil-polluted sites in Ibeno, Nigeria, using microbiological and physico-chemical characterization of soils, sediments, and ground and surface water samples from the study area. The results revealed significant recent hydrocarbon inputs into this environment, as observed from the high counts of hydrocarbonoclastic microorganisms, in excess of 1%, at all the stations sampled. Moreover, the high concentrations of THC, BTEX, and heavy metals in all the samples analyzed corroborate the recent crude oil input into the study area. The results also showed that the pollution of the different environmental media sampled was of varying degrees, following the trend: ground water > surface water > sediments > soils.
Keywords: microbiological characterization, oil-polluted sites, physico-chemical analyses, total hydrocarbon content
Procedia PDF Downloads 416
6648 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data
Authors: Ramzi Rihane, Yassine Benayed
Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the University of Bonn open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized the SMOTE oversampling technique, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments.
Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, Bi-LSTM, seizure detection
Procedia PDF Downloads 14
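A minimal Keras sketch of the hybrid CNN-LSTM architecture named above, for binary seizure/non-seizure classification of single-channel EEG segments. Layer sizes and hyperparameters are conventional assumptions, not the authors' settings; the segment length follows the Bonn dataset convention of 4097 samples.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEGMENT_LEN = 4097  # single-channel Bonn EEG segment length (dataset convention)

model = models.Sequential([
    layers.Input(shape=(SEGMENT_LEN, 1)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),  # local waveform features
    layers.MaxPooling1D(4),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(4),
    layers.LSTM(64),                                      # long-range temporal context
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),                # seizure probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.2, epochs=30, batch_size=32)
```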
6647 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models
Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur
Abstract:
In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. This sensor data is transmitted to a central MQTT server, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the server, the data are processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. This implementation achieves 99% accuracy.
Keywords: IoT, MQTT protocol, machine learning, sensor, publish, subscriber, agriculture, humidity
Procedia PDF Downloads 69
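A sketch of the sensor-to-server path described above, using the paho-mqtt client (v1.x API style). The broker address, topic, payload fields, and the placeholder recommendation rule are all assumptions for illustration, not the authors' implementation.

```python
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "broker.example.com", "farm/field1/sensors"

def recommend(reading):
    # placeholder for the trained model; a real system would load a fitted classifier
    return "maize" if reading["soil_moisture"] > 0.25 else "millet"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)  # e.g. {"temp": 24.1, "soil_moisture": 0.31}
    print("recommended crop:", recommend(reading))

# subscriber side (server running the AI/ML models)
server = mqtt.Client()
server.on_message = on_message
server.connect(BROKER)
server.subscribe(TOPIC, qos=1)  # QoS 1: at-least-once delivery

# publisher side (field node); QoS 1 keeps messages reliable on flaky rural links
node = mqtt.Client()
node.connect(BROKER)
node.publish(TOPIC, json.dumps({"temp": 24.1, "soil_moisture": 0.31}), qos=1)

server.loop_forever()
```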
6646 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios
Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed
Abstract:
In this paper we investigate the issue of market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in the Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student, and skewed Student distributions. The sample consists of daily data for 2006-2014 for 11 Islamic stock market indices. We conduct Kupiec and Engle and Manganelli tests to evaluate the performance of each model. The main findings of our empirical results show that (i) VaR models based on the Student and skewed Student distributions perform best, at the significance level of α=1%, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimations for both long and short trading positions at a significance level of α=5%.
Keywords: value-at-risk, risk management, Islamic finance, GARCH models
Procedia PDF Downloads 592
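A hedged sketch of one model family evaluated above: a GARCH(1,1) with Student-t innovations fitted with the `arch` package, from which one-step-ahead parametric VaR is computed for both long and short positions. The return series here is simulated; distributions, horizons, and scaling are illustrative assumptions.

```python
import numpy as np
from arch import arch_model
from scipy import stats

rng = np.random.default_rng(1)
returns = rng.standard_t(df=6, size=2000)  # toy daily returns, in %

am = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")

# one-step-ahead conditional variance -> parametric VaR for long/short positions
f = res.forecast(horizon=1)
sigma = np.sqrt(f.variance.values[-1, 0])
nu = res.params["nu"]                                  # estimated tail parameter
q = stats.t.ppf(0.01, df=nu) * np.sqrt((nu - 2) / nu)  # standardized t quantile
var_long, var_short = sigma * q, -sigma * q            # 1% VaR, both positions
print(f"1% VaR long: {var_long:.3f}%, short: {var_short:.3f}%")
```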
6645 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment
Authors: Fares Laouacheria, Said Kechida, Moncef Chabi
Abstract:
The objective of the study was hydrological routing modelling for the continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS). HEC-GeoHMS was used to transform data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the river catchment in order to estimate the runoff volume, which is used as input to the hydrological routing model. Two hydrological routing models were used in this study, namely the Muskingum and Muskingum-Cunge routing models. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and for determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor X and travel time K on the output results are significant, and that the Muskingum routing model was more sensitive to input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model
Procedia PDF Downloads 278
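A minimal sketch of classical Muskingum routing, the method whose K (travel time) and X (weighting factor) parameters the abstract discusses. The K, X values and the inflow hydrograph below are illustrative, not calibrated for the Moudjar catchment.

```python
import numpy as np

def muskingum_route(inflow, K=3.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph; K and dt in hours, 0 <= X <= 0.5."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom  # c0 + c1 + c2 == 1
    outflow = np.empty_like(inflow, dtype=float)
    outflow[0] = inflow[0]               # assume initial steady state
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

inflow = np.array([10, 20, 50, 80, 60, 40, 25, 15, 10], dtype=float)  # m^3/s
print(muskingum_route(inflow).round(1))
```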
6644 Decision Support System for Optimal Placement of Wind Turbines in Electric Distribution Grid
Authors: Ahmed Ouammi
Abstract:
This paper presents an integrated decision framework to support decision makers in the selection and optimal allocation of wind power plants in the electric grid. The developed approach aims to maximize the benefit of the project investment over the planning period. The proposed decision model considers the main cost components, meteorological data, environmental impacts, operation and regulation constraints, and territorial information. The decision framework is expressed as a stochastic constrained optimization problem with the aim of identifying suitable locations and the related optimal wind turbine technology, considering the operational constraints and maximizing the benefit. The developed decision support system is applied to a case study to demonstrate and validate its performance.
Keywords: decision support systems, electric power grid, optimization, wind energy
Procedia PDF Downloads 153
6643 Computational Study of Chromatographic Behavior of a Series of S-Triazine Pesticides Based on Their in Silico Biological and Lipophilicity Descriptors
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
In this paper, quantitative structure-retention relationship (QSRR) analysis was applied to correlate in silico biological and lipophilicity molecular descriptors with retention values for a set of selected s-triazine herbicides. The in silico generated biological and lipophilicity descriptors were discriminated using the generalized pair correlation method (GPCM). According to this method, a significant difference between independent variables can be detected even when they correlate almost equally with the dependent variable. Using the established multiple linear regression (MLR) models, some biological characteristics could be predicted. The established MLR models were evaluated statistically, and the most suitable models were selected and ranked using the sum of ranking differences (SRD) method, in which experimentally obtained average values serve as the reference. Additionally, the SRD method reveals similarities among the investigated s-triazine herbicides. These analyses were conducted in order to characterize the selected s-triazine herbicides for future investigations regarding their biodegradability. This study is financially supported by COST Action TD1305.
Keywords: descriptors, generalized pair correlation method, pesticides, sum of ranking differences
Procedia PDF Downloads 295
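A minimal sketch of the sum of ranking differences (SRD) idea used above: rank each candidate model's values, rank the reference (row-average) values, and sum the absolute rank differences. The toy matrix is invented; ranks here run 0..n-1, which leaves the differences unchanged.

```python
import numpy as np

# rows = herbicides, columns = candidate MLR models (toy predicted values)
pred = np.array([[1.2, 1.4, 1.1],
                 [2.3, 2.0, 2.4],
                 [0.8, 0.9, 0.7],
                 [1.9, 2.1, 1.8]])
reference = pred.mean(axis=1)  # row average as the reference, per the SRD method

ref_rank = reference.argsort().argsort()
for j in range(pred.shape[1]):
    model_rank = pred[:, j].argsort().argsort()
    srd = np.abs(model_rank - ref_rank).sum()  # lower SRD = closer to reference
    print(f"model {j}: SRD = {srd}")
```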
6642 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration
Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine
Abstract:
The complex oblique shock phenomenon can be simply assumed to be a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, and most researchers consider an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant-area-mixing (CAM) ejector and a constant-pressure-mixing (CPM) ejector, are considered, with different known geometries, operating conditions, and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design output geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. It was found that there is good agreement between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and it is proven that the inlet normal shock assumption results in a more accurate length. Taking into account previous 1-D models, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions
Procedia PDF Downloads 194
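For reference, the sketch below shows the ideal-gas 1-D normal-shock jump relations that such models apply at the assumed shock location. The Mach number and the specific-heat ratio (gamma = 1.4, i.e., not a refrigerant value) are illustrative assumptions.

```python
def normal_shock(M1, gamma=1.4):
    """Post-shock Mach number and static pressure/velocity ratios for M1 > 1."""
    M2 = ((1 + 0.5 * (gamma - 1) * M1**2) /
          (gamma * M1**2 - 0.5 * (gamma - 1))) ** 0.5
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)          # p2 / p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)  # rho2 / rho1
    u_ratio = 1 / rho_ratio                                      # u2 / u1 (continuity)
    return M2, p_ratio, u_ratio

M2, p21, u21 = normal_shock(1.8)
print(f"M2 = {M2:.3f}, p2/p1 = {p21:.2f}, u2/u1 = {u21:.2f}")
```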
6641 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction under elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting full-scale comprehensive experimental investigations of RC beams exposed to fire is difficult and cost-intensive, whereas finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different fire durations using the FE software package ABAQUS. The damaged plasticity model of concrete in ABAQUS was used to simulate the behavior of RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. It was found that the response of the developed FE models matched quite well with the experimental outcomes for beams without heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation. However, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams having different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire
Procedia PDF Downloads 142
6640 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”
Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani
Abstract:
The usual methods of measuring regional inequalities cannot reflect the internal changes of the country in terms of displacement between different development groups, and indicators of inequality are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the dynamics of the urban disparity transition in the country during the period 2006-2016, using the CIRD multidimensional index and the stochastic kernel density method. First, 25 indicators are selected in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital, and public facilities, and a two-stage principal component analysis methodology is developed to create a composite index of inequality. In the second stage, using a nonparametric analytical approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian province centers is tested, and long-run equilibrium is then shown based on the ergodic density. Also at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in 2006 and 2016 the highest level of development is related to Tehran, and Zahedan is at the lowest level of development. The results show that the central cities of the country are at the highest level of development, due to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces. Based on the results of the second stage, which examines the dynamics of regional inequality transmission in the country during 2006-2016, the distribution in the first year (2006) is not multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the rest of the distribution on the right at levels higher than -0.1. A convergence process is observed in the kernel distribution, and the graph points to a single main peak at about -0.6, with a small secondary peak at about 3. In the final year (2016), there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces is around -0.4. This year clearly exhibits a twin-peak density pattern, which indicates that cities tend to cluster into groups in terms of development. Also, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a twin-peak density pattern in 2016 at low and moderate inequality index levels, and the development index of the country diverges during the years 2006 to 2016.
Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density
Procedia PDF Downloads 124
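The sketch below illustrates the kernel-density step the abstract relies on: comparing the shape of the index distribution in two years and locating its peaks (single-peak vs. twin-peak patterns). The index values are synthetic, generated only to reproduce the two qualitative shapes described.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
cird_2006 = rng.normal(-0.6, 0.35, 31)                                  # single peak
cird_2016 = np.r_[rng.normal(-0.4, 0.2, 14), rng.normal(0.8, 0.3, 17)]  # twin peaks

grid = np.linspace(-1.5, 2.0, 200)
for year, x in [("2006", cird_2006), ("2016", cird_2016)]:
    density = gaussian_kde(x)(grid)
    is_peak = np.r_[False, (density[1:-1] > density[:-2]) &
                           (density[1:-1] > density[2:]), False]
    print(year, "density peaks near:", grid[is_peak].round(2))
```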
6639 Periodontal Disease or Cement Disease: New Frontier in the Treatment of Periodontal Disease in Dogs
Authors: C. Gallottini, W. Di Mari, A. Amaddeo, K. Barbaro, A. Dolci, G. Dolci, L. Gallottini, G. Barraco, S. Eramo
Abstract:
A group of 10 dogs (group A) with stage-three periodontal disease was subjected to regenerative therapy of the periodontal tissues using nano-hydroxyapatite (NHA). Under general anesthesia, these animals were treated by ultrasonic scaling and root planing and, finally, by a mucogingival flap in which NHA was applied. The flap was closed and sutured with simple stitches. Another group of 10 dogs (group B), the control group, was treated only by scaling and root planing. No patient was subjected to antibiotic therapy. After three months, a check was made by inspection of the oral cavity, radiography, and bone biopsy at the alveolar level. Group A showed a total restitutio ad integrum of the periodontal structures, whereas in group B mild gingivitis persisted in 70% of cases and the condition remained unchanged in 30%. Numerous experimental studies in both animals and humans have documented that grafts of porous hydroxyapatite are rapidly invaded by fibrovascular tissue, which is subsequently converted into mature lamellar bone tissue by activated osteoblasts. Since we acted on the removal of necrotic cementum and rehabilitated the root tissue by polishing, without intervening in the ligament but only at the anatomical-functional interface of the cementoblasts, the positive clinical evolution can be attributed to the cementum component alone. From this perspective, periodontal disease may in fact be a cementum disease, with all the other clinical elements being nothing more than an accompanying clinical pathology.
Keywords: nano-hydroxyapatite, periodontal disease, cement disease, regenerative therapy
Procedia PDF Downloads 450
6638 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances
Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels
Abstract:
The prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performances were constructed by the Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models to accurately identify the most important predictors and yield valid range estimations of the Step 1 and Step 2 scores. The application of the simulation modeling approach was deemed an effective way of predicting student performances on licensure examinations. Also, sensitivity analysis (a/k/a what-if analysis) in the simulation models was used to predict the magnitude of changes in Step 1 and Step 2 scores caused by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified student applicants for interviews and document the effectiveness of basic science education programs based on the simulation results.
Keywords: prediction model, sensitivity analysis, simulation method, USMLE
Procedia PDF Downloads 340
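A hedged sketch of the Monte Carlo regression idea described above: sampling coefficient uncertainty turns a point prediction into a score range, and re-running with changed inputs gives the what-if analysis. The predictor set, coefficients, and standard errors are all invented for illustration, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)

# assumed fitted model: Step1 = b0 + b1*MCAT_verbal + b2*NBME_basic_science
beta_hat = np.array([120.0, 2.5, 0.8])
beta_se = np.array([10.0, 0.6, 0.1])  # standard errors from the regression

def simulate_step1(mcat_verbal, nbme, n_draws=10_000):
    betas = rng.normal(beta_hat, beta_se, size=(n_draws, 3))  # coefficient draws
    x = np.array([1.0, mcat_verbal, nbme])
    return betas @ x

scores = simulate_step1(mcat_verbal=10, nbme=85)
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"predicted Step 1 range: {lo:.0f} - {hi:.0f} (median {np.median(scores):.0f})")

# sensitivity ("what-if"): raise the NBME score by 5 points
print(f"shift from +5 NBME: {np.median(simulate_step1(10, 90)) - np.median(scores):+.1f}")
```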
6637 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption that can handle large-scale purposes. The model incorporated the fouling attachments as well as chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter and a latex effluent with a wide range of particle size distributions were utilized to validate the models. The aggregation and sphericity of the particles had a significant effect on membrane fouling.
Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
Procedia PDF Downloads 470
6636 The Effect of Filter Design and Face Velocity on Air Filter Performance
Authors: Iyad Al-Attar
Abstract:
Air filters installed in HVAC equipment and in gas turbines for power generation confront several atmospheric contaminants in various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants are capable of deteriorating components and fouling the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Therefore, filter design must take into account face velocities, pleat count, and the corresponding surface area in order to verify filter performance characteristics (efficiency and pressure drop). In the experimental work undertaken in the current study, two groups of four filters with different pleating densities were investigated for their initial pressure drop response and fractional efficiencies. The pleating densities used for this study were 28, 30, 32, and 34 pleats per 100 mm for each pleated panel, measured at ten different flow rates ranging from 500 to 5000 m3/h in increments of 500 m3/h. The experimental work has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The surface-area losses of the filtration media are due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or filtration-medium compression. It is evident from the entire array of experiments that as the particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached; beyond the MPPS, the efficiency increases with increasing particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while the pleating density and orientation do not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count which satisfies both initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action, not on the total surface area the pleat density provides.
Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop
Procedia PDF Downloads 135
6635 Determination of Brominated Flame Retardants in Recycled Plastic Toys Using Thermal Desorption GC/MS
Authors: Athena Nguyen, Rojin Belganeh
Abstract:
In the plastics recycling industry, waste plastics are converted into monomers and other useful molecules by chemical reactions, and the thermal energy generated by incineration is recovered when waste plastics melt. During the process, products containing flame retardants enter the recycling stream; brominated flame retardants (BFRs) are often used to reduce the flammability of products. Some of the originally formulated brominated flame retardant additives, such as PBDE and PBB, are restricted by the RoHS Directive, so the determination of BFRs other than those restricted by the RoHS Directive is required. Frontier Lab developed a pyrolyzer based on a vertical micro-furnace design. The multi-mode pyrolyzer, with different modes of operation including evolved gas analysis (EGA), flash pyrolysis, thermal desorption, and heart cutting, allows users to choose among these techniques for their analysis purposes. The method requires very little sample preparation. The first step is to perform an EGA run using a temperature program; this technique provides information about the thermal behavior of the sample. The EGA thermogram is then used to determine the next steps in the analysis process. In this presentation, with an optimal thermal temperature zone identified from the EGA thermogram, thermal desorption GC/MS is the chosen technique for the determination of brominated flame retardants in recycled plastic toys. Five types of general-purpose brominated flame retardants other than those restricted by the RoHS Directive are determined by the standard addition method.
Keywords: gas chromatography/mass spectrometry, pyrolysis, pyrolyzer, thermal desorption-GC/MS
Procedia PDF Downloads 193
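A quick sketch of the standard addition quantification named above: spike the sample with known amounts of standard, fit response vs. amount added, and extrapolate to zero response. The spike levels and peak areas are toy values, not measurements from the presentation.

```python
import numpy as np

added = np.array([0.0, 10.0, 20.0, 30.0])         # ng of BFR standard added
response = np.array([120.0, 245.0, 362.0, 488.0])  # GC/MS peak area (toy values)

slope, intercept = np.polyfit(added, response, 1)
original_amount = intercept / slope                # magnitude of the x-intercept
print(f"BFR in sample: {original_amount:.1f} ng")
```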
6634 Assessment of Korea's Natural Gas Portfolio Considering Panama Canal Expansion
Authors: Juhan Kim, Jinsoo Kim
Abstract:
South Korea cannot import natural gas in any form other than LNG because of the division of South and North Korea. Further, the high proportion of natural gas in the national energy mix makes this resource crucial for energy security in Korea. The expansion of the Panama Canal will allow for reducing the cost of shipping between the Far East and the U.S. East Coast, and can therefore have significant impacts on South Korea. In this situation, we review the optimal natural gas portfolio, considering the uniqueness of the Korean natural gas market and the expansion of the Panama Canal. To assess Korea's optimal natural gas portfolio, we developed a natural gas portfolio model comprising two steps. First, to obtain the optimal ratio of long-term to spot contracts, the study examines the price levels and the correlation between spot and long-term contracts using the Markowitz portfolio model; the optimal long-term/spot contract ratio follows the efficient frontier of the cost/risk level related to this price level and degree of correlation. Second, by applying the obtained long-term contract purchase ratio as a constraint in a linear programming portfolio model, we determine the optimal natural gas import portfolio that minimizes total intangible and tangible costs. Using this model, we derive the optimal natural gas portfolio considering the expansion of the Panama Canal. Based on these results, we assess the portfolio for natural gas imports to Korea from the perspective of energy security and present some relevant policy proposals.
Keywords: natural gas, Panama Canal, portfolio analysis, South Korea
Procedia PDF Downloads 291
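An illustrative sketch of the two-step structure described above: a Markowitz cost/risk trade-off fixes the long-term contract share, which then enters a linear program as a constraint. All prices, covariances, supplier costs, and bounds are invented; only the structure follows the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Step 1 - Markowitz trade-off between long-term contract and spot purchases
mu = np.array([10.0, 8.5])                  # expected $/MMBtu: [long-term, spot]
cov = np.array([[0.3, 0.1], [0.1, 2.0]])    # spot price is far more volatile
w = np.linspace(0, 1, 101)                  # candidate long-term shares
risk = np.sqrt(w**2 * cov[0, 0] + (1 - w)**2 * cov[1, 1] + 2 * w * (1 - w) * cov[0, 1])
w_star = w[np.argmin(risk)]                 # e.g. pick the minimum-variance point
print(f"long-term contract share: {w_star:.2f}")

# Step 2 - linear program: allocate demand among suppliers at minimum total cost,
# holding the long-term share from step 1 as an equality constraint
c = np.array([9.8, 10.3, 8.2])              # unit cost per supplier (2 LT, 1 spot)
A_eq = [[1, 1, 1],                          # shares sum to total demand (normalized)
        [1, 1, 0]]                          # long-term suppliers meet the LT share
b_eq = [1.0, w_star]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 0.7)] * 3)
print("import shares:", res.x.round(2))
```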
6633 Structural Performance of Mechanically Connected Stone Panels under Cyclic Loading: Application to Aesthetic and Environmental Building Skin Design
Authors: Michel Soto Chalhoub
Abstract:
Building designers in the Mediterranean region and other parts of the world utilize natural stone panels on exterior façades as a skin cover. This type of finishing is intended not only for aesthetic reasons but also for environmental ones. Stone has been used in construction since the earliest ages of civilization, and to date some of the most appealing buildings owe their beauty to stone finishing. Stone also provides warmth in winter and freshness in summer, as it moderates heat transfer and absorbs radiation. However, as structural codes became increasingly stringent about the dynamic performance of buildings, it became essential to study the performance of stone panels under cyclic loading, a condition that arises when the building is subjected to wind or earthquakes. The present paper studies the performance of stone panels attached with mechanical connectors when subjected to load reversal. We present a theoretical model that addresses modes of failure in the steel connectors, by yield, and modes of failure in the stone, by fracture. We then provide an experimental set-up and test results for rectangular stone panels of varying thickness. When the building is subjected to an earthquake, the rectangular panels within its structural system undergo shear deformations, which in turn impart stress into the stone cover. Rectangular stone panels, which typically range from 40 cm x 80 cm to 60 cm x 120 cm, need to be designed to withstand transverse loading from the direct application of lateral loads and, simultaneously, in-plane loading (membrane stress) caused by inter-story drift and overall building lateral deflection. The results show correlation between the theoretical model, which we derive from solid mechanics fundamentals, and the experimental results, and they lead to practical design recommendations. We find that for panel thicknesses below a certain threshold, it is more advantageous to utilize structural adhesive materials to connect stone panels to the main structural system of the building. For larger panel thicknesses, it is recommended to utilize mechanical connectors with special detailing to ensure a minimum level of ductility and energy dissipation.
Keywords: solid mechanics, cyclic loading, mechanical connectors, natural stone, seismic, wind, building skin
Procedia PDF Downloads 255
6632 Voltage Profile Enhancement in the Unbalanced Distribution Systems during Fault Conditions
Authors: K. Jithendra Gowd, Ch. Sai Babu, S. Sivanagaraju
Abstract:
Electric power systems are daily exposed to service interruptions, mainly due to faults and accidental human interference. Short-circuit currents are responsible for several types of disturbances in power systems: fault currents are high, and voltages are reduced at the time of a fault. This paper presents two suitable methods, consideration of fault resistance and inclusion of a Distributed Generator, which are implemented and analyzed for the enhancement of the voltage profile during fault conditions. Fault resistance is a critical parameter of electric power system operation due to its stochastic nature; if not considered, this parameter may interfere with fault analysis studies and protection scheme efficiency. The proposed methods are tested on the IEEE 37-bus test system, and the results are compared.
Keywords: distributed generation, electrical distribution systems, fault resistance
Procedia PDF Downloads 516
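A back-of-envelope sketch of why fault resistance matters for the voltage profile, per the abstract: a bolted fault (Rf = 0) depresses the faulted bus voltage far more than a resistive fault. The per-unit source and line values are assumed for illustration, not taken from the IEEE 37-bus system.

```python
V_source = 1.0 + 0j     # per-unit source voltage
Z_line = 0.05 + 0.25j   # per-unit impedance from source to fault point

for Rf in [0.0, 0.05, 0.20]:  # assumed fault resistances, per unit
    I_fault = V_source / (Z_line + Rf)
    V_bus = I_fault * Rf      # voltage remaining at the faulted bus
    print(f"Rf={Rf:.2f}: |I|={abs(I_fault):.2f} pu, |V_bus|={abs(V_bus):.2f} pu")
```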
6631 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method
Authors: Mohammad Reza Fazeli
Abstract:
Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the most significant structural revolutions in history, one that has given organizations high-tech and tactical dominance. Despite these benefits, the undefined and non-transparent nature of the returns on investment in digital transformation has deterred many organizations from attempting this area of the industry. One of the important frameworks for understanding digital transformation in organizations is the digital transformation maturity model, which comprises two main parts: the dimensions of digital transformation maturity and the stages of digital transformation maturity. Mediating factors between digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For successful technology adoption processes, organizational development and human resources must go hand in hand, supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature showed that none of the 18 maturity models in the field of digital transformation fully meet the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework can potentially help systematize assessment processes that create opportunities for change in processes and organizations enabled by digital initiatives, as well as long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented. It is also clearly evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.
Keywords: digital transformation, organizational performance, maturity models, maturity assessment
Procedia PDF Downloads 107
6630 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions
Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh
Abstract:
Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread use for evaluating the performance of production systems and activities (decision-making units, DMUs) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows the estimation of crucial parameter information while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze aspects like production, resource allocation processes, and resource efficiency enhancement within the system; the results obtained can be instrumental in making informed future decisions. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of inverse data envelopment analysis models with network structures. To address these pitfalls, novel protocols are proposed that circumvent the shortcomings effectively. Several theoretical and applied problems are then examined and resolved through insightful case studies.
Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility
Procedia PDF Downloads 52
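For orientation, the sketch below shows the basic input-oriented CCR DEA linear program, the single-stage building block that network and inverse DEA models extend; it is not the paper's series-network or inverse formulation. The three-DMU data are toy numbers.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs, rows = DMUs
Y = np.array([[1.0], [1.0], [1.2]])                 # outputs

def ccr_efficiency(o):
    """Envelopment form: min theta s.t. lambda'X <= theta*x_o, lambda'Y >= y_o."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]          # variables: [theta, lambda_1..lambda_n]
    A_in = np.c_[-X[o], X.T]             # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]     # -sum_j lam_j y_rj <= -y_ro
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(3):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```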
6629 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling
Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang
Abstract:
Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in predicting vehicle volume: it cannot account for the crossed effect of urban configurational characteristics and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at the urban scale, interacting with the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, and the AB model was developed by incorporating transformed space syntax components of the MRA models into the agents' spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characteristics and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrates the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characteristics and land-use patterns at the street-segment level.
Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model
Procedia PDF Downloads 146
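A hedged sketch of the random-intercept multilevel formulation described above, using statsmodels. The variable names stand in for syntactic and socio-economic measures, and the data are synthetic, not the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "district": rng.choice(list("ABCDE"), n),   # urban-context grouping
    "integration": rng.normal(1.0, 0.3, n),     # space syntax variable (assumed)
    "landuse_density": rng.normal(0.5, 0.2, n),
})
district_effect = df["district"].map(dict(zip("ABCDE", rng.normal(0, 0.4, 5))))
df["log_volume"] = (2 + 1.5 * df["integration"] + 0.8 * df["landuse_density"]
                    + district_effect + rng.normal(0, 0.3, n))

# a random intercept per district lets configurational effects vary by urban context
m = smf.mixedlm("log_volume ~ integration + landuse_density",
                df, groups=df["district"]).fit()
print(m.summary())
```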
6628 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads
Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan
Abstract:
Traffic management is one of the biggest issues on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it decides the arrival rate of vehicles at intersections, which are the major points of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Trees (ET), and Extreme Gradient Boosting (XGB), are employed for this purpose. The models' performance was validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
Keywords: stream speed, urban roads, machine learning, traffic flow
Procedia PDF Downloads 70
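A minimal sketch of the best-performing model named above: a Random Forest predicting link speed from categorized volume and road width. The data-generating relationship and all values are synthetic stand-ins for the field data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 2000
volume = rng.integers(0, 4, n)      # categorized traffic volume (0-3)
width = rng.uniform(5.5, 14.0, n)   # carriageway width, metres
speed = 55 + 2.2 * width - 8.0 * volume + rng.normal(0, 4, n)  # synthetic truth

X = np.column_stack([volume, width])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out links:", round(r2_score(y_te, rf.predict(X_te)), 3))
print("feature importances [volume, width]:", rf.feature_importances_.round(2))
```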
6627 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is within the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimization path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel time weight of 0, and each intermediate node is replaced by two nodes u and v with an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs, with five travel time limitations and five demand requirements set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. The computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently, and computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
Procedia PDF Downloads 221
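The node-splitting transformation described above can be sketched with networkx: each intermediate node i becomes a pair (i,'in') -> (i,'out') whose connecting arc carries the node's travel-time weight, so node time becomes an ordinary arc cost in a min-cost max-flow computation. The tiny graph, capacities, and weights below are invented for illustration; the probabilistic state-space decomposition is not shown.

```python
import networkx as nx

G = nx.DiGraph()
node_time = {"a": 2, "b": 1}  # per-unit travel time at intermediate nodes
edges = [("s", "a", 3), ("s", "b", 2), ("a", "t", 2), ("b", "t", 3), ("a", "b", 1)]

for n, t in node_time.items():
    # the new "perfect node" arc carrying the node's travel-time weight
    G.add_edge((n, "in"), (n, "out"), capacity=10, weight=t)

def endpoint(n, side):
    return (n, side) if n in node_time else n

for u, v, cap in edges:
    # original arcs get travel time weight 0, per the transformation
    G.add_edge(endpoint(u, "out"), endpoint(v, "in"), capacity=cap, weight=0)

flow = nx.max_flow_min_cost(G, "s", "t")  # min total travel time at max flow
print("total travel time:", nx.cost_of_flow(G, flow))
```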
6626 Model of Optimal Centroids Approach for Multivariate Data Classification
Authors: Pham Van Nha, Le Cam Binh
Abstract:
Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm inspired by the natural behavior of birds and fish in migration and foraging for food. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO's ideas are simple and easy to understand, but PSO has mostly been applied to simple model problems. We think that in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, the Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to prove the effectiveness of MOC compared with several previously proposed schemes.
Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization
Procedia PDF Downloads 208
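A compact PSO sketch in the spirit of the paper: each particle encodes k candidate centroids and is scored by the within-cluster sum of squared errors on toy data. The swarm size, inertia, and acceleration coefficients are conventional defaults, not the authors' MOC settings.

```python
import numpy as np

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
K, D, N_PART, ITERS = 2, 2, 20, 100
W, C1, C2 = 0.72, 1.49, 1.49  # inertia and acceleration coefficients

def sse(flat):                # fitness: assign points to their nearest centroid
    c = flat.reshape(K, D)
    d2 = ((data[:, None, :] - c[None]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

pos = rng.uniform(data.min(), data.max(), (N_PART, K * D))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("optimal centroids:\n", gbest.reshape(K, D).round(2))
```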