Search results for: computational model(s)
7159 Numerical Investigation of Fluid Flow and Temperature Distribution on Power Transformer Windings Using OpenFOAM
Authors: Saeed Khandan Siar, Stefan Tenbohlen, Christian Breuer, Raphael Lebreton
Abstract:
The goal of this article is to investigate the detailed temperature distribution and fluid flow of an oil-cooled winding of a power transformer by means of computational fluid dynamics (CFD). The experimental setup consists of three passes of a zig-zag-cooled disc-type winding, in which losses are modeled by heating cartridges in each winding segment. A precise temperature sensor measures the temperature of each turn. The laboratory setup allows exact control of the boundary conditions, e.g., the oil flow rate and the inlet temperature. Furthermore, a simulation model is solved using the open-source CFD solver OpenFOAM and validated against the experimental results. The model uses laminar or turbulent flow treatments depending on the oil mass flow rate. The good agreement of the simulation results with the experimental measurements validates the model.
Keywords: CFD, conjugated heat transfer, power transformers, temperature distribution
Procedia PDF Downloads 420
7158 A Density Functional Theory Computational Study on the Inhibiting Action of Some Derivatives of 1,8-Bis(Benzylideneamino)Naphthalene against Aluminum Corrosion
Authors: Taher S. Ababneh, Taghreed M. A. Jazzazi, Tareq M. A. Alshboul
Abstract:
The inhibiting action against aluminum corrosion of three derivatives of the 1,8-bis(benzylideneamino)naphthalene (BN) Schiff base has been investigated by means of DFT quantum chemical calculations at the B3LYP/6-31G(d) level of theory. The derivatives (CBN, NBN and MBN) were prepared from the condensation reaction of 1,8-diaminonaphthalene with substituted benzaldehydes (4-CN, 3-NO₂ and 3,4-(OMe)₂, respectively). Calculations were conducted to study the adsorption of each Schiff base on the aluminum surface and to evaluate its potential as a corrosion inhibitor. The computed structural features and electronic properties of each derivative, such as relative energies and the energies of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO), are reported. Thermodynamic functions and quantum chemical parameters such as the hardness of the inhibitor, the softness and the electrophilicity index were calculated to identify the derivative with the highest inhibition efficiency.
Keywords: corrosion, aluminum, DFT calculation, 1,8-diaminonaphthalene, benzaldehyde
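For reference, the conceptual-DFT reactivity descriptors named in this abstract are commonly obtained from the frontier orbital energies as follows (a standard Koopmans-type formulation; the authors' exact working equations are not given in the abstract, and some authors define the softness as 1/η rather than 1/(2η)):

```latex
% Chemical potential, hardness, softness and electrophilicity index
\mu \approx \frac{E_{\mathrm{HOMO}} + E_{\mathrm{LUMO}}}{2}, \qquad
\eta \approx \frac{E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}}{2}, \qquad
S = \frac{1}{2\eta}, \qquad
\omega = \frac{\mu^{2}}{2\eta}
```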
Procedia PDF Downloads 346
7157 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration
Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine
Abstract:
The complex oblique shock phenomenon can be simply assumed to be a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, and most researchers choose an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two different ejector experimental test benches, a constant area-mixing ejector (CAM) and a constant pressure-mixing ejector (CPM), are considered, with different known geometries, operating conditions and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies of the different ejector parts and the critical back pressure, a CFD model was built and validated by experimental data for the two types of ejectors. These reference data are then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the design output geometry calculated by the 1-D model is compared directly with the corresponding experimental geometry. It was found that the ejector dimensions obtained by the 1-D model, for both CAM and CPM, agree well with the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and that assuming the normal shock at the inlet of the constant-area duct yields the more accurate length. Taking previous 1-D models into account, the results suggest placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions
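For context, the property jump that 1-D ejector models idealize as a normal shock obeys the classical relations for a calorically perfect gas (standard gas dynamics, not specific to this paper; M₁ and M₂ are the upstream and downstream Mach numbers and γ the specific heat ratio):

```latex
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\left(M_1^{2} - 1\right), \qquad
M_2^{2} = \frac{1 + \dfrac{\gamma - 1}{2}\, M_1^{2}}{\gamma M_1^{2} - \dfrac{\gamma - 1}{2}}
```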
Procedia PDF Downloads 192
7156 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting full-scale comprehensive experimental investigations of RC beams exposed to fire is difficult and cost-intensive, and finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The damaged plasticity model of concrete in ABAQUS was used to simulate the behavior of RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. The response of the developed FE models matched quite well with the experimental outcomes for beams not exposed to heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams of different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire
Procedia PDF Downloads 137
7155 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances
Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels
Abstract:
Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performances were constructed with a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimations of Step 1 and Step 2 scores. The application of the simulation modeling approach was deemed an effective way of predicting student performance on licensure examinations. Sensitivity analysis (a/k/a what-if analysis) in the simulation models was used to predict the magnitude of changes in Steps 1 and 2 scores caused by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified student applicants for interviews and document the effectiveness of the basic science education program based on the simulation results.
Keywords: prediction model, sensitivity analysis, simulation method, USMLE
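As an illustration of the general technique only (the authors' actual regression coefficients and data are not reported in the abstract; the numbers below are placeholders), a Monte Carlo simulation over a fitted linear regression with a what-if comparison might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted regression: Step 1 score ~ intercept + subject-board scores.
# Coefficients and residual noise level are illustrative placeholders only.
beta = np.array([60.0, 0.15, 0.12, 0.10])   # intercept + 3 subject-score weights
resid_sd = 8.0                               # residual standard deviation

def simulate_step1(subject_scores, n_draws=10_000):
    """Draw Step 1 scores by resampling the regression residual,
    returning a valid-range estimate (2.5th, 50th, 97.5th percentiles)."""
    x = np.concatenate(([1.0], subject_scores))
    mean = beta @ x
    draws = rng.normal(mean, resid_sd, size=n_draws)
    return np.percentile(draws, [2.5, 50, 97.5])

# Sensitivity (what-if) analysis: raise one subject score and compare ranges.
base = simulate_step1(np.array([500.0, 480.0, 510.0]))
bumped = simulate_step1(np.array([550.0, 480.0, 510.0]))
print("base 95% range:", base)
print("bumped 95% range:", bumped)
```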
Procedia PDF Downloads 338
7154 Two Stage Assembly Flowshop Scheduling Problem Minimizing Total Tardiness
Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek
Abstract:
The two-stage assembly flowshop scheduling problem has many real-life applications. To the best of our knowledge, the two-stage assembly flowshop scheduling problem with a total tardiness performance measure and separate setup times has not been addressed so far; hence, it is addressed in this paper. Different dominance relations are developed and several algorithms are proposed. Extensive computational experiments are conducted to evaluate the proposed algorithms. The computational experiments show that one of the algorithms performs much better than the others. Moreover, the experiments show that the best-performing algorithm also performs much better than the best existing algorithm in the literature for the case of zero setup times. Therefore, the proposed best-performing algorithm can be used not only for problems with separate setup times but also for the case of zero setup times.
Keywords: scheduling, assembly flowshop, total tardiness, algorithm
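For reference, the total tardiness objective minimized here is defined in the standard scheduling way (C_j is the completion time of job j and d_j its due date):

```latex
\min \; \sum_{j=1}^{n} T_j, \qquad T_j = \max\!\left(0,\; C_j - d_j\right)
```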
Procedia PDF Downloads 342
7153 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop mathematical models for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. The models were also developed for accurate prediction of power consumption and can handle large-scale applications. The models incorporate the fouling attachments as well as the chemical and physical factors in membrane fouling, for accurate prediction and scale-up application. Both polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with a MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and sphericity of the particles showed a significant effect on membrane fouling.
Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
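The abstract does not state the model equations. A common baseline against which such fouling models are built is Hermia's unified blocking law (originally derived for dead-end, constant-pressure filtration, so only a reference point here, not necessarily the authors' model), where t is filtration time, V the cumulative permeate volume, and the exponent n selects the mechanism (n = 2 complete blocking, 1.5 standard blocking, 1 intermediate blocking, 0 cake filtration):

```latex
\frac{d^{2}t}{dV^{2}} = k \left(\frac{dt}{dV}\right)^{n}
```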
Procedia PDF Downloads 469
7152 Elitist Self-Adaptive Step-Size Search in Optimum Sizing of Steel Structures
Authors: Oğuzhan Hasançebi, Saeid Kazemzadeh Azad
Abstract:
Keywords: structural design optimization, optimal sizing, metaheuristics, self-adaptive step-size search, steel trusses, steel frames
Procedia PDF Downloads 373
7151 Application of Computational Chemistry for Searching Anticancer Derivatives of 2-Phenazinamines as Bcr-Abl Tyrosine Kinase Inhibitors
Authors: Gajanan M. Sonwane
Abstract:
Computational studies on 2-phenazinamines and their protein targets have been carried out to design compounds with potential anticancer activity. This strategy of designing compounds possessing selectivity over a specific tyrosine kinase has been pursued through G-QSAR and molecular docking studies. The objective of this research was to design newer 2-phenazinamine derivatives as Bcr-Abl tyrosine kinase inhibitors by G-QSAR and molecular docking studies, followed by wet-lab studies and evaluation of their anticancer potential. The computational chemistry was done using VLife MDS 4.3 and AutoDock 4.2, followed by wet-lab experiments to synthesize the 2-phenazinamine derivatives. The 2D chemical structures of the ligands were drawn with ChemDraw 2D Ultra 8.0 and converted into 3D, then optimized using the semi-empirical package MOPAC. The protein structure was retrieved from the RCSB Protein Data Bank as a PDB file. The binding interactions of the protein and ligands were visualized using PyMOL. The molecular properties of the designed compounds were predicted in silico using the Osiris property explorer. The parent compound, 2-phenazinamine, was synthesized by reduction of 2,4-dinitro-N-phenyl-benzenamine in the presence of tin chloride, followed by cyclization in the presence of nitrobenzene and magnesium sulfate. Derivatization at the amino function of 2-phenazinamine was performed by treating the parent compound with various aldehydes in the presence of dicyclohexylcarbodiimide (DCC) and urea to afford 2-(2-chlorophenyl)-3-(phenazine-2-yl)thiazolidine-4-one. In total, 39 novel derivatives of 2-phenazinamine were synthesized and evaluated for antioxidant activity, antiproliferative activity on onion bulb, and anticancer activity on a cell line, showing significant competition with the blockbuster drug imatinib.
Keywords: computer-aided drug design, tyrosine kinases, anticancer, docking
Procedia PDF Downloads 138
7150 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method
Authors: Mohammad Reza Fazeli
Abstract:
Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the most important subjects in the history of structural revolutions, one that can confer high-tech and strategic dominance on an organization. Despite these benefits, the undefined and non-transparent nature of the returns on investing in digital transformation has kept many organizations from attempting it. One of the important frameworks for understanding digital transformation in organizations is the digital transformation maturity model. This model comprises two main parts: digital transformation maturity dimensions and digital transformation maturity stages. Mediating factors between digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For successful technology adoption processes, organizational development and human resources must go hand in hand and be supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature showed that none of the 18 maturity models in the field of digital transformation fully meets all the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework potentially helps systematize assessment processes that create opportunities for change in processes and organizations enabled by digital initiatives, as well as long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented. It is also clearly evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.
Keywords: digital transformation, organizational performance, maturity models, maturity assessment
Procedia PDF Downloads 105
7149 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions
Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh
Abstract:
Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread use for evaluating the performance of production systems and activities (decision-making units, DMUs) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows crucial parameters to be estimated while the efficiency score is kept unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze aspects like production, resource allocation processes, and resource efficiency enhancement within the system. The results obtained can be instrumental in making informed decisions in the future. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of the inverse model of data envelopment analysis with network structures. To address these pitfalls, novel protocols are proposed to circumvent these shortcomings effectively. Subsequently, several theoretical and applied problems are examined and resolved through insightful case studies.
Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility
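For orientation, the classical (non-inverse, single-stage) input-oriented CCR envelopment model that network and inverse DEA formulations build on reads as follows, where x_{ij} and y_{rj} are the inputs and outputs of DMU j and DMU o is the unit under evaluation (the paper's series-network and inverse formulations extend this basic program):

```latex
\min \; \theta \quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} \;\; \forall i, \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \;\; \forall r, \qquad
\lambda_j \ge 0 \;\; \forall j
```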
Procedia PDF Downloads 51
7148 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling
Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang
Abstract:
Modelling and understanding the vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables is limited in predicting vehicle volume because it cannot account for the crossed effect of urban configurational characters and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of the street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, both grounded in the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics in different urban contexts, and the AB model incorporates transformed space syntax components of the MRA models into the agents' spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characters and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model can estimate vehicle volume in the urban network considering the combined effects of configurational characters and land-use patterns at the street segment level.
Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model
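A minimal sketch of the multilevel idea (the variable and file names below are assumptions; the paper's actual specification is richer) using a random intercept per urban context with statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per street segment, with space syntax measures
# (integration, choice), a land-use covariate, and the observed vehicle volume.
df = pd.read_csv("segments.csv")  # assumed columns: volume, integration,
                                  # choice, land_use_density, district

# A random intercept per district lets configurational effects vary by context.
model = smf.mixedlm(
    "volume ~ integration + choice + land_use_density",
    data=df,
    groups=df["district"],
)
result = model.fit()
print(result.summary())
```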
Procedia PDF Downloads 145
7147 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads
Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan
Abstract:
Traffic management is a major issue on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it determines the arrival rate of vehicles at intersections, which are the major points of congestion. This study leveraged machine learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Tree (ET), and Extreme Gradient Boost (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.
Keywords: stream speed, urban roads, machine learning, traffic flow
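A condensed sketch of the model comparison described above (the column and file names are assumptions; the actual data are field measurements):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from xgboost import XGBRegressor

df = pd.read_csv("urban_links.csv")          # assumed columns below
X = df[["volume_cars", "volume_2w", "volume_heavy", "road_width"]]
y = df["stream_speed"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "DT": DecisionTreeRegressor(random_state=0),
    "RF": RandomForestRegressor(random_state=0),
    "ET": ExtraTreesRegressor(random_state=0),
    "XGB": XGBRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)                        # fit each tree-based regressor
    print(name, "test R2:", round(r2_score(y_te, m.predict(X_te)), 3))
```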
Procedia PDF Downloads 69
7146 Hardware Implementation and Real-time Experimental Validation of a Direction of Arrival Estimation Algorithm
Authors: Nizar Tayem, AbuMuhammad Moinuddeen, Ahmed A. Hussain, Redha M. Radaydeh
Abstract:
This research paper introduces an approach for estimating the direction of arrival (DOA) of multiple noncoherent RF sources with a uniform linear array (ULA). The proposed method utilizes a Capon-like estimation algorithm and incorporates LU decomposition to enhance the accuracy of DOA estimation while significantly reducing computational complexity compared to existing methods like the Capon method. Notably, the proposed method does not require prior knowledge of the number of sources. Its effectiveness is validated through both software simulations and practical experimentation on a prototype testbed constructed with a software-defined radio (SDR) platform and GNU Radio software. The results obtained from MATLAB simulations and real-time experiments provide compelling evidence of the proposed method's efficacy.
Keywords: DOA estimation, real-time validation, software-defined radio, computational complexity, Capon's method, GNU Radio
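For orientation, the baseline Capon (MVDR) spatial spectrum that the proposed method modifies is sketched below (a textbook formulation; the paper's LU-decomposition-based variant differs in how the covariance inverse is handled):

```python
import numpy as np

def capon_spectrum(snapshots, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classic Capon spectrum P(theta) = 1 / (a^H R^-1 a) for an M-element ULA.

    snapshots: complex array of shape (M, N) -- N array snapshots.
    d: element spacing in wavelengths.
    """
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N          # sample covariance matrix
    R_inv = np.linalg.inv(R + 1e-6 * np.eye(M))     # regularized inverse
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))
        spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return angles, np.array(spectrum)               # spectrum peaks mark DOAs
```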
Procedia PDF Downloads 73
7145 Simple Model of Social Innovation Based on Entrepreneurship Incidence in Mexico
Authors: Vicente Espinola, Luis Torres, Christhian Gonzalez
Abstract:
Entrepreneurship is a topic of current interest in Mexico and the world, and it has been fostered through public policies with great impact on its generation. The strategies used in Mexico have not been successful: they are motivational strategies aimed at the masses, in the hope that someone in the process generates a venture. The strategies used for entrepreneurship development have been "picking of winners", favoring those who have already overcome the initial stages of undertaking, without effective support. This situation reveals a disarticulation that is even more pronounced in social entrepreneurship; because of this, it is relevant to research the elements that could develop social ventures and thus to integrate a model of entrepreneurship and social innovation for Mexico. Social entrepreneurship should generate social innovation, which is translated into business models so that the benefits reach the population. These models are proposed putting the social impact before the economic impact, without forgetting their sustainability in the medium and long term. In this work, we present a simple model of innovation and social entrepreneurship for Guanajuato, Mexico. The algorithm was based on how social innovation could be generated in a systemic way for Mexico through the different institutions that promote innovation. In this case, the technological parks of the state of Guanajuato were studied, because these are considered among the areas of Mexico whose main objective is technology transfer to companies, while the social sector and entrepreneurs are overlooked. An experiment with n = 60 potential entrepreneurs was carried out to identify their perception of the social approach that enterprises should have, the skills they consider necessary to create a venture, and their interest in generating ventures that solve social problems. The experiment used a 2ᵏ design with k = 3, and the computational simulation was performed in the R statistical language. A simple model of interconnected variables is proposed, which allows us to identify where it is necessary to increase efforts for the generation of social enterprises. In total, 96.67% of potential entrepreneurs expressed interest in ventures that solve social problems. The analysis of the variables' interaction identified that the isolated development of entrepreneurial skills would only replicate the generation of traditional ventures. The social-approach variable presented positive interactions, which may influence the generation of social entrepreneurship if this variable is strengthened and permeates the training and development processes for entrepreneurs. In the future, it will be necessary to analyze the institutional actors present in the social entrepreneurship ecosystem, in order to analyze the interaction necessary to strengthen the innovation and social entrepreneurship ecosystem.
Keywords: social innovation, model, entrepreneurship, technological parks
Procedia PDF Downloads 272
7144 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool
Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih
Abstract:
TRACE was developed by the U.S. NRC for nuclear power plant (NPP) safety analysis. In this research, we focus on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. In this study, there are three TRACE/SNAP models: a one-channel, a two-channel, and a multi-channel TRACE/SNAP model. Additionally, a cooling system failure of the spent fuel pool was simulated and analyzed using the above models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results indicated that the fuel became uncovered 2.7 days after the cooling system failed. In order to estimate the detailed fuel rod performance, the FRAPTRAN code was used in this research. According to the FRAPTRAN results, the highest cladding temperature was located at node 21 of the fuel rod (the highest node being node 23), and the cladding burst roughly after 3.7 days.
Keywords: TRACE, FRAPTRAN, BWR, spent fuel pool
Procedia PDF Downloads 355
7143 Analytical Description of Disordered Structures in Continuum Models of Pattern Formation
Authors: Gyula I. Tóth, Shaho Abdalla
Abstract:
Even though numerical simulations have a significant precursory/supportive role in exploring the disordered phase displaying no long-range order in pattern formation models, studying the stability properties of this phase and determining the order of the ordered-disordered phase transition in these models necessitate an analytical description of the disordered phase. First, we present the results of a comprehensive statistical analysis of a large number (1,000-10,000) of numerical simulations in the Swift-Hohenberg model, where the bulk disordered (or amorphous) phase is stable. We show that the average free energy density (over configurations) converges, while the variance of the energy density vanishes with increasing system size in the numerical simulations, which suggests that the disordered phase is a thermodynamic phase (i.e., its properties are independent of the configuration in the macroscopic limit). Furthermore, the structural analysis of this phase in Fourier space suggests that the phase can be modeled by a colored isotropic Gaussian noise, where any instant of the noise describes a possible configuration. Based on these results, we developed a general mathematical framework for finding a pool of solutions to partial differential equations in the sense of a continuous probability measure, which we present briefly. Applying the general idea to the Swift-Hohenberg model, we show that the amorphous phase can be found and its properties can be determined analytically. As the general mathematical framework is not restricted to continuum theories, we hope that the proposed methodology will open a new chapter in studying disordered phases.
Keywords: fundamental theory, mathematical physics, continuum models, analytical description
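For reference, the Swift-Hohenberg dynamics referred to in the abstract is commonly written in the standard form below (the authors' parameterization may differ; a quadratic term gψ² is sometimes added to the right-hand side):

```latex
\frac{\partial \psi}{\partial t} = \varepsilon \psi - \left(1 + \nabla^{2}\right)^{2} \psi - \psi^{3}
```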
Procedia PDF Downloads 130
7142 Fair Federated Learning in Wireless Communications
Authors: Shayan Mohajer Hamidi
Abstract:
Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. 
The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization
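A minimal sketch of two ingredients described above: fairness-aware weighted aggregation (the inverse-sample-count weighting shown is one reading of "higher importance to devices with fewer samples"; the paper's exact scheme may differ) and Gaussian-noise differential privacy on client updates:

```python
import numpy as np

def fair_aggregate(client_updates, client_sizes):
    """Aggregate per-client parameter updates with weights inversely
    proportional to each client's sample count (illustrative rule only).

    client_updates: list of 1-D parameter vectors (np.ndarray).
    client_sizes:   list of per-client sample counts.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    w = 1.0 / sizes
    w /= w.sum()                                # normalize the weights
    return sum(wi * ui for wi, ui in zip(w, client_updates))

def dp_noise(update, clip=1.0, sigma=0.1, rng=np.random.default_rng(0)):
    """Clip a client update and add calibrated Gaussian noise, the standard
    mechanism behind differentially private gradient sharing."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)
```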
Procedia PDF Downloads 75
7141 Control of a Plane Jet Spread by Tabs at the Nozzle Exit
Authors: Makito Sakai, Takahiro Kiwata, Takumi Awa, Hiroshi Teramoto, Takaaki Kono, Kuniaki Toyoda
Abstract:
Using experimental and numerical results, this paper describes the effects of tabs on the flow characteristics of a plane jet at comparatively low Reynolds numbers, focusing on the velocity field and the vortical structure. Flow visualization and velocity measurements were carried out using laser Doppler velocimetry (LDV) and particle image velocimetry (PIV), respectively. In addition, three-dimensional (3D) plane jet numerical simulations were performed using ANSYS Fluent, a commercially available computational fluid dynamics (CFD) software application. We found that the spreads of jets perturbed by large delta tabs and round tabs were larger than those produced by the other tabs tested. Additionally, a plane jet with square tabs had the smallest jet spread downstream, and its centerline velocity was larger than those of jets perturbed by the other tabs tested. It was also observed that the spanwise vortical structure of a plane jet with tabs disappeared completely. Good agreement was found between the experimental and numerical velocity profiles in the area near the nozzle exit when the laminar flow model was used. However, we also found that large eddy simulation (LES) is better at predicting the developing flow field of a plane jet than the laminar and standard k-ε turbulence models.
Keywords: plane jet, flow control, tab, flow measurement, numerical simulation
Procedia PDF Downloads 332
7140 Numerical Investigation of the Jacketing Method of Reinforced Concrete Column
Authors: S. Boukais, A. Nekmouche, N. Khelil, A. Kezmane
Abstract:
The first aim of this study is to develop a finite element model that can correctly predict the behavior of a reinforced concrete column. The second aim is to use the finite element model to investigate and evaluate the effect of strengthening the reinforced concrete column by jacketing, considering different interface contact conditions between the old and the new concrete. Four models were evaluated: one assuming perfect contact, and three others using friction coefficients of 0.1, 0.3 and 0.5. The simulation was carried out using the Abaqus software. The obtained results show that the jacketing reinforcement led to a significant increase in the global performance of the simulated reinforced concrete column.
Keywords: strengthening, jacketing, reinforced concrete column, Abaqus, simulation
Procedia PDF Downloads 144
7139 Matching Law in Autoshaped Choice in Neural Networks
Authors: Giselle Maggie Fer Castañeda, Diego Iván González
Abstract:
The objective of this work was to study autoshaped choice behavior in the Donahoe, Burgos and Palmer (DBP) neural network model and analyze it under the matching law. Autoshaped choice can be viewed as a form of economic behavior, defined as the preference between alternatives according to their relative outcomes. The DBP model is a connectionist proposal that unifies operant and Pavlovian conditioning. This model has been used for more than three decades as a neurobehavioral explanation of conditioning phenomena, as well as a generator of predictions suitable for experimental testing with non-human animals and humans. The study consisted of different simulations in which a ratio of reinforcement was established for two alternatives and the responses (i.e., activations) on each of them were measured. Choice studies with animals have demonstrated that the data generally conform closely to the generalized matching law equation, which states that the response ratio is proportional to the reinforcement ratio; therefore, similar results were expected with the neural networks of the DBP model, since these networks have simulated and predicted various conditioning phenomena. The results were analyzed with the generalized matching law equation, and it was observed that under some contingencies, the data from the networks adjusted approximately to what is established by the equation. Implications and limitations are discussed.
Keywords: matching law, neural networks, computational models, behavioral sciences
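The generalized matching law used for the analysis is conventionally written as follows, where B₁ and B₂ are the response rates (here, activations) on the two alternatives, R₁ and R₂ the obtained reinforcement rates, a the sensitivity, and b the bias:

```latex
\log\!\left(\frac{B_1}{B_2}\right) = a \,\log\!\left(\frac{R_1}{R_2}\right) + \log b
```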
Procedia PDF Downloads 73
7138 Seismic Hazard Assessment of Offshore Platforms
Authors: F. D. Konstandakopoulou, G. A. Papagiannopoulos, N. G. Pnevmatikos, G. D. Hatzigeorgiou
Abstract:
This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platform models are investigated: one with completely fixed supports and one with piles clamped into deformable layered soil. The soil deformability in the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of the fault mechanism in the platforms' response is additionally investigated, and the study also examines the effects of different angles of incidence of the seismic records on the maximum response of each platform.
Keywords: hazard analysis, offshore platforms, earthquakes, safety
Procedia PDF Downloads 145
7137 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations
Authors: Ramon Santana
Abstract:
The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, raises great concern about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow a minutiae template to be obtained in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed; however, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; further transformation is irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating known vulnerabilities of the proposed models, basing its security on the impossibility of polynomial reconstruction.
Keywords: fingerprint, template protection, bio-cryptography, minutiae protection
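As a purely illustrative sketch of the two-level idea (an invariant transformation followed by keyed polynomial evaluation), and emphatically not the paper's actual construction, which is not specified in the abstract:

```python
import hashlib
import numpy as np

def invariant_features(minutiae):
    """Level 1 (illustrative): pairwise distances and angle differences
    between minutiae are invariant to rotation and translation."""
    feats = []
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            (x1, y1, a1), (x2, y2, a2) = minutiae[i], minutiae[j]
            d = np.hypot(x2 - x1, y2 - y1)        # pairwise distance
            da = (a2 - a1) % (2 * np.pi)          # relative orientation
            feats.append((round(d, 1), round(da, 2)))
    return feats

def evaluate_level(feats, key_seed):
    """Level 2 (illustrative): evaluate a key-derived toy polynomial at each
    feature value; only the evaluations would be stored, not the minutiae."""
    key = int.from_bytes(hashlib.sha256(key_seed).digest()[:8], "big")
    coeffs = [(key >> (8 * k)) & 0xFF for k in range(4)]
    return [sum(c * d**k for k, c in enumerate(coeffs)) for d, _ in feats]

# Example: three hypothetical minutiae as (x, y, angle) tuples.
protected = evaluate_level(invariant_features(
    [(10.0, 12.0, 0.3), (40.5, 22.1, 1.2), (25.0, 60.0, 2.8)]), b"user-key")
print(protected)
```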
Procedia PDF Downloads 168
7136 Segregation Patterns of Trees and Grass Based on a Modified Age-Structured Continuous-Space Forest Model
Authors: Jian Yang, Atsushi Yagi
Abstract:
The tree-grass coexistence system is of great importance for forest ecology. Mathematical models have been proposed to study the dynamics of tree-grass coexistence and the stability of such systems; however, few of these models concentrate on the spatial dynamics of the coexistence. In this study, we modified an age-structured continuous-space population model for forests, obtaining an age-structured continuous-space population model for tree-grass competition. In the model, for thermal competitions, adult trees can out-compete grass, and grass can out-compete seedlings. We studied the model mathematically to ensure that tree-grass coexistence solutions exist. Numerical experiments demonstrated that the fraction of area that trees or grass occupy can affect whether the coexistence is stable. We also varied the mortality of adult trees while the other parameters and the fractions of area occupied by trees and grass were fixed; the results show that the mortality of adult trees is also a factor affecting the stability of tree-grass coexistence in this model.
Keywords: population-structured models, stabilities of ecosystems, thermal competitions, tree-grass coexistence systems
Procedia PDF Downloads 158
7135 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques
Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt
Abstract:
Purpose: This study attempts to identify the best forecasting methodologies for time series. The time series forecasting models VAR, ARCH and ARMA are considered for the analysis. Methodology: Benchmarks, or parameters, such as adjusted R-squared, F-statistics, Durbin-Watson, and the direction of the roots have been critically and empirically analyzed. The empirical analysis consists of time series data on the Consumer Price Index and a closing stock price. Findings: The results show that the VAR model performed better in comparison to the other models; both the reliability and the significance of the VAR model are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model, meanwhile, gave mixed indications: the AR roots showed that the model is stationary, while the MA roots showed that the model is invertible. Therefore, a forecast made on the basis of the ARMA model would remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a basis for applying the best time series forecasting model.
Keywords: forecasting, time series, auto regression, ARCH, ARMA
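A compact sketch of fitting the three model families compared in the study (the CPI and closing-price column names are placeholders; statsmodels and the arch package provide the estimators):

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

df = pd.read_csv("series.csv", parse_dates=["date"], index_col="date")
# assumed columns: cpi, close

# VAR on both series jointly, lag order chosen by AIC
var_res = VAR(df[["cpi", "close"]]).fit(maxlags=4, ic="aic")
print(var_res.summary())

# ARMA(1, 1) on one series (ARIMA with d = 0)
arma_res = ARIMA(df["close"], order=(1, 0, 1)).fit()
print(arma_res.summary())

# ARCH(1) on percentage returns
returns = 100 * df["close"].pct_change().dropna()
arch_res = arch_model(returns, vol="ARCH", p=1).fit(disp="off")
print(arch_res.summary())
```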
Procedia PDF Downloads 346
7134 A Look at the Quantum Theory of Atoms in Molecules from the Discrete Morse Theory
Authors: Dairo Jose Hernandez Paez
Abstract:
The quantum theory of atoms in molecules (QTAIM) allows us to obtain topological information on the electronic density in quantum mechanical systems. QTAIM starts by considering the electron density as a continuous mathematical object. The discretized electron density, on the other hand, is also a mathematical object, which, from the standpoint of discrete mathematics, allows a new approach to its topological study. From this point of view, it is necessary to develop a series of steps that provide the theoretical support guaranteeing its application. Some of the steps that we consider most important are the following: (1) obtain good representations of the electron density through computational calculations; (2) design a methodology for the discretization of the electron density and construct the simplicial complex; (3) analyze the discrete vector field associated with the simplicial complex; (4) finally, in this research, we propose to use discrete Morse theory as a mathematical tool to carry out studies of electron density topology.
Keywords: discrete mathematics, discrete Morse theory, electronic density, computational calculations
Procedia PDF Downloads 101
7133 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: (a) the experiment with the four initial EU; (b) the experiment with EU 5 to 8; (c) the experiment with EU 9 to 12; and (d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model analysis was assumed, with random tester and treatment effects and a fixed testing order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with its function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
Procedia PDF Downloads 320
7132 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that could previously be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of the flowing snow, no uniform rheological law describing the motion of an avalanche is known; therefore, analogies to the fluid dynamical laws of other materials are invoked. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility, and hence model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, potentials for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is drawn to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to which degree a simulation result influences decision making for a hazard assessment. A discrepancy could be found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, among others, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the manner of modeling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.
Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
Procedia PDF Downloads 324
7131 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors
Authors: Sudhir Kumar Singh, Debashish Chakravarty
Abstract:
Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure can lead to the loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases were studied by analyzing the stability of slopes with the popular finite element method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data were used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool at our disposal that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
Keywords: finite element method, geotechnical engineering, machine learning, slope stability
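A schematic of the workflow described above (the feature and file names are assumptions; labels derive from the FEM Factor of Safety, e.g. stable if FoS ≥ 1):

```python
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("fem_runs.csv")   # one row per FEM simulation (assumed)
features = ["joint_dip", "joint_spacing", "saturation",
            "rainfall_intensity", "seismic_coeff"]
df["stable"] = (df["factor_of_safety"] >= 1.0).astype(int)  # label from FoS

X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["stable"],
    test_size=0.2, random_state=7, stratify=df["stable"])

clf = RandomForestClassifier(n_estimators=300, random_state=7)
print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
clf.fit(X_tr, y_tr)
print("Held-out accuracy:", clf.score(X_te, y_te))
```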
Procedia PDF Downloads 99
7130 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes
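A condensed sketch of the evaluation loop described above, combining a train/test split, cross-validated hyperparameter tuning of XGBoost, MSE/R² scoring, and feature importances (column names are assumptions based on the attributes listed):

```python
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

df = pd.read_csv("tasks.csv")   # assumed columns below
features = ["productivity_rate", "estimated_cost", "actual_cost",
            "duration", "scope_change_magnitude", "scope_change_timing"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["cost_impact"], test_size=0.2, random_state=1)

# Hyperparameter tuning with 5-fold cross-validation
grid = GridSearchCV(
    XGBRegressor(random_state=1),
    {"n_estimators": [200, 400], "max_depth": [3, 5],
     "learning_rate": [0.05, 0.1]},
    cv=5, scoring="r2",
)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred), "R2:", r2_score(y_te, pred))
print(dict(zip(features, grid.best_estimator_.feature_importances_)))
```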
Procedia PDF Downloads 38