Search results for: computational error
1964 Social Responsibility in the Theory of Organisation Management
Authors: Patricia Crentsil, Alvina Oriekhova
Abstract:
The aim of the study is to determine social responsibility in the theory of organisation management. The main objectives are to examine the link between accountability, transparency, and ethical aspects in organisation management. The study seeks to answer questions that have received inadequate attention in the social responsibility literature; specifically, how do accountability, transparency of policy, and ethical aspects enhance organisation management? The target population of the study comprises Deans and Heads of Department of Public Universities and Technical Universities in Ghana. The study used a purposive sampling technique to select the Public Universities and Technical Universities in Ghana and adopted a simple random sampling technique to select 300 participants from all Technical Universities in Ghana and 500 participants from all Traditional Universities in Ghana. The sample size was 260, based on a 95% confidence level and a 5% margin of error. The study used both primary and secondary data and adopted an exploratory design to address the research questions. Results indicated that accountability, transparency, and ethical aspects have a positive, significant link with organisation management. The study suggested that management can motivate an organization to act in a socially responsible manner.
Keywords: corporate social responsibility, organisation management, organisation management theory, social responsibility
Procedia PDF Downloads 126
1963 Comparison Analysis of Multi-Channel Echo Cancellation Using Adaptive Filters
Authors: Sahar Mobeen, Anam Rafique, Irum Baig
Abstract:
Acoustic echo cancellation in multichannel systems is a system identification application. In a real-time environment, the signal changes very rapidly, which requires adaptive algorithms such as Least Mean Square (LMS), Leaky Least Mean Square (LLMS), Normalized Least Mean Square (NLMS), and the average (AFA) algorithm, which offer a high convergence rate and stability. LMS and NLMS are widely used adaptive algorithms due to their low computational complexity, while AFA is used for its high convergence rate. This research is based on a comparison of acoustic echo (generated in a room) cancellation through LMS, LLMS, NLMS, AFA, and the newly proposed average normalized leaky least mean square (ANLLMS) adaptive filters.
Keywords: LMS, LLMS, NLMS, AFA, ANLLMS
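To make the filtering step concrete, the following minimal sketch implements an NLMS echo canceller in Python/NumPy. The filter order, step size, and the synthetic room impulse response are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, order=64, mu=0.5, eps=1e-6):
    """Cancel the echo of far_end contained in mic using an NLMS filter."""
    w = np.zeros(order)               # adaptive filter weights
    out = np.zeros(len(mic))          # residual (echo-cancelled) signal
    for n in range(order, len(mic)):
        x = far_end[n - order:n][::-1]       # most recent reference samples
        y = w @ x                             # estimated echo
        e = mic[n] - y                        # residual after cancellation
        w += mu * e * x / (x @ x + eps)       # normalized LMS weight update
        out[n] = e
    return out, w

# Toy usage: a synthetic "room" echo path applied to white noise
rng = np.random.default_rng(0)
far_end = rng.standard_normal(4000)
room = np.exp(-np.arange(64) / 10.0)          # assumed echo impulse response
mic = np.convolve(far_end, room)[:4000] + 0.01 * rng.standard_normal(4000)
residual, weights = nlms_echo_canceller(far_end, mic)
print("residual power:", np.mean(residual[1000:] ** 2))
```

A leaky variant would simply scale the weights by a factor slightly below one before each update, which is the modification that the LLMS and ANLLMS filters build on.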
Procedia PDF Downloads 568
1962 Thermodynamic Trends in Co-Based Alloys via Inelastic Neutron Scattering
Authors: Paul Stonaha, Mariia Romashchenko, Xaio Xu
Abstract:
Magnetic shape memory alloys (MSMAs) are promising technological materials for a range of fields, from biomaterials to energy harvesting. We have performed inelastic neutron scattering on two powder samples of cobalt-based high-entropy MSMAs across a range of temperatures in an effort to compare calculations of thermodynamic properties (entropy, specific heat, etc.) to the measured ones. The measurements were corrected for multiphonon scattering and multiple scattering contributions. We present herein the neutron-weighted vibrational density of states. Future work will utilize DFT calculations of the disordered lattice to correct for the neutron weighting and retrieve the true thermodynamic properties.
Keywords: neutron scattering, vibrational dynamics, computational physics, material science
Procedia PDF Downloads 37
1961 An Agent-Based Modeling and Simulation of Human Muscle
Authors: Sina Saadati, Mohammadreza Razzazi
Abstract:
In this article, we present an agent-based model of human muscle. A suitable model of muscle is necessary for the analysis of human movement. It can be used by clinical researchers who study the influence of motion sicknesses, like Parkinson's disease. It is also useful in the development of a prosthesis that receives electromyography signals and generates force as a reaction. Since we have focused on computational efficiency in this research, the model performs its calculations very quickly. As far as prostheses are concerned, the model can be regarded as a charge-efficient method. In this paper, we first describe the agent-based model and then use it to simulate the human gait cycle. The method can also be applied in reverse to the analysis of gait in motion sicknesses.
Keywords: agent-based modeling and simulation, human muscle, gait cycle, motion sickness
Procedia PDF Downloads 116
1960 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators, such as the Loess, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria
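A minimal sketch of the backward-elimination idea with AIC as the selection criterion is given below; for brevity, an ordinary least-squares fit stands in for the additive or nonparametric fits used in the paper, so the criterion values are only illustrative.

```python
import numpy as np

def aic_linear(X, y):
    """AIC (up to an additive constant) of an OLS fit used as a stand-in criterion."""
    n, p = X.shape
    Z = np.c_[np.ones(n), X]
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    rss = resid @ resid
    return n * np.log(rss / n) + 2 * (p + 1)

def backward_elimination(X, y, names):
    """Drop the covariate whose removal lowers AIC the most, until no drop helps."""
    keep = list(range(X.shape[1]))
    current = aic_linear(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        scores = [(aic_linear(X[:, [j for j in keep if j != k]], y), k) for k in keep]
        best_aic, worst = min(scores)
        if best_aic < current:
            keep.remove(worst)
            current = best_aic
            improved = True
    return [names[k] for k in keep]

# Toy data: only the first two covariates matter
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(200)
print(backward_elimination(X, y, ["x1", "x2", "x3", "x4", "x5"]))
```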
Procedia PDF Downloads 267
1959 Thermal Performance of a Pair of Synthetic Jets Equipped in Microchannel
Authors: J. Mohammadpour, G. E. Lau, S. Cheng, A. Lee
Abstract:
A numerical study was conducted using two synthetic jet actuators attached underneath a micro-channel. By fixing the oscillating frequency and diaphragm amplitude, the effects on heat transfer within the micro-channel were investigated with the two synthetic jets operating in-phase and 180° out-of-phase at different orifice spacings. A significant benefit was identified with the two jets being 180° out-of-phase with each other at an orifice spacing of 2 mm. In this configuration, a distinct vortex pattern forms that disrupts the main channel flow and promotes thermal mixing at high velocity within the channel. Therefore, this configuration achieved higher cooling performance than the other cases studied in terms of the reduction in the maximum temperature and the cooling uniformity in the silicon wafer.
Keywords: synthetic jets, microchannel, electronic cooling, computational fluid dynamics
Procedia PDF Downloads 201
1958 Approach to Formulate Intuitionistic Fuzzy Regression Models
Authors: Liang-Hsuan Chen, Sheng-Shing Nien
Abstract:
This study aims to develop approaches to formulate intuitionistic fuzzy regression (IFR) models for decision-making applications in fuzzy environments using intuitionistic fuzzy observations. Intuitionistic fuzzy numbers (IFNs) are used to characterize the fuzzy input and output variables in the IFR formulation process. A mathematical programming problem (MPP) is built to optimally determine the IFR parameters. Each parameter in the MPP is defined as a pair of alternative numerical variables with opposite signs, and an intuitionistic fuzzy error term is added to the MPP to characterize the uncertainty of the model. The IFR model is formulated based on a distance measure, minimizing the total distance error between the estimated and observed intuitionistic fuzzy responses in the MPP resolution process. The proposed approaches are simple and efficient in the formulation and resolution processes, in which the sign of each parameter can be determined, so that the need to predetermine the signs of the parameters is avoided. Furthermore, the proposed approach has the advantage that the spread of the predicted IFN response will not be over-increased, since the parameters in the established IFR model are crisp. The performance of the obtained models is evaluated and compared with existing approaches.
Keywords: fuzzy sets, intuitionistic fuzzy number, intuitionistic fuzzy regression, mathematical programming method
Procedia PDF Downloads 140
1957 Robust Fractional Order Controllers for Minimum and Non-Minimum Phase Systems – Studies on Design and Development
Authors: Anand Kishore Kola, G. Uday Bhaskar Babu, Kotturi Ajay Kumar
Abstract:
The modern dynamic systems used in industry are complex in nature, and hence fractional order controllers have been contemplated as a fresh approach to control system design that takes this complexity into account. Traditional integer order controllers use integer derivatives and integrals to control systems, whereas fractional order controllers use fractional derivatives and integrals to regulate memory and non-local behavior. This study provides a method based on the maximum sensitivity (Ms) methodology to discover all resilient fractional filter Internal Model Control - proportional integral derivative (IMC-PID) controllers that stabilize the closed-loop system and deliver the highest performance for a time delay system with a Smith predictor configuration. Additionally, it helps to enhance the range of PID controllers that can be used to stabilize the system. This study also evaluates the effectiveness of the suggested controller approach for minimum phase systems in comparison to those currently in use, which are assessed using the Integral of Absolute Error (IAE) and Total Variation (TV).
Keywords: modern dynamic systems, fractional order controllers, maximum-sensitivity, IMC-PID controllers, Smith predictor, IAE and TV
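The IAE and TV measures used in the comparison are simple to compute from sampled closed-loop signals; the short sketch below evaluates both on placeholder signals (the decaying tracking error and controller output are assumptions, not results from the paper).

```python
import numpy as np

def iae(t, error):
    """Integral of Absolute Error on a uniform time grid."""
    dt = t[1] - t[0]
    return np.sum(np.abs(error)) * dt

def total_variation(u):
    """Total Variation of the controller output: sum of absolute control moves."""
    return np.sum(np.abs(np.diff(u)))

# Placeholder closed-loop signals standing in for a step-response test
t = np.linspace(0.0, 50.0, 1001)
error = np.exp(-0.3 * t) * np.cos(0.5 * t)   # assumed decaying tracking error
u = 1.0 - np.exp(-0.3 * t)                   # assumed controller output
print("IAE =", iae(t, error), " TV =", total_variation(u))
```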
Procedia PDF Downloads 68
1956 Comparison Analysis of CFD Turbulence Fluid Numerical Study for Quick Coupling
Authors: JoonHo Lee, KyoJin An, JunSu Kim, Young-Chul Park
Abstract:
In this study, a numerical investigation based on a CFD model was carried out to predict the fluid flow characteristics and performance of a non-split quick coupling used for flow control in hydraulic system equipment for the aerospace business group. Turbulence models were considered in applying Computational Fluid Dynamics to the CFD model of the non-split quick coupling. In addition, the adequacy of the CFD model was verified by comparison with standard values. Based on this analysis, the fluid flow characteristics can be predicted accurately. The resulting flow-characteristic design therefore contributes to the reliability required of the quick coupling in industry.
Keywords: CFD, FEM, quick coupling, turbulence
Procedia PDF Downloads 386
1955 Coupling Concept of Two Parallel Research Codes for Two and Three Dimensional Fluid Structure Interaction Analysis
Authors: Luciano Garelli, Marco Schauer, Jorge D’Elia, Mario A. Storti, Sabine C. Langer
Abstract:
This paper discusses a coupling strategy for two different software packages to provide fluid structure interaction (FSI) analysis. The basic idea is to combine the advantages of the two codes to create a powerful FSI solver for two and three dimensional analysis. The fluid part is computed by a program called PETSc-FEM, a software package developed at Centro de Investigación de Métodos Computacionales (CIMEC). The structural part of the coupled process is computed by the research code elementary Parallel Solver (elPaSo) of the Technische Universität Braunschweig, Institut für Konstruktionstechnik (IK).
Keywords: computational fluid dynamics (CFD), fluid structure interaction (FSI), finite element method (FEM), software
Procedia PDF Downloads 555
1954 Investigation of the Turbulent Cavitating Flows from the Viewpoint of the Lift Coefficient
Authors: Ping-Ben Liu, Chien-Chou Tseng
Abstract:
The objective of this study is to investigate the relationship between the lift coefficient and the dynamic behavior of cavitating flow around a two-dimensional Clark Y hydrofoil at an 8° angle of attack, a cavitation number of 0.8, and a Reynolds number of 7×10⁵. The flow field is investigated numerically by using a vapor transfer equation and a modified turbulence model which applies a filter and a local density correction. The results, including the time-averaged lift/drag coefficients and shedding frequency, agree well with experimental observations, which confirms the reliability of this simulation. According to the variation of the lift coefficient, the cycle consisting of growth and shedding of cavitation can be divided into three stages, and the lift coefficient at each stage behaves similarly due to the formation and shedding of the cavity around the trailing edge.
Keywords: Computational Fluid Dynamics, cavitation, turbulence, lift coefficient
Procedia PDF Downloads 354
1953 Modeling Studies on the Elevated Temperatures Formability of Tube Ends Using RSM
Authors: M. J. Davidson, N. Selvaraj, L. Venugopal
Abstract:
The elevated-temperature forming behavior in the expansion of thin-walled tube ends is studied in the present work. The influence of the process parameters, namely the die angle, the die ratio, and the operating temperature, on the expansion of tube ends at elevated temperatures is investigated. The range of operating parameters has been identified by performing extensive simulation studies, and the hot forming parameters have been evaluated for AA2014 alloy. An experimental matrix has been developed from the feasible range obtained from the simulation results. Design of experiments is used for the optimization of the process parameters. The Response Surface Method (RSM) with a Box-Behnken design (BBD) is used for developing the mathematical model for expansion. Analysis of variance (ANOVA) is used to analyze the influence of the process parameters on the expansion of tube ends. The effects of various process combinations on expansion are analyzed through graphical representations. The developed model is found to be appropriate, as the coefficient of determination is very high and equal to 0.9726. The predicted values are found to coincide well with the experimental results, within acceptable error limits.
Keywords: expansion, optimization, Response Surface Method (RSM), ANOVA, BBD, residuals, regression, tube
Procedia PDF Downloads 511
1952 Artificial Intelligence in the Design of High-Strength Recycled Concrete
Authors: Hadi Rouhi Belvirdi, Davoud Beheshtizadeh
Abstract:
The increasing demand for sustainable construction materials has led to a growing interest in high-strength recycled concrete (HSRC). Utilizing recycled materials not only reduces waste but also minimizes the depletion of natural resources. This study explores the application of artificial intelligence (AI) techniques to model and predict the properties of HSRC. In the past two decades, production levels in various industries and, consequently, the amount of waste have increased significantly. Continuing this trend will undoubtedly cause irreparable damage to the environment. For this reason, engineers have been constantly seeking practical solutions for recycling industrial waste in recent years. This research utilized the 90-day compressive strength results of high-strength recycled concrete. The recycled concrete was produced by replacing sand with crushed glass and using glass powder instead of cement. Subsequently, a feedforward artificial neural network was employed to model the 90-day compressive strength results. The regression and error values obtained indicate that this network is suitable for modeling the compressive strength data.
Keywords: high-strength recycled concrete, feedforward artificial neural network, regression, construction materials
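As a rough illustration of such a feedforward network, the sketch below fits a small scikit-learn MLP to randomly generated stand-in mix-design data; the feature list, network size, and the data itself are assumptions for illustration, not the study's dataset or architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Stand-in mix features: [cement, glass powder, sand, crushed glass, w/c ratio]
rng = np.random.default_rng(42)
X = rng.uniform(low=[300, 0, 500, 0, 0.3], high=[500, 100, 800, 300, 0.5], size=(200, 5))
# Synthetic 90-day strength with an assumed dependence on cement content and w/c ratio
y = 0.1 * X[:, 0] - 0.02 * X[:, 1] + 30 * (0.5 - X[:, 4]) + rng.normal(0, 2, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```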
Procedia PDF Downloads 18
1951 Solving Operating Room Scheduling Problem by Using Dispatching Rule
Authors: Yang-Kuei Lin, Yin-Yi Chou
Abstract:
In this research, we consider the operating room scheduling problem. The objective is to minimize the total operating cost, which includes idle cost and overtime cost. We propose a dispatching rule that is guaranteed to find feasible solutions for the studied problem efficiently. We compared the proposed dispatching rule with the optimal solutions found by solving an Integer Program, and with other solutions found by using modified existing dispatching rules. The computational results indicate that the proposed heuristic can find near-optimal solutions efficiently.
Keywords: assignment, dispatching rule, operating rooms, scheduling
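A toy dispatching-rule sketch is shown below: cases are released longest-first to the earliest-available room, and the resulting idle and overtime costs are tallied. Both the rule and the cost rates are illustrative assumptions, since the paper's specific rule is not reproduced here.

```python
import heapq

def schedule(durations, n_rooms, shift_len=480, idle_rate=1.0, overtime_rate=2.0):
    """Longest-processing-time dispatching: the next case goes to the earliest-free room."""
    rooms = [(0.0, r) for r in range(n_rooms)]        # (time the room becomes free, room id)
    heapq.heapify(rooms)
    assignment = {r: [] for r in range(n_rooms)}
    for case in sorted(durations, reverse=True):      # longest cases first
        free_at, r = heapq.heappop(rooms)
        assignment[r].append(case)
        heapq.heappush(rooms, (free_at + case, r))
    finish = {r: sum(cases) for r, cases in assignment.items()}
    idle = sum(max(0, shift_len - f) for f in finish.values())
    overtime = sum(max(0, f - shift_len) for f in finish.values())
    return assignment, idle_rate * idle + overtime_rate * overtime

cases = [120, 90, 75, 60, 200, 45, 150, 30]           # surgery durations in minutes
plan, cost = schedule(cases, n_rooms=2)
print(plan, "total operating cost:", cost)
```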
Procedia PDF Downloads 233
1950 Adaptive Anchor Weighting for Improved Localization with Levenberg-Marquardt Optimization
Authors: Basak Can
Abstract:
This paper introduces an iterative and weighted localization method that utilizes a unique cost function formulation to significantly enhance the performance of positioning systems. The system employs locators, such as Gateways (GWs), to estimate and track the position of an End Node (EN). Performance is evaluated relative to the number of locators, with known locations determined through calibration. Performance evaluation is presented utilizing low cost single-antenna Bluetooth Low Energy (BLE) devices. The proposed approach can be applied to alternative Internet of Things (IoT) modulation schemes, as well as Ultra WideBand (UWB) or millimeter-wave (mmWave) based devices. In non-line-of-sight (NLOS) scenarios, using four or eight locators yields a 95th percentile localization performance of 2.2 meters and 1.5 meters, respectively, in a 4,305 square feet indoor area with BLE 5.1 devices. This method outperforms conventional RSSI-based techniques, achieving a 51% improvement with four locators and a 52% improvement with eight locators. Future work involves modeling interference impact and implementing data curation across multiple channels to mitigate such effects.
Keywords: lateration, least squares, Levenberg-Marquardt algorithm, localization, path-loss, RMS error, RSSI, sensors, shadow fading, weighted localization
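To illustrate the weighted lateration step, the sketch below fits an end-node position to RSSI-derived ranges with SciPy's Levenberg-Marquardt solver; the gateway layout, the log-distance path-loss model, and the weighting are assumptions chosen for demonstration rather than the paper's cost function.

```python
import numpy as np
from scipy.optimize import least_squares

gateways = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)  # assumed GW positions (m)
true_pos = np.array([3.0, 7.0])

# Assumed log-distance path-loss model: RSSI = P0 - 10*n*log10(d), plus shadow-fading noise
P0, n_exp = -40.0, 2.0
dists = np.linalg.norm(gateways - true_pos, axis=1)
rssi = P0 - 10 * n_exp * np.log10(dists) + np.random.default_rng(0).normal(0, 1.0, len(dists))

ranges = 10 ** ((P0 - rssi) / (10 * n_exp))   # range estimates recovered from RSSI
weights = 1.0 / np.maximum(ranges, 1e-3)      # assumed weighting: trust nearby gateways more

def residuals(p):
    """Weighted difference between modelled and estimated gateway distances."""
    return weights * (np.linalg.norm(gateways - p, axis=1) - ranges)

sol = least_squares(residuals, x0=np.array([5.0, 5.0]), method="lm")
print("estimated position:", sol.x, " position error:", np.linalg.norm(sol.x - true_pos))
```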
Procedia PDF Downloads 31
1949 Investigation of Extreme Gradient Boosting Model Prediction of Soil Strain-Shear Modulus
Authors: Ehsan Mehryaar, Reza Bushehri
Abstract:
One of the principal parameters defining the clay soil dynamic response is the strain-shear modulus relation. Predicting the strain and, subsequently, the shear modulus reduction of the soil is essential for performance analysis of structures exposed to earthquake and dynamic loadings. Many soil properties affect the soil's dynamic behavior. In order to capture those effects, in this study a database containing 1,193 data points (maximum shear modulus, strain, moisture content, initial void ratio, plastic limit, liquid limit, and initial confining pressure), resulting from dynamic laboratory testing of 21 clays, is collected for predicting the shear modulus vs. strain curve of soil. A model based on the extreme gradient boosting technique is proposed. A tree-structured Parzen estimator hyper-parameter tuning algorithm is utilized simultaneously to find the best hyper-parameters for the model. The performance of the model is compared to existing empirical equations using the coefficient of correlation and the root mean square error.
Keywords: XGBoost, hyper-parameter tuning, soil shear modulus, dynamic response
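A small sketch of the modeling pipeline is given below, assuming the xgboost package is available; note that the paper tunes hyper-parameters with a tree-structured Parzen estimator, which is swapped here for a plain grid search to keep the example short, and the data are synthetic stand-ins for the laboratory database.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in features: [Gmax, strain, moisture, void ratio, PL, LL, confining pressure]
rng = np.random.default_rng(7)
X = rng.uniform(size=(500, 7))
y = X[:, 0] / (1.0 + 20.0 * X[:, 1]) + 0.05 * rng.standard_normal(500)  # crude modulus-reduction shape

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
grid = {"max_depth": [3, 5], "n_estimators": [200, 400], "learning_rate": [0.05, 0.1]}
search = GridSearchCV(XGBRegressor(objective="reg:squarederror"), grid, cv=3)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)
print("correlation:", np.corrcoef(y_te, pred)[0, 1],
      " RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```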
Procedia PDF Downloads 206
1948 Application of the Discrete Rationalized Haar Transform to Distributed Parameter System
Authors: Joon-Hoon Park
Abstract:
In this paper the rationalized Haar transform is applied to distributed parameter system identification and estimation. A distributed parameter system is a dynamical and mathematical model described by a partial differential equation, and system identification concerns the problem of determining mathematical models from observed data. The Haar function has some computational disadvantages because it contains irrational numbers; for this reason, the rationalized Haar function, which contains only rational numbers, is used. The algorithm adopted in this paper is based on the transform and operational matrix of the rationalized Haar function. This approach provides more convenient and efficient computational results.
Keywords: distributed parameter system, rationalized Haar transform, operational matrix, system identification
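A small sketch of how a rationalized Haar matrix can be assembled is given below: the functions take only the values 1, -1, and 0, so every entry is rational. The sampling-at-midpoints construction is a common textbook form and is an assumption rather than the paper's exact algorithm.

```python
import numpy as np

def rationalized_haar_matrix(m):
    """m x m rationalized Haar matrix (m a power of two), sampled at subinterval midpoints."""
    t = (np.arange(m) + 0.5) / m            # midpoints of the m subintervals of [0, 1)
    H = np.zeros((m, m))
    H[0, :] = 1.0                           # RH_0 is identically one
    i, j = 1, 0
    while i < m:
        for k in range(2 ** j):
            left, mid, right = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            H[i, (t >= left) & (t < mid)] = 1.0     # +1 on the first half of the support
            H[i, (t >= mid) & (t < right)] = -1.0   # -1 on the second half
            i += 1
        j += 1
    return H

print(rationalized_haar_matrix(4))          # entries are only 0, 1, -1 (all rational)
```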
Procedia PDF Downloads 510
1947 Exploring Deep Neural Network Compression: An Overview
Authors: Ghorab Sara, Meziani Lila, Rubin Harvey Stuart
Abstract:
The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time application. To address this, research focuses on model compression techniques. The paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. This paper aims to provide a comprehensive outline of both the advantages and limitations of each method.
Keywords: model compression, deep neural network, pruning, knowledge distillation, quantization, low-rank decomposition
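As a generic illustration of the simplest of the four approaches, the sketch below applies magnitude pruning to a weight matrix: weights below a percentile threshold are zeroed and a binary mask is kept so the sparsity can be preserved during later updates. It is not taken from any of the surveyed papers.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights so that `sparsity` fraction are removed."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(3)
W = rng.standard_normal((256, 128))           # a stand-in dense layer
W_pruned, mask = magnitude_prune(W, sparsity=0.8)
print("fraction of zeros:", 1.0 - mask.mean())
```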
Procedia PDF Downloads 47
1946 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming
Authors: Busaba Phurksaphanrat
Abstract:
This research proposes a pre-emptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives of the problem are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often the minimization of makespan; however, both time and cost should be considered at the same time, with different levels of priority. Moreover, not all elements of a project's cost are included in the conventional cost objective function, and an incomplete total project cost causes an error in finding the project scheduling time. In this research, pre-emptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution to the problem. Moreover, it is also flexible in adjusting to find a variety of alternative solutions.
Keywords: multi-mode resource constrained project scheduling problem, fuzzy set, goal programming, pre-emptive fuzzy goal programming
Procedia PDF Downloads 439
1945 Production of Ultra-Low Temperature by the Vapor Compression Refrigeration Cycles with Environment Friendly Working Fluids
Authors: Sameh Frikha, Mohamed Salah Abid
Abstract:
We investigate the performance of an integrated cascade (IC) refrigeration system which uses environment-friendly zeotropic mixtures. Computational calculations have been carried out by varying the pressure levels at the evaporator and the condenser of the system. The effects of the mass flow rate of the refrigerant on the coefficient of performance (COP) are presented. We show that the integrated cascade system produces ultra-low temperatures in the evaporator by using an environment-friendly zeotropic mixture.
Keywords: coefficient of performance, environment friendly zeotropic mixture, integrated cascade, ultra low temperature, vapor compression refrigeration cycles
Procedia PDF Downloads 264
1944 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization
Authors: Cheng-Jui Li, Chien-Chou Tseng
Abstract:
This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid can be traced by an Eulerian-Eulerian model. Inside the tower, the contact between the desulfurization liquid flowing from the spray nozzles and the flue gas flow triggers chemical reactions that remove the sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the mixing level between the flue gas and the desulfurization liquid. In order to significantly improve the desulfurization efficiency, the mixing efficiency and the residence time can be increased by perforated sieve trays. Hence, the purpose of this research is to investigate the flow structure of the sieve trays for flue gas desulfurization by numerical simulation. In this study, there is an outlet at the top of the FGD tower to discharge the clean gas, and the FGD tower has a deep tank at the bottom, which is used to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. There are four perforated plates in the major desulfurization zone, spaced 0.4 m from each other, and the spray array placed above the top sieve tray includes 33 nozzles. Each nozzle injects desulfurization liquid consisting of a Mg(OH)2 solution. For each sieve tray, the outside diameter, the hole diameter, and the porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas flows into the FGD tower through the space between the major desulfurization zone and the deep tank and finally becomes clean. The desulfurization liquid and the liquid slurry go to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, the downward momentum is transferred to the upper surface of the tray. As a result, a thin liquid layer develops above the sieve tray, the so-called slurry layer, in which the liquid volume fraction is around 0.3-0.7. Therefore, the liquid phase cannot be considered a discrete phase under the Eulerian-Lagrangian framework. Besides, there is a liquid column through the sieve trays. The downward liquid column becomes narrow as it interacts with the upward gas flow. After the flue gas flows into the major desulfurization zone, the flow direction of the flue gas is upward (+y) in the tube between the liquid column and the solid boundary of the FGD tower. As a result, the flue gas near the liquid column may be rolled down to the slurry layer, which develops a vortex or circulation zone between any two sieve trays. The vortex structure between two sieve trays results in a sufficiently large two-phase contact area and also increases the number of times that the flue gas interacts with the desulfurization liquid. On the other hand, the sieve trays improve the two-phase mixing, which may improve the SO2 removal efficiency.
Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray
Procedia PDF Downloads 286
1943 Maximum Deformation Estimation for Reinforced Concrete Buildings Using Equivalent Linearization Method
Authors: Chien-Kuo Chiu
Abstract:
In displacement-based seismic design and evaluation, the equivalent linearization method is one of the approximation methods used to estimate the maximum inelastic displacement response of a system. In this study, the accuracy of two equivalent linearization methods is investigated. The investigation covers three soil conditions in Taiwan (Taipei Basin 1, 2, and 3) and five different building heights (H_r = 10, 20, 30, 40, and 50 m). The first method is the Taiwan equivalent linearization method (TELM), which was proposed based on the Japanese equivalent linearization method considering a modification factor α_T = 0.85. On the basis of the Lin and Miranda study, the second method is proposed with some modifications considering Taiwan soil conditions. This study shows that the Taiwanese equivalent linearization method gives better estimates than the modified Lin and Miranda method (MLM). The error indices for the Taiwanese equivalent linearization method are 16%, 13%, and 12% for Taipei Basin 1, 2, and 3, respectively. Furthermore, a ductility demand spectrum of a single-degree-of-freedom (SDOF) system is presented in this study as a guide for engineers to estimate the ductility demand of a structure.
Keywords: displacement-based design, ductility demand spectrum, equivalent linearization method, RC buildings, single-degree-of-freedom
Procedia PDF Downloads 164
1942 Computational Insights Into Allosteric Regulation of Lyn Protein Kinase: Structural Dynamics and Impacts of Cancer-Related Mutations
Authors: Mina Rabipour, Elena Pallaske, Floyd Hassenrück, Rocio Rebollido-Rios
Abstract:
Protein tyrosine kinases, including Lyn kinase of the Src family kinases (SFK), regulate cell proliferation, survival, and differentiation. Lyn kinase has been implicated in various cancers, positioning it as a promising therapeutic target. However, the conserved ATP-binding pocket across SFKs makes developing selective inhibitors challenging. This study aims to address this limitation by exploring the potential for allosteric modulation of Lyn kinase, focusing on how its structural dynamics and specific oncogenic mutations impact its conformation and function. To achieve this, we combined homology modeling, molecular dynamics simulations, and data science techniques to conduct microsecond-length simulations. Our approach allowed a detailed investigation into the interplay between Lyn’s catalytic and regulatory domains, identifying key conformational states involved in allosteric regulation. Additionally, we evaluated the structural effects of Dasatinib, a competitive inhibitor, and ATP binding on Lyn active conformation. Notably, our simulations show that cancer-related mutations, specifically I364L/N and E290D/K, shift Lyn toward an inactive conformation, contrasting with the active state of the wild-type protein. This may suggest how these mutations contribute to aberrant signaling in cancer cells. We conducted a dynamical network analysis to assess residue-residue interactions and the impact of mutations on the Lyn intramolecular network. This revealed significant disruptions due to mutations, especially in regions distant from the ATP-binding site. These disruptions suggest potential allosteric sites as therapeutic targets, offering an alternative strategy for Lyn inhibition with higher specificity and fewer off-target effects compared to ATP-competitive inhibitors. Our findings provide insights into Lyn kinase regulation and highlight allosteric sites as avenues for selective drug development. Targeting these sites may modulate Lyn activity in cancer cells, reducing toxicity and improving outcomes. Furthermore, our computational strategy offers a scalable approach for analyzing other SFK members or kinases with similar properties, facilitating the discovery of selective allosteric modulators and contributing to precise cancer therapies.
Keywords: lyn tyrosine kinase, mutation analysis, conformational changes, dynamic network analysis, allosteric modulation, targeted inhibition
Procedia PDF Downloads 18
1941 Optimal Sensing Technique for Estimating Stress Distribution of 2-D Steel Frame Structure Using Genetic Algorithm
Authors: Jun Su Park, Byung Kwan Oh, Jin Woo Hwang, Yousok Kim, Hyo Seon Park
Abstract:
For structural safety, the maximum stress calculated from the stress distribution of a structure is widely used. The stress distribution can be estimated from the deformed shape of the structure obtained from measurements. Although the estimation of stress is strongly affected by the location and number of sensing points, most studies have conducted stress estimation without a reasonable basis for the sensing plan, such as the location and number of sensors. In this paper, an optimal sensing technique for estimating the stress distribution is proposed. This technique determines the optimal location and number of sensing points for a 2-D frame structure by using a genetic algorithm to minimize the error in the stress distribution between the analytical model and the estimation obtained by cubic smoothing splines. To verify the proposed method, the optimal sensor measurement technique is applied to simulation tests on a 2-D steel frame structure. The simulation tests are performed under various loading scenarios. Through those tests, the optimal sensing plan for the structure is suggested and verified.
Keywords: genetic algorithm, optimal sensing, optimizing sensor placements, steel frame structure
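A compact sketch of the idea follows: a genetic search picks sensor indices along a member so that a cubic spline through the sensed points reproduces an assumed deformed shape with minimal error. The shape, the use of an interpolating (rather than smoothing) spline, and the GA settings are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Assumed "true" deformed shape of a frame member, sampled densely
x_full = np.linspace(0.0, 6.0, 121)
true_shape = 0.01 * np.sin(np.pi * x_full / 6.0) + 0.002 * np.sin(3 * np.pi * x_full / 6.0)

def fitness(idx):
    """Negative reconstruction error when only the sensed points are splined."""
    idx = np.unique(np.concatenate(([0], idx, [len(x_full) - 1])))   # keep the member ends
    spline = CubicSpline(x_full[idx], true_shape[idx])
    return -np.sqrt(np.mean((spline(x_full) - true_shape) ** 2))

def genetic_search(n_sensors=4, pop=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    candidates = np.arange(1, len(x_full) - 1)
    population = [rng.choice(candidates, n_sensors, replace=False) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = rng.choice(np.concatenate((parents[a], parents[b])), n_sensors, replace=False)
            if rng.random() < 0.3:                           # mutation: move one sensor
                child[rng.integers(n_sensors)] = rng.choice(candidates)
            child = np.unique(child)
            if len(child) < n_sensors:                       # repair duplicates by redrawing
                child = rng.choice(candidates, n_sensors, replace=False)
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return np.sort(best), -fitness(best)

locations, rmse = genetic_search()
print("sensor indices:", locations, " reconstruction RMSE:", rmse)
```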
Procedia PDF Downloads 537
1940 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system is demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: conduction, inverse problems, conjugate gradient method, laser
Procedia PDF Downloads 372
1939 Implementation of Data Science in Field of Homologation
Authors: Shubham Bhonde, Nekzad Doctor, Shashwat Gawande
Abstract:
For the use and import of Keys and ID Transmitters, as well as Body Control Modules with radio transmission, homologation is required in many countries. The final deliverables in the homologation of a product are certificates. Across the world of homologation there are approximately 200 certificates per product, most of them in local languages. It is challenging to manually investigate each certificate and extract relevant data, such as the expiry date and approval date. It is most important to obtain accurate data from the certificate, as inaccuracy may lead to missed re-homologation of certificates, which will result in a non-compliance situation. There is scope for automation in reading certificate data in the field of homologation, and we use deep learning as the tool for this automation. We first trained a model using machine learning by providing the basic data of all countries; this model is trained only once. We trained the model by feeding PDF and JPG files using an ETL process, so that the trained model gives more accurate results later. As an outcome, we obtain the expiry date and approval date of the certificate with a single click. This will eventually help to implement automation features on a broader level in the database where certificates are stored. This automation will help to reduce human error to an almost negligible level.
Keywords: homologation, re-homologation, data science, deep learning, machine learning, ETL (extract transform loading)
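As a much simpler stand-in for the trained deep-learning extractor described above, the sketch below pulls approval and expiry dates out of OCR'd certificate text with labelled regular expressions; the label variants and date formats are assumptions made for illustration.

```python
import re

# Assumed label variants for the two fields; real certificates span many languages
DATE_PATTERN = r"(\d{1,2}[./-]\d{1,2}[./-]\d{2,4}|\d{4}-\d{2}-\d{2})"
FIELDS = {
    "approval_date": ["approval date", "date of approval", "issued on"],
    "expiry_date": ["expiry date", "valid until", "expiration date"],
}

def extract_dates(certificate_text):
    """Pull approval/expiry dates out of OCR'd certificate text via labelled patterns."""
    found = {}
    lowered = certificate_text.lower()
    for field, labels in FIELDS.items():
        for label in labels:
            m = re.search(re.escape(label) + r"\s*[:\-]?\s*" + DATE_PATTERN, lowered)
            if m:
                found[field] = m.group(1)
                break
    return found

sample = "Certificate No. 123\nDate of Approval: 12.05.2021\nValid until: 11.05.2026"
print(extract_dates(sample))   # {'approval_date': '12.05.2021', 'expiry_date': '11.05.2026'}
```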
Procedia PDF Downloads 164
1938 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data
Authors: S. Jurado, E. Pazmino
Abstract:
Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, and oil extraction. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes or 'burns' void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis determination was conducted by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and was utilized to determine the pore-throat size distribution. Graphical user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input to calculate porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution, and porosity determination for 100³, 320³, and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
Keywords: medial axis, pore-throat distribution, porosity, porous media
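A small sketch of the layer-by-layer 'burn' step on a synthetic voxel domain is shown below, using iterative binary erosion from SciPy; the grain geometry is an assumption, and the collision detection that turns burn numbers into the medial axis is not included here.

```python
import numpy as np
from scipy import ndimage

def burn_numbers(void):
    """Assign each void voxel the iteration at which it is 'burnt' (peeled from the boundary)."""
    burn = np.zeros(void.shape, dtype=int)
    remaining = void.copy()
    layer = 0
    while remaining.any():
        layer += 1
        eroded = ndimage.binary_erosion(remaining)   # voxels that survive this peel
        burnt_now = remaining & ~eroded              # current boundary layer of the void space
        burn[burnt_now] = layer
        remaining = eroded
    return burn

# Small synthetic porous domain: spherical grains in a cube (stand-in for HRXMT data)
rng = np.random.default_rng(0)
shape = (60, 60, 60)
coords = np.indices(shape)
solid = np.zeros(shape, dtype=bool)
for _ in range(25):
    c = rng.integers(5, 55, size=3)
    solid |= np.sum((coords - c[:, None, None, None]) ** 2, axis=0) < 8 ** 2
void = ~solid
burn = burn_numbers(void)
print("porosity:", void.mean(), " deepest burn layer:", burn.max())
```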
Procedia PDF Downloads 118
1937 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation
Authors: Mohammad Anwar, Shah Waliullah
Abstract:
This study investigates panel data regression models. Bayesian and classical methods are used to study the impact of institutions on economic growth using data from 1990-2014, with a focus on developing countries. Under both the classical and Bayesian methodologies, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used, and a normal-gamma prior is adopted for the panel data models. The analysis was done using the WinBUGS14 software. The estimated results showed that the panel data models are valid models under the Bayesian methodology. In the Bayesian approach, all independent variables had a positive and significant effect on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models; it was also shown that the fixed effect model has the lowest standard error compared with the other models.
Keywords: Bayesian approach, common effect, fixed effect, random effect, Dynamic Random Effect Model
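For comparison with the Bayesian fit, the classical fixed-effects (within) estimator can be written in a few lines; the sketch below runs it on randomly generated stand-in panel data, so the coefficients and standard errors are illustrative only.

```python
import numpy as np

def fixed_effects(y, X, groups):
    """Within estimator: demean y and X inside each country, then run pooled OLS."""
    yd = y.astype(float).copy()
    Xd = X.astype(float).copy()
    for g in np.unique(groups):
        idx = groups == g
        yd[idx] -= yd[idx].mean()
        Xd[idx] -= Xd[idx].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    resid = yd - Xd @ beta
    dof = len(y) - len(np.unique(groups)) - X.shape[1]
    cov = np.linalg.inv(Xd.T @ Xd) * (resid @ resid / dof)
    return beta, np.sqrt(np.diag(cov))

# Synthetic panel: 30 countries, 25 years, two "institution" regressors
rng = np.random.default_rng(5)
n_c, n_t = 30, 25
groups = np.repeat(np.arange(n_c), n_t)
X = rng.standard_normal((n_c * n_t, 2))
alpha = np.repeat(rng.normal(0, 1, n_c), n_t)          # country fixed effects
y = alpha + 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, n_c * n_t)
beta, se = fixed_effects(y, X, groups)
print("coefficients:", beta, " standard errors:", se)
```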
Procedia PDF Downloads 70
1936 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading
Authors: Peter Shi
Abstract:
Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and the statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of the conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy-low and sell-high. The thresholds for “low” and “high” are precisely derived using a max-min operation on Bayes’s error. This explicit connection harmonizes the Efficient Market Hypothesis and Statistical Arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the “buy-low” and “sell-high” algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.
Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market
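The trading rule can be prototyped as a long-flat backtest with two price thresholds, as sketched below; the rolling-percentile thresholds and the synthetic mean-reverting price path are assumptions, standing in for the Bayes-error-derived levels and the historical data used in the paper.

```python
import numpy as np

def buy_low_sell_high(prices, low_q=0.25, high_q=0.75, window=60):
    """Go long when the price drops below the rolling low threshold, flat when it exceeds the high one."""
    position = 0                      # 0 = flat, 1 = long
    cash, shares = 1.0, 0.0
    for t in range(window, len(prices)):
        hist = prices[t - window:t]
        low, high = np.quantile(hist, low_q), np.quantile(hist, high_q)
        if position == 0 and prices[t] < low:        # "buy low"
            shares, cash, position = cash / prices[t], 0.0, 1
        elif position == 1 and prices[t] > high:     # "sell high"
            cash, shares, position = shares * prices[t], 0.0, 0
    return cash + shares * prices[-1]                # final portfolio value

# Synthetic mean-reverting price path (stand-in for historical data)
rng = np.random.default_rng(11)
p = [100.0]
for _ in range(2000):
    p.append(p[-1] + 0.05 * (100.0 - p[-1]) + rng.normal(0, 1.0))
prices = np.array(p)
print("final value of 1.0 invested:", buy_low_sell_high(prices))
```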
Procedia PDF Downloads 74
1935 Study the Sloshing Phenomenon in the Tank Filled Partially with Liquid Using Computational Fluid Dynamics (CFD) Simulation
Authors: Amit Kumar, Jaikumar V, Pradeep AG, Shivakumar Bhavi
Abstract:
Reducing sloshing is one of the major challenges in industries where the transport of liquid is involved. The present study investigates the sloshing effect for liquid levels of 25%, 50%, and 75% of the tank capacity. CFD simulations for the three liquid levels were carried out using a time-based multiphase Volume of Fluid (VOF) scheme. Baffles were introduced to examine the sloshing effect inside the tank, and the results were compared against the baseline case to assess their effectiveness. The maximum liquid height over the period of the simulation was considered the parameter for measuring the sloshing effect inside the tank. It was found that the addition of baffles reduced the sloshing effect inside the tank compared to the baseline model.
Keywords: sloshing, CFD, VOF, baffles
Procedia PDF Downloads 259