Search results for: fuzzy set Models
4389 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions
Authors: Mahalakshmi Vivekanandan S.
Abstract:
As per the recent changes in global policies, climate-related changes and the impact they cause across every sector are viewed as green swan events – in essence, climate-related changes can often happen and lead to risk and a lot of uncertainty, but need to be mitigated instead of being treated as black swan events. This raises the question of how this risk can be computed so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types – credit risk, market risk, operational risk, liquidity risk, reputational risk and other risk types. The models required to compute this risk have to consider the different industrial needs of the counterparty, as well as the contributing factors – be they the different risk drivers, the different transmission channels, the different approaches, or the granularity of the available data. This suggests that climate-related changes, though they affect Pillar I risks, should be treated as a Pillar II risk. This risk has to be modeled specifically on the financial institution's actual exposure to different industries instead of generalizing the risk charge, and it will have to be held as additional capital by the financial institution on top of its Pillar I risks and its existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks – for both credit and market risks. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. The paper first discusses the increase in the concentration of greenhouse gases that in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. The framework is used to show how climate change and the associated policy changes have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also covers the climate risk component of the Pillar II capital calculations and why it makes sense for a bank to maintain this capital in addition to its regular Pillar I and Pillar II capital.
Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling
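As a concrete illustration of the scenario-modeling idea in the abstract, the sketch below computes a probability-weighted expected credit loss over hypothetical climate transition scenarios and reads off an indicative Pillar II add-on. The scenario names, probabilities, PD/LGD values, and exposure figure are all invented for illustration; this is not the paper's actual framework.

```python
# Minimal sketch (not the paper's model): scenario-conditional expected credit
# loss for an automobile-sector exposure under climate transition paths.
scenarios = {
    # name: (probability, PD stressed by transition risk, LGD) - illustrative
    "orderly_transition":    (0.50, 0.020, 0.40),
    "disorderly_transition": (0.35, 0.045, 0.45),
    "hot_house_world":       (0.15, 0.080, 0.55),
}

exposure_at_default = 100_000_000  # hypothetical auto-sector portfolio

def scenario_weighted_el(scens: dict, ead: float) -> float:
    """Probability-weighted expected loss across climate scenarios."""
    return sum(p * pd * lgd * ead for p, pd, lgd in scens.values())

baseline_el = 0.015 * 0.40 * exposure_at_default  # no climate stress
climate_el = scenario_weighted_el(scenarios, exposure_at_default)

# The excess over the baseline is one candidate for a Pillar II add-on.
print(f"Baseline EL: {baseline_el:,.0f}")
print(f"Climate-stressed EL: {climate_el:,.0f}")
print(f"Indicative Pillar II add-on: {climate_el - baseline_el:,.0f}")
```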
Procedia PDF Downloads 140
4388 Droplet Entrainment and Deposition in Horizontal Stratified Two-Phase Flow
Authors: Joshua Kim Schimpf, Kyun Doo Kim, Jaseok Heo
Abstract:
In this study, droplet behavior under the horizontal stratified flow regime is examined for air and water flow in horizontal pipe experiments with pipe diameters of 0.24 m, 0.095 m, and 0.0486 m. The effects of gravity, pipe diameter, and turbulent diffusion on droplet deposition are considered. Models for droplet entrainment and deposition are proposed that consider the developing length. Validation against experimental data from the REGARD (CEA) and Williams (University of Illinois) experiments was performed using SPACE (Safety and Performance Analysis Code for Nuclear Power Plants).
Keywords: droplet, entrainment, deposition, horizontal
Procedia PDF Downloads 377
4387 Numerical Modeling of the Depth-Averaged Flow over a Hill
Authors: Anna Avramenko, Heikki Haario
Abstract:
This paper reports the development and application of a 2D depth-averaged model. The main goal of this contribution is to apply the depth-averaged equations to a wind park model in which the geometry is introduced into the mathematical model through mass and momentum source terms. The depth-averaged model will be used in the future to find the optimal position of wind turbines in the wind park. The k-ε and 2D LES turbulence models were considered in this article. 2D CFD simulations for a single hill were performed to check the depth-averaged model in practice.
Keywords: depth-averaged equations, numerical modeling, CFD, wind park model
Procedia PDF Downloads 603
4386 Interfacing and Replication of Electronic Machinery Using MATLAB/SIMULINK
Authors: Abdulatif Abdulsalam, Mohamed Shaban
Abstract:
This paper introduces the interfacing and simulation of electronic machinery based on the MATLAB/Simulink simulation package. The simulated components include dc-dc converters, power rectifiers, induction machines, dc machines, synchronous machines, and more complete systems. The rectifier model includes solid-state device models. The tools support the clear-cut structuring and simulation of complex dynamic systems interfacing with power electronic machines.
Keywords: power electronics, machine, MATLAB, simulink
Procedia PDF Downloads 357
4385 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows
Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham
Abstract:
In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk of and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10%, and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis
Procedia PDF Downloads 65
4384 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean
Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin
Abstract:
Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of the stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) < 100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region and spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww) with different proportions of phospholipids (PL), wax esters (WE), triacylglycerol (TAG) and sterol (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww), the liver acting as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote sustainability of the fishery.
Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content
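The length-at-maturity figure (L50 = 85.3 ± 0.7 cm LF) is the kind of estimate typically obtained by fitting a logistic maturity ogive to binary mature/immature calls. A minimal sketch of that standard fit, on synthetic data generated to match the reported L50 (not the study's albacore measurements), is:

```python
# Hedged sketch: estimate L50 by fitting a logistic maturity ogive.
import numpy as np
from scipy.optimize import curve_fit

def ogive(length, l50, slope):
    """Logistic probability of being mature at a given fork length (cm)."""
    return 1.0 / (1.0 + np.exp(-slope * (length - l50)))

rng = np.random.default_rng(0)
fork_length = rng.uniform(60, 110, 500)            # cm LF, synthetic
true_p = ogive(fork_length, l50=85.3, slope=0.25)  # matches reported L50
mature = rng.binomial(1, true_p)                   # 0 = immature, 1 = mature

params, cov = curve_fit(ogive, fork_length, mature, p0=[80, 0.2])
l50_hat, slope_hat = params
l50_se = np.sqrt(cov[0, 0])
print(f"L50 = {l50_hat:.1f} ± {l50_se:.1f} cm LF")
```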
Procedia PDF Downloads 260
4383 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks
Authors: Mehrdad Shafiei Dizaji, Hoda Azari
Abstract:
The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights through self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant features. By integrating physics directly into the neural network, the model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of PINNs has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data-driven
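A conceptual sketch of the PINN loss described above: a data-misfit term plus a penalty on the residual of a governing PDE evaluated at collocation points. Here the 1D wave equation stands in for full electromagnetic wave propagation, and the network size, wave speed, and loss weight are illustrative assumptions, not the paper's architecture.

```python
# Hedged PINN sketch: loss = data fidelity + physics residual (1D wave eqn).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
c = 0.3  # assumed wave speed

def physics_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of u_tt - c^2 u_xx at collocation points (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t.sum(), xt, create_graph=True)[0][:, 1:2]
    return u_tt - c**2 * u_xx

xt_data = torch.rand(128, 2)   # (x, t) locations where GPR traces exist
u_data = torch.rand(128, 1)    # placeholder measured amplitudes
xt_col = torch.rand(1024, 2)   # collocation points for the physics term

loss = (torch.mean((net(xt_data) - u_data) ** 2)              # data misfit
        + 1e-2 * torch.mean(physics_residual(xt_col) ** 2))   # physics prior
loss.backward()  # one step of a standard optimizer loop would follow
```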
Procedia PDF Downloads 40
4382 Analytical Investigation of Modeling and Simulation of Different Combinations of Sinusoidal Supplied Autotransformer under Linear Loading Conditions
Authors: M. Salih Taci, N. Tayebi, I. Bozkır
Abstract:
This paper investigates the operation of a sinusoidally supplied autotransformer for the different states of magnetic polarity of the primary and secondary terminals, covering four different step-up and step-down analytical conditions. A new analytical model and equations for dot-marked, polarity-based step-up and step-down autotransformers are presented. These models are validated by simulation of the current and voltage waveforms for each state. The PSpice environment was used for the simulation.
Keywords: autotransformer modeling, autotransformer simulation, step-up autotransformer, step-down autotransformer, polarity
Procedia PDF Downloads 319
4380 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models
Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen
Abstract:
Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, being contributors to economic and social well-being. Due to this, the success, growth and competitiveness of SMEs are critical. However, there are many factors that undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages and poor management. The Fourth Industrial Revolution offers new tools and opportunities, such as digital transformation and business model innovation (BMI), for the SME sector to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data and analytics can significantly improve business efficiencies, value propositions and customer experiences. Digital transformation can contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in their participation in digital transformation. Extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this is impacting their growth, competitiveness and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since it is a multivariate statistical analysis technique for analysing structural relationships and a suitable research method to test the hypotheses in the model. Empirical research is needed to gather more insight into how and whether SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers and employees at all levels. The findings will indicate whether digital transformation can indeed impact the growth, competitiveness and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will also exhibit how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant and important topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that are distinctly different in economic and cultural factors.
Keywords: business models, business model innovation, digital transformation, SMEs
Procedia PDF Downloads 240
4379 Reliability Modeling of Repairable Subsystems in Semiconductor Fabrication: A Virtual Age and General Repair Framework
Authors: Keshav Dubey, Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta
Abstract:
In the semiconductor capital equipment industry, effective modeling of repairable system reliability is crucial for optimizing maintenance strategies and ensuring operational efficiency. However, repairable system reliability modeling using a renewal process is not as popular in the semiconductor equipment industry as it is in the locomotive and automotive industries, and utilization of this approach will help optimize maintenance practices. This paper presents a structured framework that leverages both parametric and non-parametric approaches to model the reliability of repairable subsystems based on operational data, maintenance schedules, and system-specific conditions. Data is organized at the equipment ID level, facilitating trend testing to uncover failure patterns and system degradation over time. For non-parametric modeling, the Mean Cumulative Function (MCF) approach is applied, offering a flexible method to estimate the cumulative number of failures over time without assuming an underlying statistical distribution. This allows for empirical insights into subsystem failure behavior based on historical data. On the parametric side, virtual age modeling, along with Homogeneous and Non-Homogeneous Poisson Process (HPP and NHPP) models, is employed to quantify the effect of repairs and the aging process on subsystem reliability. These models allow for a more structured analysis by characterizing repair effectiveness and system wear-out trends over time. A comparison of various Generalized Renewal Process (GRP) approaches highlights their utility in modeling different repair effectiveness scenarios. These approaches provide a robust framework for assessing the impact of maintenance actions on system performance and reliability. By integrating both parametric and non-parametric methods, this framework offers a comprehensive toolset for reliability engineers to better understand equipment behavior, assess the effectiveness of maintenance activities, and make data-driven decisions that enhance system availability and operational performance in semiconductor fabrication facilities.
Keywords: reliability, maintainability, homogeneous Poisson process, repairable system
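A minimal sketch of the non-parametric MCF estimator mentioned above, applied to a toy fleet of repairable units: at each event age, the MCF increments by the number of failures divided by the number of units still under observation. Equipment IDs, failure ages, and censoring ages are synthetic, not real semiconductor-equipment data.

```python
# Hedged sketch: non-parametric Mean Cumulative Function for a small fleet.
failures = {"tool_A": [150, 400, 900],   # failure ages (hours) per unit
            "tool_B": [300, 1200],
            "tool_C": [700]}
end_of_observation = {"tool_A": 1000, "tool_B": 1500, "tool_C": 800}

event_ages = sorted(t for evs in failures.values() for t in evs)

mcf = 0.0
for t in event_ages:
    # units still under observation at age t
    at_risk = sum(1 for end in end_of_observation.values() if end >= t)
    # failures occurring at age t among units still observed
    n_events = sum(1 for eid, evs in failures.items()
                   if t in evs and end_of_observation[eid] >= t)
    mcf += n_events / at_risk
    print(f"t = {t:5.0f} h  units at risk = {at_risk}  MCF = {mcf:.3f}")
```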
Procedia PDF Downloads 19
4378 Methodologies, Systems Development Life Cycle and Modeling Languages in Agile Software Development
Authors: I. D. Arroyo
Abstract:
This article seeks to integrate different concepts from contemporary software engineering with an agile development approach. We clarify some definitions and uses, draw a distinction between the Systems Development Life Cycle (SDLC) and methodologies, and differentiate types of frameworks, such as methodological, philosophical and behavioral frameworks, standards, and documentation. We define relationships based on the documentation of the development process through formal and ad hoc models, and we describe the usefulness of DevOps and Agile Modeling as integrative methodologies of principles and best practices.
Keywords: methodologies, modeling languages, agile modeling, UML
Procedia PDF Downloads 186
4377 Algorithms Inspired from Human Behavior Applied to Optimization of a Complex Process
Authors: S. Curteanu, F. Leon, M. Gavrilescu, S. A. Floria
Abstract:
Optimization algorithms inspired by human behavior were applied in this approach, associated with neural network models. The algorithms belong to the classes of human learning-and-cooperation behaviors and human competitive behavior. For the first class, the main strategies include random learning, individual learning, and social learning, and the selected algorithms are: simplified human learning optimization (SHLO), social learning optimization (SLO), and teaching-learning-based optimization (TLBO). For the second class, the concept of learning is associated with competitiveness, and the selected algorithms are sports-inspired algorithms (the Football Game Algorithm, FGA, and Volleyball Premier League, VPL) and the Imperialist Competitive Algorithm (ICA). A real process, the synthesis of polyacrylamide-based multicomponent hydrogels, where some parameters are difficult to obtain experimentally, is considered as a case study. Reaction yield and swelling degree are predicted as a function of reaction conditions (acrylamide concentration, initiator concentration, crosslinking agent concentration, temperature, reaction time, and amount of inclusion polymer, which could be starch, poly(vinyl alcohol) or gelatin). The experimental results contain 175 data points. Artificial neural networks are obtained in optimal form with the biologically inspired algorithms, the optimization being performed at two levels: structural and parametric. Feedforward neural networks with one or two hidden layers and no more than 25 neurons in the intermediate layers were obtained, with correlation coefficients in the validation phase over 0.90. The best results were obtained with the TLBO algorithm, the correlation coefficient being 0.94 for an MLP(6:9:20:2) – a feedforward neural network with two hidden layers of 9 and 20 intermediate neurons, respectively. The good results obtained prove the efficiency of the optimization algorithms. Beyond the good results, what is important in this approach is the simulation methodology, combining neural networks and biologically inspired optimization algorithms, which provides satisfactory results. In addition, the methodology developed in this approach is general and flexible, so that it can be easily adapted to other processes in association with different types of models.
Keywords: artificial neural networks, human behaviors of learning and cooperation, human competitive behavior, optimization algorithms
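A compact sketch of TLBO, the algorithm that gave the best results above, applied to a toy sphere function rather than the hydrogel neural-network models; population size, iteration count, and bounds are illustrative assumptions.

```python
# Hedged TLBO sketch: teacher phase + learner phase on a toy objective.
import numpy as np

def tlbo(objective, dim=6, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.apply_along_axis(objective, 1, X)
    for _ in range(iters):
        # Teacher phase: move learners toward the best, away from the mean
        teacher, mean = X[np.argmin(f)], X.mean(axis=0)
        Tf = rng.integers(1, 3)                  # teaching factor, 1 or 2
        X_new = np.clip(X + rng.random((pop, dim)) * (teacher - Tf * mean),
                        lo, hi)
        f_new = np.apply_along_axis(objective, 1, X_new)
        better = f_new < f
        X[better], f[better] = X_new[better], f_new[better]
        # Learner phase: each learner interacts with a random peer
        for i in range(pop):
            j = rng.integers(pop)
            step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
            cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fc = objective(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
    return X[np.argmin(f)], f.min()

best_x, best_f = tlbo(lambda x: np.sum(x**2))
print(best_f)  # should approach 0 for the sphere function
```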
Procedia PDF Downloads 108
4376 Why Do We Need Hierachical Linear Models?
Authors: Mustafa Aydın, Ali Murat Sunbul
Abstract:
Hierarchical or nested data structures are usually seen in many research areas. Especially in the field of education, if we examine most studies we can see nested structures: students in classes, classes in schools, schools in cities, and cities in regions are typical examples. In a hierarchical structure, students in the same class share the same physical conditions, have similar experiences, and learn from the same teachers, and therefore behave more similarly to one another than to students in other classes.
Keywords: hierarchical linear modeling, nested data, hierarchical structure, data structure
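A minimal illustration of a two-level (random-intercept) model for students nested in classes, fitted with statsmodels on simulated data; variable names and effect sizes are invented for illustration.

```python
# Hedged sketch: random-intercept model, students (level 1) in classes (level 2).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_classes, n_per_class = 30, 25
class_id = np.repeat(np.arange(n_classes), n_per_class)
class_effect = rng.normal(0, 2.0, n_classes)[class_id]  # shared within class
study_hours = rng.uniform(0, 10, n_classes * n_per_class)
score = 50 + 3 * study_hours + class_effect + rng.normal(0, 4, class_id.size)

df = pd.DataFrame({"score": score, "hours": study_hours, "class_id": class_id})

model = smf.mixedlm("score ~ hours", df, groups=df["class_id"])
result = model.fit()
print(result.summary())

# Intraclass correlation: share of variance attributable to class membership
var_between = result.cov_re.iloc[0, 0]
icc = var_between / (var_between + result.scale)
print(f"ICC = {icc:.2f}")
```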
Procedia PDF Downloads 652
4375 Multilevel Modelling of Modern Contraceptive Use in Nigeria: Analysis of the 2013 NDHS
Authors: Akiode Ayobami, Akiode Akinsewa, Odeku Mojisola, Salako Busola, Odutolu Omobola, Nuhu Khadija
Abstract:
Purpose: Evidence exists that family planning use can contribute to a reduction in infant and maternal mortality in any country. Despite these benefits, contraceptive use in Nigeria still remains very low, only 10% among married women. Understanding the factors that predict contraceptive use is very important in order to improve the situation. In this paper, we analysed data from the 2013 Nigerian Demographic and Health Survey (NDHS) to better understand predictors of contraceptive use in Nigeria. The use of logistic regression and other traditional models in this type of situation is not appropriate, as they do not account for the social-structure influence on the response variable brought about by the hierarchical nature of the data. We therefore used multilevel modelling to explore the determinants of contraceptive use in order to account for the significant variation in modern contraceptive use by socio-demographic and other proximate variables across the different Nigerian states. Method: This data has a two-level hierarchical structure. We considered the data of 26,403 married women of reproductive age at level 1 and nested them within the 36 states and the Federal Capital Territory, Abuja, at level 2. We modelled use of modern contraceptives against demographic variables, being told about FP at a health facility, having heard of FP on TV, in a magazine or on the radio, and the husband's desire for more children, nested within the state. Results: Our results showed that the independent variables in the model were significant predictors of modern contraceptive use. The estimated variance components for the null, random intercept, and random slope models were significant (p=0.00), indicating that the variation in contraceptive use across the Nigerian states is significant and needs to be accounted for in order to accurately determine the predictors of contraceptive use; hence the data is best fitted by the multilevel model. Only being told about family planning at the health facility and religion have a significant random effect, implying that their predictability of contraceptive use varies across the states. Conclusion and Recommendation: Results showed that providing FP information at the health facility and religion need to be considered when programming to improve contraceptive use at the state level.
Keywords: multilevel modelling, family planning, predictors, Nigeria
Procedia PDF Downloads 419
4374 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images, from which maximum PSD values were extracted as features. There are three indexes for the balanced brain: index 3, index 4 and index 5. There are significant differences in the EEG signals across brain balancing index (BBI) levels. The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classification result is 88.46% accuracy. These results prove that k-NN can be used to predict the brain balancing index.
Keywords: power spectral density, 3D EEG model, brain balancing, kNN
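A hedged sketch of the classification step: k-NN on the maximum-PSD features from the alpha and beta bands, predicting the BBI label (3, 4, or 5). The features below are random stand-ins for the actual 3D EEG model outputs, and k = 5 is an assumption.

```python
# Hedged sketch: k-NN on per-subject [max PSD alpha, max PSD beta] features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 2))     # placeholder PSD features, 51 subjects
y = rng.integers(3, 6, size=51)  # BBI labels: index 3, 4, or 5

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy = {scores.mean():.2%} ± {scores.std():.2%}")
```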
Procedia PDF Downloads 487
4373 Modelling the Physicochemical Properties of Papaya Based-Cookies Using Response Surface Methodology
Authors: Mayowa Saheed Sanusi A, Musiliu Olushola Sunmonua, Abdulquadri Alakab Owolabi Raheema, Adeyemi Ikimot Adejokea
Abstract:
The development of healthy cookies for health-conscious consumers cannot be overemphasized in the present global health crisis. This study aimed to evaluate and model the influence of the ripeness level of papaya puree (unripe, ripe and overripe), oven temperature (130°C, 150°C and 170°C) and oven rack speed (stationary, 10 and 20 rpm) on the physicochemical properties of papaya-based cookies using Response Surface Methodology (RSM). The physicochemical properties (baking time, cookie mass, cookie thickness, spread ratio, proximate composition, calcium, vitamin C and Total Phenolic Content) were determined using standard procedures. The data obtained were statistically analysed at p≤0.05 using ANOVA. The polynomial regression model of response surface methodology was used to model the physicochemical properties. The adequacy of the models was determined using the coefficient of determination (R²), and the response optimizer of RSM was used to determine the optimum physicochemical properties for the papaya-based cookies. Cookies produced from overripe papaya puree were observed to have the shortest baking time; ripe papaya puree favors the cookie spread ratio, while unripe papaya puree gives cookies with the highest mass and thickness. The highest crude protein content, fiber content, calcium content, vitamin C and Total Phenolic Content (TPC) were observed in papaya-based cookies produced from overripe puree. The models for baking time, cookie mass, cookie thickness, spread ratio, moisture content, crude protein and TPC were significant, with R² ranging from 0.73 to 0.95. The optimum condition for producing papaya-based cookies with desirable physicochemical properties was obtained at a 149°C oven temperature and 17 rpm oven rack speed with the use of overripe papaya puree. Information on the use of puree from unripe, ripe and overripe papaya can help to increase the use of underutilized unripe or overripe papaya and also serve as a strategic means of obtaining a fat substitute to produce new products with lower production cost and health benefits.
Keywords: papaya based-cookies, modeling, response surface methodology, physicochemical properties
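The RSM step amounts to fitting a second-order polynomial to a response and locating its optimum. A minimal sketch in two coded factors (say, oven temperature and rack speed) on synthetic data, not the cookie measurements:

```python
# Hedged RSM sketch: quadratic response surface + optimum via scipy.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (27, 2))   # coded factors: temperature, rack speed
y = 5 - (X[:, 0] - 0.3)**2 - 2 * (X[:, 1] - 0.5)**2 + rng.normal(0, 0.1, 27)

poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print("R^2 =", model.score(poly.transform(X), y))

# Maximize the fitted surface by minimizing its negative over the design box
res = minimize(lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("optimum (coded units):", res.x)
```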
Procedia PDF Downloads 167
4372 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in the financial markets, but the causal relationship which originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamic exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets; using data for the S&P 500 index from 1978 to 2017, on the one hand we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
Procedia PDF Downloads 78
4371 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, including initial loss, reduction factor, time of concentration and time lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement, with the MIKE URBAN results falling within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in the predictions can be obtained, whereas MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
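A minimal rejection-ABC sketch in the spirit of the framework: draw parameters from priors, run the rainfall-runoff simulator, and accept draws whose simulated hydrograph is within a tolerance of the observations. The linear-reservoir "simulator", priors, and tolerance are toy stand-ins for the time-area model and settings of the paper.

```python
# Hedged rejection-ABC sketch for calibrating a toy runoff model.
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 2.0, 48)   # synthetic hyetograph

def simulate(initial_loss, k):
    """Toy runoff model: initial loss, then a linear reservoir with rate k."""
    eff = np.maximum(rain - initial_loss, 0.0)
    q = np.zeros_like(eff)
    for t in range(1, len(eff)):
        q[t] = q[t - 1] * np.exp(-1.0 / k) + eff[t]
    return q

observed = simulate(1.2, 6.0) + rng.normal(0, 0.3, 48)  # pseudo-observations

accepted = []
for _ in range(20_000):
    theta = (rng.uniform(0, 3), rng.uniform(1, 12))     # draws from priors
    dist = np.sqrt(np.mean((simulate(*theta) - observed) ** 2))
    if dist < 0.5:                                      # tolerance epsilon
        accepted.append(theta)

post = np.array(accepted)
print(f"{len(post)} draws accepted; posterior means = {post.mean(axis=0)}")
```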
Procedia PDF Downloads 309
4370 Towards Automatic Calibration of In-Line Machine Processes
Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, then two winding units wind an outer layer around the core, followed by a final pass through a second die. The objectives are (i) to train a model to predict the friction on die2 and (ii) calibration, to find the input values which result in a given friction on die2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel) and MPART (rule induction with a continuous value as output). As a previous step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die2). The modeling of the error behavior using explicative rules is used to help improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that a high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but this can be done offline just once. The calibration step is much faster, and in under one minute obtained a precision error of less than 1x10-3 for both outputs. To summarize, in the present work two processes have been modeled and calibrated. A fast processing time and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This has relevance for the quick optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
Keywords: data model, machine learning, industrial winding, calibration
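A hedged sketch of the calibration loop described above: train a forward model (kernel ridge regression here, on synthetic data), then draw Gaussian candidates for the inputs and keep the candidate whose predicted output is closest to the target. Variable meanings and values are illustrative, not the actual winding-process data.

```python
# Hedged sketch: Gaussian random-search calibration over a trained KRR model.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # process inputs
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 2]   # stand-in for "friction on die2"

model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)

target = 1.5                                       # desired output value
mu, sigma = X.mean(axis=0), X.std(axis=0)
candidates = rng.normal(mu, sigma, size=(20_000, 3))
pred = model.predict(candidates)
best = np.argmin(np.abs(pred - target))
print("calibrated inputs:", candidates[best], "predicted output:", pred[best])
```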
Procedia PDF Downloads 241
4369 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design
Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell
Abstract:
The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method to create an orthotic brace involves casting the patient's torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist's skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient's torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient's torso and pelvis, 2) the patient's pre-rectification plaster mold, and 3) the patient's post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks. The 3D stick models were used to analyze the changes in position and orientation of the anatomical landmarks between scans using the Blender open-source software. 3D surface deviation maps represented volume differences between the scans using the CloudCompare open-source software. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patients' torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads: usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape, location, and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that can accelerate the brace design process and make steps towards an automated digital rectification process.
Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity
Procedia PDF Downloads 145
4368 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional unsupervised BIC-based approach is utilized in the first phase to detect speaker changes and train a neural network, while in the second phase the trained parameters output by the neural network are used to classify the next incoming audio stream. Using this approach, an accuracy comparable to similar BIC-based approaches is achieved with a significant improvement in terms of computation time.
Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation
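The first-phase BIC test typically uses the Chen-Gopalakrishnan ΔBIC statistic: a candidate change point t splits a window of feature vectors into two segments, and modeling them with two full-covariance Gaussians is preferred when ΔBIC > 0. A sketch on synthetic MFCC-like features; the exact windowing and penalty weight of the paper are not specified, so λ = 1 is assumed.

```python
# Hedged sketch: Delta-BIC speaker-change test at a candidate split point t.
import numpy as np

def delta_bic(X: np.ndarray, t: int, lam: float = 1.0) -> float:
    """BIC gain of the two-Gaussian hypothesis over one Gaussian at split t."""
    n, d = X.shape
    logdet = lambda Z: np.linalg.slogdet(np.cov(Z, rowvar=False))[1]
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet(X)
            - 0.5 * t * logdet(X[:t])
            - 0.5 * (n - t) * logdet(X[t:])
            - penalty)

rng = np.random.default_rng(0)
seg1 = rng.normal(0.0, 1.0, (200, 12))   # speaker A, MFCC-like frames
seg2 = rng.normal(1.5, 1.2, (200, 12))   # speaker B
X = np.vstack([seg1, seg2])
print(delta_bic(X, 200))   # positive: a speaker change at frame 200
```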
Procedia PDF Downloads 502
4367 Pressure Gradient Prediction of Oil-Water Two Phase Flow through Horizontal Pipe
Authors: Ahmed I. Raheem
Abstract:
In this study, stratified and stratified-wavy flow regimes have been investigated numerically for oil (1.57 mPa s viscosity and 780 kg/m3 density) and water two-phase flow in small and large horizontal steel pipes, with diameters between 0.0254 and 0.508 m, using the ANSYS Fluent software. The volume of fluid (VOF) approach for two-phase flow was used with two-equation turbulence models of the realizable k-ε family to predict the pressure gradient.
Keywords: CFD, two-phase flow, pressure gradient, volume of fluid, large diameter, horizontal pipe, oil-water stratified and stratified wavy flow
Procedia PDF Downloads 433
4366 Creative Mathematically Modelling Videos Developed by Engineering Students
Authors: Esther Cabezas-Rivas
Abstract:
Ordinary differential equations (ODE) are a fundamental part of the curriculum for most engineering degrees, and students typically have difficulties with the subsequent abstract mathematical calculations. To enhance their motivation and profit from the fact that they are digital natives, we propose a teamwork project that includes the creation of a video. The video should explain how to mathematically model a real-world problem by transforming it into an ODE, which should then be solved using the tools learned in the lectures. This idea was implemented with first-year students of a BSc in Engineering and Management during the period of online learning caused by the outbreak of COVID-19 in Spain. Each group of 4 students was assigned a different topic: model a hot water heater, search for the shortest path, design the quickest route for delivery, cooling a computer chip, the shape of the hanging cables of the Golden Gate, detecting land mines, rocket trajectories, etc. These topics were to be worked out through two complementary channels: a written report describing the problem and a 10-15 min video on the subject. The report includes the following items: description of the problem to be modeled, detailed derivation of the ODE that models the problem, its complete solution, and interpretation in the context of the original problem. We report the outcomes of this teaching-in-context and active-learning experience, including the feedback received from the students. They highlighted the encouragement of creativity and originality, which are skills that they do not typically associate with mathematics. Additionally, the video format (unlike a common presentation) has the advantage of allowing them to critically review and self-assess the recording, repeating some parts until the result is satisfactory. As a side effect, they felt more confident about their oral abilities. In short, students agreed that they had fun preparing the video. They recognized that it was tricky to combine deep mathematical content with entertainment since, without the latter, it is impossible to engage people to view the video till the end. Despite this difficulty, after the activity they claimed to understand the material better, and they enjoyed showing the videos to family and friends during and after the project.
Keywords: active learning, contextual teaching, models in differential equations, student-produced videos
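As an example of what one such assignment boils down to, the "hot water heater" topic leads to Newton's law of cooling with a heating term, dT/dt = -k(T - T_env) + q. A short SciPy solution with invented parameter values:

```python
# Hedged sketch: hot water heater as Newton cooling plus constant heat input.
import numpy as np
from scipy.integrate import solve_ivp

k, T_env, q = 0.05, 20.0, 2.5   # 1/min, deg C, heater input in deg C/min (assumed)

def heater(t, T):
    return -k * (T - T_env) + q

sol = solve_ivp(heater, t_span=(0, 120), y0=[20.0], dense_output=True)
t = np.linspace(0, 120, 7)
print(np.round(sol.sol(t)[0], 1))  # approaches steady state T_env + q/k = 70
```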
Procedia PDF Downloads 146
4365 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a gap detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times; these repeated voxels incur costly memory operations while adding no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate voxelizations free of 26-tunnels. The gap detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the gap detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods; the gap detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
Procedia PDF Downloads 73
4364 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test
Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano
Abstract:
This study proposes a novel method to determine the mechanical properties of fruits by the use of indentation tests. The method combines experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of a banana. The banana was assumed to be a one-layer material with isotropic linear elastic mechanical behavior; the Young's modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
Keywords: finite element method, fruits, inverse analysis, mechanical properties
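To illustrate the inverse-analysis idea with an analytical stand-in for the finite element forward model: for a spherical indenter, Hertz theory gives F = (4/3) E* sqrt(R) h^(3/2), and fitting E* to a force-depth curve recovers an effective modulus. The indenter radius, Poisson ratio, and data below are assumptions, not the study's measurements.

```python
# Hedged sketch: recover Young's modulus from a synthetic indentation curve.
import numpy as np
from scipy.optimize import curve_fit

R = 2e-3  # spherical indenter radius, m (assumed)

def hertz(h, e_star):
    """Hertz contact force (N) at indentation depth h (m)."""
    return (4.0 / 3.0) * e_star * np.sqrt(R) * h ** 1.5

rng = np.random.default_rng(0)
depth = np.linspace(1e-5, 1e-3, 40)                 # m
force = hertz(depth, 0.3e6 / (1 - 0.45**2))         # E = 0.3 MPa, nu = 0.45
force += rng.normal(0, 1e-5, depth.size)            # measurement noise

(e_star_hat,), _ = curve_fit(hertz, depth, force, p0=[1e5])
E = e_star_hat * (1 - 0.45**2)   # back out E assuming nu = 0.45
print(f"E = {E / 1e6:.2f} MPa")
```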
Procedia PDF Downloads 358
4363 Molecular Insights into the 5α-Reductase Inhibitors: Quantitative Structure Activity Relationship, Pre-Absorption, Distribution, Metabolism, and Excretion and Docking Studies
Authors: Richa Dhingra, Monika, Manav Malhotra, Tilak Raj Bhardwaj, Neelima Dhingra
Abstract:
5-alpha-reductases (5AR) are membrane-bound, NADPH-dependent enzymes that convert the male hormone testosterone (T) into the more potent androgen dihydrotestosterone (DHT). DHT is required for the development and function of the male sex organs, but its overproduction has been found to be associated with physiological conditions like Benign Prostatic Hyperplasia (BPH). Thus the inhibition of 5ARs could be a key target for the treatment of BPH. In the present study, 2D and 3D Quantitative Structure Activity Relationship (QSAR) pharmacophore models have been generated for 5AR based on known inhibitory concentration (IC₅₀) values, with extensive validation. The four-featured 2D pharmacophore-based PLS model correlated the topological interactions (–OH group connected by one single bond) (SsOHE-index), semi-empirical (Quadrupole2) and physicochemical descriptors (mol. wt., bromines count, chlorines count) with 5AR inhibitory activity, and has the highest correlation coefficient (r² = 0.98, q² = 0.84; F = 57.87, pred r² = 0.88). Internal and external validation was carried out using test and proposed sets of compounds. The contribution plot of electrostatic field effects and steric interactions generated by 3D-QSAR showed interesting results in terms of internal and external predictability. The well-validated 2D Partial Least Squares (PLS) and 3D k-nearest neighbour (kNN) models were used to search for novel 5AR inhibitors with different chemical scaffolds. To gain more insight into the molecular mechanism of action of these steroidal derivatives, molecular docking and in silico absorption, distribution, metabolism, and excretion (ADME) studies were also performed. The studies revealed hydrophobic and hydrogen bonding of the ligand with residues Alanine (ALA) 63A, Threonine (THR) 60A, and Arginine (ARG) 456A of the 4AT0 protein at the hinge region. The results of the QSAR, molecular docking, and in silico ADME studies provide a guideline and mechanistic scope for the identification of more potent 5-alpha-reductase inhibitors (5ARI).
Keywords: 5α-reductase inhibitor, benign prostatic hyperplasia, ligands, molecular docking, QSAR
Procedia PDF Downloads 163
4362 Model Order Reduction of Complex Airframes Using Component Mode Synthesis for Dynamic Aeroelasticity Load Analysis
Authors: Paul V. Thomas, Mostafa S. A. Elsayed, Denis Walch
Abstract:
Airframe structural optimization at different design stages results in new mass and stiffness distributions which modify the critical design loads envelope. Determination of aircraft critical loads is an extensive analysis procedure which involves simulating the aircraft at thousands of load cases as defined in the certification requirements. It is computationally prohibitive to use a Global Finite Element Model (GFEM) for the load analysis; hence reduced-order structural models are required which closely represent the dynamic characteristics of the GFEM. This paper presents the implementation of the Component Mode Synthesis (CMS) method for the generation of high-fidelity Reduced Order Models (ROMs) of complex airframes. Here, a sub-structuring technique is used to divide the complex higher-order airframe dynamical system into a set of subsystems. Each subsystem is reduced to fewer degrees of freedom using matrix projection onto a carefully chosen reduced-order basis subspace. The reduced structural matrices are assembled for all the subsystems through interface coupling, and the dynamic response of the total system is solved. The CMS method is employed to develop the ROM of a Bombardier Aerospace business jet, which is coupled with an aerodynamic model for dynamic aeroelasticity load analysis under gust turbulence. Another set of dynamic aeroelastic loads is also generated employing a stick model of the same aircraft. The stick model is the reduced-order modelling methodology commonly used in the aerospace industry, based on stiffness generation by unitary load application. The extracted aeroelastic loads from both models are compared against those generated employing the GFEM. Critical loads, modal participation factors and modal characteristics of the different ROMs are investigated and compared against those of the GFEM. Results obtained show that the ROM generated using the Craig-Bampton CMS reduction process has superior dynamic characteristics compared to the stick model.
Keywords: component mode synthesis, Craig-Bampton reduction method, dynamic aeroelasticity analysis, model order reduction
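A compact sketch of the Craig-Bampton reduction on a generic (K, M) pair: interior degrees of freedom are condensed onto the boundary DOFs plus a truncated set of fixed-interface normal modes. The toy spring-mass chain below stands in for an airframe GFEM.

```python
# Hedged Craig-Bampton sketch: static constraint modes + fixed-interface modes.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    all_dofs = np.arange(K.shape[0])
    interior = np.setdiff1d(all_dofs, boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]

    # static constraint modes: interior response to unit boundary motion
    psi = -np.linalg.solve(Kii, Kib)
    # fixed-interface normal modes, truncated to n_modes
    w2, phi = eigh(Kii, Mii)
    phi = phi[:, :n_modes]

    # transformation T maps [boundary; modal] coordinates to full DOFs
    nb = len(boundary)
    T = np.zeros((K.shape[0], nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(interior, np.arange(nb))] = psi
    T[np.ix_(interior, nb + np.arange(n_modes))] = phi

    return T.T @ K @ T, T.T @ M @ T   # reduced stiffness and mass

n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring chain
M = np.eye(n)
Kr, Mr = craig_bampton(K, M, boundary=np.array([0, n - 1]), n_modes=4)
print(Kr.shape)   # (6, 6): 2 boundary DOFs + 4 modal coordinates
```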
Procedia PDF Downloads 209
4361 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster occurring in semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the eastern catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flow flood frequency of nearby villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are carried out. The study area corresponds to the East Basin (Ras Baalbeck & Fakeha), comprising nearly 350 km2 and situated in the Bekaa Valley of Lebanon. The case study presented in this paper has a database derived from Lebanese Army topographic maps of this region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and the digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arseal, Fekha and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine the ArcGIS/ArcMap, HEC-HMS & HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and SCS methods were chosen to build the hydrologic model of the watershed. The model was then calibrated using a flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds. The strongest reported flood in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to produce flood hazard maps for the region. HEC-RAS is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models are flexible procedures for an ungaged watershed: for some storm events they deliver good results, while for others no parameter vectors can be found.
In order to have a general methodology based on these ideas, further calibration and cross-comparison of results against the parameters of many flood events and catchment properties are required.
Keywords: flood risk management, flash flood, semi arid region, El Assi River, hazard maps
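For reference, the SCS curve-number loss model named above relates storm rainfall to direct runoff by Q = (P - Ia)^2 / (P - Ia + S), with Ia = 0.2 S and S = 25400/CN - 254 (mm). A short sketch with an illustrative curve number, not a calibrated value for Ras Baalbeck or Fekha:

```python
# Hedged sketch: SCS curve-number rainfall-runoff relation (SI units, mm).
def scs_runoff(precip_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) from storm rainfall via the SCS-CN method."""
    s = 25400.0 / curve_number - 254.0   # potential maximum retention, mm
    ia = 0.2 * s                         # initial abstraction, mm
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

for p in (20, 50, 100):
    print(f"P = {p:3d} mm -> Q = {scs_runoff(p, curve_number=85):5.1f} mm")
```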
Procedia PDF Downloads 478
4360 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
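A hedged sketch of the modeling and matching stages: one GMM per enrolled speaker trained on MFCC-like vectors (scikit-learn's k-means initialization playing the role of LBG), with test utterances scored by average log-likelihood. The features are synthetic placeholders, and the component count and covariance type are assumptions.

```python
# Hedged sketch: per-speaker GMMs, identification by max log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {                                   # speaker -> MFCC-like frames
    "spk1": rng.normal(0.0, 1.0, (800, 13)),
    "spk2": rng.normal(0.8, 1.1, (800, 13)),
}

models = {
    spk: GaussianMixture(n_components=8, covariance_type="diag",
                         init_params="kmeans", random_state=0).fit(feats)
    for spk, feats in train.items()
}

test = rng.normal(0.8, 1.1, (200, 13))      # unknown utterance (spk2-like)
scores = {spk: gmm.score(test) for spk, gmm in models.items()}
print(max(scores, key=scores.get), scores)  # expected winner: spk2
```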
Procedia PDF Downloads 309
4359 Differential Transform Method: Some Important Examples
Authors: M. Jamil Amir, Rabia Iqbal, M. Yaseen
Abstract:
In this paper, we solve some differential equations analytically by using the differential transform method. For this purpose, we consider four models of the Laplace equation with two Dirichlet and two Neumann boundary conditions, as well as the K(2,2) equation, and obtain the corresponding exact solutions. The obtained results show the simplicity of the method and the massive reduction in calculations when one compares it with other iterative methods available in the literature. It is worth mentioning that here only a small number of iterations are required to reach the closed-form solutions as series expansions of some known functions.
Keywords: differential transform method, Laplace equation, Dirichlet boundary conditions, Neumann boundary conditions
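For context, the method rests on the standard one-dimensional differential transform pair (stated here for expansion about x₀ = 0), together with operational rules that turn derivatives and products into recurrences:

```latex
% Standard one-dimensional differential transform pair (expansion about x_0 = 0)
\[
  F(k) \;=\; \frac{1}{k!}\left[\frac{d^{k}f(x)}{dx^{k}}\right]_{x=0},
  \qquad
  f(x) \;=\; \sum_{k=0}^{\infty} F(k)\,x^{k},
\]
% with typical operational rules such as
\[
  \mathcal{DT}\!\left[\frac{df}{dx}\right](k) \;=\; (k+1)\,F(k+1),
  \qquad
  \mathcal{DT}[f\,g](k) \;=\; \sum_{m=0}^{k} F(m)\,G(k-m).
\]
```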
Procedia PDF Downloads 537