Search results for: reduced order models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22402

21922 Electrochemical Deposition of Pb and PbO2 on Polymer Composites Electrodes

Authors: A. Merzouki, N. Haddaoui

Abstract:

Polymers are widely regarded as electrical insulators. These materials are characterized by low weight, low cost, and a broad range of physical and chemical properties. They have conquered application domains that, until recently, were the exclusive preserve of metals. In this work, we used composite materials (polymers/conductive fillers) as electrodes and covered them with metallic lead layers in order to use them as current collector grids in lead-acid battery plates.

Keywords: electrodeposition, polymer composites, carbon black, acetylene black

Procedia PDF Downloads 456
21921 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
21920 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neuroscience must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron models have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified according to the neuronal behaviours they can exhibit, such as spiking and bursting, and these classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on first-order differential equations, namely the Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR) models, are discussed and compared. First, the physical meaning, derivation, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were visually examined in Matlab in order to demonstrate which model can reproduce well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator and integrator behaviour. As a result, the Izhikevich model was shown to reproduce regular spiking, chattering (continuous bursting), intrinsically bursting, thalamo-cortical, low-threshold spiking and resonator behaviour. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular firing, adaptive firing, initial bursting, regular bursting, delayed firing, delayed regular bursting, transient firing and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting and chaotic firing. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behaviour of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviour of the Hindmarsh-Rose neuron model, like that of other chaotic systems, is thought to be applicable in many scientific and engineering fields such as physics, secure communication and signal processing.
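
As an illustration of how compactly such models can be simulated, the sketch below integrates the two-variable Izhikevich model with a forward Euler scheme in Python (the paper itself works in Matlab); the (a, b, c, d) values are the standard regular-spiking parameters from Izhikevich's 2003 paper, and the constant input current is an arbitrary choice.

```python
import numpy as np

# Minimal Euler integration of the Izhikevich (2003) model.
# The parameters below give the classic "regular spiking" cortical cell;
# bursting, chattering, etc. follow from other (a, b, c, d) sets.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular spiking
dt, t_end  = 0.25, 500.0             # ms
I_ext      = 10.0                    # constant input current (model units)

v, u = c, b * c
spike_times = []
for step in range(int(t_end / dt)):
    t = step * dt
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I_ext)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike cutoff and after-spike reset
        spike_times.append(t)
        v, u = c, u + d

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```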

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 181
21919 Experimental and Numerical Study of Thermal Effects in Variable Density Turbulent Jets

Authors: Mohammed El-Amine Dris, Abdelhamid Bounif

Abstract:

This paper presents an experimental and numerical investigation of variable density in axisymmetric turbulent free jets, with special attention paid to the scalar dissipation rate. In this case, the dynamic field equations are coupled to the scalar field equations through the density, which varies due to the thermal effect (jet heating). The numerical investigation is based on first- and second-order turbulence models. For the discretization of the system of equations characterizing the flow, the finite volume method described by Patankar (1980) was used. The experimental study was conducted to evaluate the dynamical characteristics of a heated axisymmetric air flow using Laser Doppler Anemometry (LDA), a very accurate optical measurement method. Experimental and numerical results are compared and discussed; the comparison does not show large differences, and the results obtained are generally satisfactory.

Keywords: scalar dissipation rate, thermal effects, turbulent axisymmetric jets, second-order modelling, laser Doppler velocimetry

Procedia PDF Downloads 450
21918 Designing a Crowbar for Women: An Ergonomic Approach

Authors: Prakash Chandra Dhara, Rupa Maity, Mousumi Chatterjee

Abstract:

Crowbars are used for gardening, and the same tools are used by both male and female gardeners, although the existing crowbars are not well suited to female users. The present study aimed to design, from an ergonomic viewpoint, a crowbar intended for use by women for gardening. The study was carried out on 50 women in different villages of Howrah district in West Bengal state. Different models of existing crowbars commonly used by the women were collected and evaluated by examining their shape and size. The problems of using the existing crowbar were assessed by direct observation during its operation. Musculoskeletal disorders of the subjects associated with crowbar use were evaluated by the modified Nordic questionnaire method. The anthropometric dimensions of the subjects, especially hand dimensions, were taken under standardized static conditions. Considering the problems of using the existing crowbars, some design concepts were developed, and accordingly three prototype models (P1, P2, P3) of the crowbar were prepared for the design of a modified crowbar for women. Psychophysical analysis of these prototypes was made by paired comparison tests, in which subjective preferences for different characteristics of the crowbar (e.g., length, weight, length and breadth of the blade, handle diameter, and position of the handle) were determined. From the results of the paired comparison tests and the percentile values of hand dimensions, a modified design of the crowbar was suggested. The prototype model P1 possessed more of the preferred characteristics of the tool than the other prototype models. In the final design, the weight of the tool and the length of the blade were reduced relative to the existing crowbar, and other dimensions were also changed. Two handles were suggested in the redesigned tool for better gripping and operation. The modified crowbar was evaluated by studying the body joint angles (wrist, shoulder and elbow) to assess the suitability of the design. It was concluded that the redesigned crowbar was suitable for women’s use.

Keywords: body dimension, crowbar, ergo-design, women, hand anthropometry

Procedia PDF Downloads 255
21917 Multidimensional Sports Spectators Segmentation and Social Media Marketing

Authors: B. Schmid, C. Kexel, E. Djafarova

Abstract:

Understanding consumers is elementary for practitioners in marketing. Consumers of sports events, the sports spectators, are a particularly complex consumer crowd. In order to identify and define their profiles, different segmentation approaches can be found in the literature, one of them being multidimensional segmentation. Unlike earlier models, multidimensional segmentation models capture the broad range of attitudes, behaviours, motivations and beliefs of sports spectators. Moreover, in sports there are some well-researched disciplines (e.g. football or North American sports) where consumer profiles and marketing strategies are elaborate, and others where no research at all can be found. For example, there is almost no research on athletics spectators. This paper explores the current state of research on sports spectator segmentation. An in-depth literature review provides the framework for a spectator segmentation in athletics. On this basis, additional potential consumer groups and implications for social media marketing will be explored. The findings are the basis for further research.

Keywords: multidimensional segmentation, social media, sports marketing, sports spectators segmentation

Procedia PDF Downloads 307
21916 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from the classical spatial econometrics approaches to the recent developments in spatial econometrics for count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for new possible directions in the processing of count data in a spatial hierarchical Bayesian econometric context.

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 594
21915 Rogue Waves Arising on the Standing Periodic Wave in the High-Order Ablowitz-Ladik Equation

Authors: Yanpei Zhen

Abstract:

The nonlinear Schrödinger (NLS) equation models wave dynamics in many physical problems related to fluids, plasmas, and optics. The standing periodic waves are known to be modulationally unstable, and rogue waves (localized perturbations in space and time) have been observed on their backgrounds in numerical experiments. The exact solutions for rogue waves arising on the periodic standing waves have been obtained analytically. It is natural to ask whether the rogue waves persist on the standing periodic waves in integrable discretizations of the NLS equation. We study the standing periodic waves in the semi-discrete integrable system modeled by the high-order Ablowitz-Ladik (AL) equation. The standing periodic wave of the high-order AL equation is expressed in terms of the Jacobi elliptic cn (cnoidal) function. The exact solutions are obtained by using separation of variables and a one-fold Darboux transformation. Since the cnoidal wave is modulationally unstable, rogue waves are generated on the periodic background.
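
For reference, in one common normalization the continuous NLS equation and the standard (first-order) Ablowitz-Ladik discretization can be written as below; the high-order AL equation studied in the paper is a higher member of the same integrable hierarchy, so the expressions are only a reference point, not the exact model used.

```latex
% Focusing NLS equation and its standard (first-order) Ablowitz-Ladik
% discretization, in one common normalization with unit lattice spacing.
\begin{align}
  i\,\psi_t + \psi_{xx} + 2\,|\psi|^{2}\psi &= 0, \\
  i\,\frac{d\psi_n}{dt} + \left(\psi_{n+1} - 2\psi_n + \psi_{n-1}\right)
    + |\psi_n|^{2}\left(\psi_{n+1} + \psi_{n-1}\right) &= 0 .
\end{align}
```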

Keywords: Darboux transformation, periodic wave, rogue wave, separation of variables

Procedia PDF Downloads 183
21914 Advancing Communication Theory in the Age of Digital Technology: Bridging the Gap Between Traditional Models and Emerging Platforms

Authors: Sidique Fofanah

Abstract:

This paper explores the intersection of traditional communication theories and modern digital technologies, analyzing how established models adapt to contemporary communication platforms. It examines the evolving nature of interpersonal, group, and mass communication within digital environments, emphasizing the role of social media, AI-driven communication tools, and virtual reality in reshaping communication paradigms. The paper also discusses the implications for future research and practice in communication studies, proposing an integrated framework that accommodates both classical and emerging theories.

Keywords: communication, traditional models, emerging platforms, digital media

Procedia PDF Downloads 26
21913 Mathematical Modeling of Carotenoids and Polyphenols Content of Faba Beans (Vicia faba L.) during Microwave Treatments

Authors: Ridha Fethi Mechlouch, Ahlem Ayadi, Ammar Ben Brahim

Abstract:

Given the importance of preserving polyphenols and carotenoids during thermal processing, this study investigates the variation of these two parameters in faba beans during microwave treatment at different power densities (1, 2, and 3 W/g) and then performs mathematical modeling, using non-linear regression analysis to evaluate the model constants. The variation of the carotenoid and polyphenol ratios of faba beans was measured, and the models were tested against the experimental results. Exponential models were found to be suitable to describe the variation of the carotenoid ratio (R² = 0.945, 0.927 and 0.946) and of the polyphenol ratio (R² = 0.931, 0.989 and 0.982) for power densities of 1, 2, and 3 W/g, respectively. The effect of the microwave power density Pd (W/g) on the model coefficient k was also investigated; the coefficient is highly correlated with power density (R² = 1) and can be expressed as a polynomial function.
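
As a minimal sketch of this kind of fit, the Python snippet below estimates the coefficient k of a one-parameter exponential model by non-linear regression and reports R²; the time/ratio values are illustrative placeholders, not the measured faba bean data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical carotenoid ratio C/C0 sampled during microwave treatment
# at one power density (values are illustrative only).
t     = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)          # min
ratio = np.array([1.00, 0.83, 0.70, 0.58, 0.49, 0.41, 0.35])

def exp_model(t, k):
    """Single-parameter exponential decay, ratio = exp(-k * t)."""
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(exp_model, t, ratio, p0=[0.1])
pred = exp_model(t, k_fit)
r2 = 1.0 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"k = {k_fit:.4f} 1/min, R^2 = {r2:.3f}")
```

Repeating the same fit for each power density would yield the k values whose dependence on Pd can then be described by a polynomial (e.g., with numpy.polyfit).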

Keywords: microwave treatment, power density, carotenoid, polyphenol, modeling

Procedia PDF Downloads 259
21912 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach

Authors: Godwin Chigozie Okpara

Abstract:

Using standard GARCH, EGARCH, and TARCH models on a daily (day-of-the-week) return series of 246 observations from the Nigerian stock market, this paper estimates the VaR of the model variants. The asymmetric return distribution and fat-tail phenomenon of financial time series were accounted for by estimating the models with normal, Student-t and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with Student-t innovation distribution can furnish a more accurate estimate of VaR. In light of this, we apply likelihood ratio tests of proportional failure rates to the VaR derived from the EGARCH model in order to determine its performance for short and long positions. The results show that, as alpha ranges from 0.05 to 0.005 for short positions, the failure rate significantly exceeds the prescribed quantiles, whereas for long positions there is no significant difference between the failure rate and the prescribed quantiles. This suggests that investors and portfolio managers in the Nigerian stock market hold long trading positions, i.e., they buy assets and are mainly concerned with when asset prices will fall. Specifically, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
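
A hedged sketch of this workflow in Python is shown below: it fits an EGARCH(1,1) model with Student-t innovations using the arch package, converts the conditional volatility into a one-day parametric VaR for a long position, and counts exceedances as in a Kupiec-type failure-rate check. The simulated returns, the constant-mean specification and the quantile scaling are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np
from arch import arch_model
from scipy.stats import t as student_t

# Simulated stand-in for the 246-day Nigerian stock market return series.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=246)          # percent-scale daily returns

am = arch_model(returns, mean='Constant', vol='EGARCH', p=1, o=1, q=1, dist='t')
res = am.fit(disp='off')

mu, nu = res.params['mu'], res.params['nu']
sigma = res.conditional_volatility

alpha = 0.05
# Quantile of the standardized (unit-variance) Student-t used by arch.
q = student_t.ppf(alpha, nu) * np.sqrt((nu - 2.0) / nu)
var_long = mu + sigma * q                          # one-day VaR, long position

failure_rate = np.mean(returns < var_long)
print(f"nominal tail prob. {alpha:.1%}, observed failure rate {failure_rate:.1%}")
```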

Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models

Procedia PDF Downloads 443
21911 Exchange Rate Forecasting by Econometric Models

Authors: Zahid Ahmad, Nosheen Imran, Nauman Ali, Farah Amir

Abstract:

The objective of the study is to forecast the US Dollar and Pak Rupee exchange rate using time series models. For this purpose, daily US Dollar/Pak Rupee exchange rates for the period January 1, 2007 - June 2, 2017 are employed. The data are divided into an in-sample set, used to estimate the models and generate forecasts, and an out-of-sample set, used to evaluate the exchange rate forecasts. The ADF and PP unit-root tests are used to verify that the time series has been made stationary. To forecast the exchange rate, ARIMA and GARCH models are applied. Among the different Autoregressive Integrated Moving Average (ARIMA) specifications, the best model is selected on the basis of selection criteria. Due to volatility clustering and the ARCH effect, a GARCH(1,1) model is also applied. The results of the analysis show that ARIMA(0,1,1) and GARCH(1,1) are the most suitable models to forecast the future exchange rate. Furthermore, the GARCH(1,1) model captures the non-constant conditional variance (volatility) of the exchange rate with good forecasting performance. This study is useful for researchers, policymakers, and businesses, as accurate and timely forecasting of the exchange rate supports their decision-making and helps them in devising their policies.
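
A compact sketch of this pipeline (stationarity check, ARIMA(0,1,1) fit, out-of-sample forecast) using Python's statsmodels is given below; the random-walk series, split size, and error metric are placeholders for the actual PKR/USD data and evaluation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

# Synthetic exchange-rate-like series (a random walk) standing in for the
# daily PKR/USD data, since the original observations are not included here.
rng = np.random.default_rng(1)
rate = pd.Series(100 + np.cumsum(rng.normal(0, 0.3, size=2500)))

train, test = rate[:-30], rate[-30:]          # in-sample / out-of-sample split

# Stationarity check on levels vs. first differences (ADF test).
print("ADF p-value, levels:     ", adfuller(train)[1])
print("ADF p-value, differences:", adfuller(train.diff().dropna())[1])

# ARIMA(0,1,1), the order selected in the paper via information criteria.
fit = ARIMA(train, order=(0, 1, 1)).fit()
forecast = fit.forecast(steps=len(test))
mae = np.mean(np.abs(forecast.values - test.values))
print(f"AIC = {fit.aic:.1f}, out-of-sample MAE = {mae:.3f}")
```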

Keywords: exchange rate, ARIMA, GARCH, PAK/USD

Procedia PDF Downloads 562
21910 Study on Flexible Diaphragm In-Plane Model of Irregular Multi-Storey Industrial Plant

Authors: Cheng-Hao Jiang, Mu-Xuan Tao

Abstract:

The rigid diaphragm model may cause errors in the calculation of internal forces due to neglecting the in-plane deformation of the diaphragm. This paper thus studies the effects of different diaphragm in-plane models (including in-plane rigid model and in-plane flexible model) on the seismic performance of structures. Taking an actual industrial plant as an example, the seismic performance of the structure is predicted using different floor diaphragm models, and the analysis errors caused by different diaphragm in-plane models including deformation error and internal force error are calculated. Furthermore, the influence of the aspect ratio on the analysis errors is investigated. Finally, the code rationality is evaluated by assessing the analysis errors of the structure models whose floors were determined as rigid according to the code’s criterion. It is found that different floor models may cause great differences in the distribution of structural internal forces, and the current code may underestimate the influence of the floor in-plane effect.

Keywords: industrial plant, diaphragm, calculating error, code rationality

Procedia PDF Downloads 140
21909 Structural Assessment of Low-Rise Reinforced Concrete Frames under Tsunami Loads

Authors: Hussain Jiffry, Kypros Pilakoutas, Reyes Garcia Lopez

Abstract:

This study examines the effect of tsunami loads on reinforced concrete (RC) frame buildings analytically. The impact of tsunami wave loads and waterborne objects is analyzed using a typical substandard full-scale two-story RC frame building tested as part of the EU-funded Ecoleader project. The building was subjected to shake table tests in the bare condition and subsequently strengthened using Carbon Fiber Reinforced Polymer (CFRP) composites and retested. Numerical models of the building in both the bare and the CFRP-strengthened conditions are calibrated in the DRAIN-3DX software to match the test results. To investigate the response to wave loads and impact forces, the numerical models are subjected to nonlinear dynamic analyses using force-time history input records. The analytical results are compared in terms of the displacements at the floors and at the 'impact point' of a boat. The results show that the roof displacement of the CFRP-strengthened building is reduced by 63% compared to the bare building. The results also indicate that strengthening only the mid-height of the impact column using CFRP is more efficient at reducing damage than strengthening other parts of the column. Alternative solutions to mitigate damage due to tsunami loads are suggested.

Keywords: tsunami loads, hydrodynamic load, impact load, waterborne objects, RC buildings

Procedia PDF Downloads 456
21908 Phase Behavior Modelling of Libyan Near-Critical Gas-Condensate Field

Authors: M. Khazam, M. Altawil, A. Eljabri

Abstract:

Fluid properties in states near the vapor-liquid critical region are the most difficult to measure and to predict with EoS models. The principal modelling difficulty is that near-critical property variations do not follow the same mathematics as at conditions far away from the critical region. The Libyan NC98 field in the Sirte basin is a typical example of a near-critical fluid, characterized by a high initial condensate-gas ratio (CGR) greater than 160 bbl/MMscf and a maximum liquid drop-out of 25%. The objective of this paper is to model the NC98 phase behavior with a proper selection of EoS parameters and to model reservoir depletion versus the gas cycling option using measured PVT data and EoS models. The outcome of our study revealed that, for an accurate gas and condensate recovery forecast during depletion, the most important PVT data to match are the gas-phase Z-factor and the C7+ fraction as functions of pressure. A reasonable match, within -3% error, was achieved for ultimate condensate recovery at an abandonment pressure of 1500 psia. The smooth transition from gas-condensate to volatile oil was fairly well simulated by the tuned PR-EoS, and the predicted gas-oil contact (GOC) was at approximately 14,380 ftss. The optimum gas cycling scheme, in order to maximize condensate recovery, should not be performed at pressures below 5700 psia. The contribution of condensate vaporization for such a field is marginal, within 8% to 14%, compared to gas-gas miscible displacement. Therefore, if a gas recycling scheme is to be considered for this field, it is recommended to start it at an early stage of field development.
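
As an illustration of the EoS machinery involved, the sketch below solves the Peng-Robinson compressibility cubic for a single pseudo-component in Python; the critical properties shown are methane-like placeholders, whereas the actual NC98 model would use the tuned multi-component characterization (including the C7+ split) discussed above.

```python
import numpy as np

R = 8.314  # J/(mol K)

def peng_robinson_z(T, P, Tc, Pc, omega):
    """Real roots of the Peng-Robinson compressibility cubic.

    The largest root is the vapour Z-factor, the smallest the liquid one;
    near the critical point the roots coalesce, which is the regime the
    abstract describes as hardest to match.
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.isreal(roots)].real)

# Illustrative single-component call (methane-like pseudo-properties).
print(peng_robinson_z(T=350.0, P=20e6, Tc=190.6, Pc=4.599e6, omega=0.011))
```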

Keywords: EoS models, gas-condensate, gas cycling, near critical fluid

Procedia PDF Downloads 318
21907 Probing Language Models for Multiple Linguistic Information

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, large-scale pre-trained language models have achieved state-of-the-art performance on a variety of natural language processing tasks. The word vectors produced by these language models can be viewed as dense encoded representations of natural language in text form. However, it is unknown how much linguistic information is encoded, and how. In this paper, we construct several probing tasks targeting multiple types of linguistic information to clarify the encoding capabilities of different language models, and we visualize the results. We first obtain word representations in vector form from different language models, including BERT, ELMo, RoBERTa and GPT. Classifiers with a small number of parameters, together with unsupervised tasks, are then applied to these word vectors to assess their capability to encode the corresponding linguistic information. The constructed probing tasks cover both semantic and syntactic aspects. The semantic aspect includes the ability of the model to understand semantic entities such as numbers, time, and characters, while the syntactic aspect includes the ability of the language model to understand grammatical structures such as dependency relations and reference relations. We also compare the encoding capabilities of different layers within the same language model to infer how linguistic information is encoded in the model.
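
The sketch below illustrates the probing idea in Python with a frozen BERT encoder and a small logistic-regression probe from scikit-learn; the layer index, token position, and the toy "is this word a number?" task are invented placeholders, not the probes used in the paper.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Frozen pre-trained encoder; the probe trains only on top of its vectors.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder   = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()

def embed(sentence, layer=8, token_idx=1):
    """Hidden state of one token at one layer (no gradients, frozen encoder)."""
    with torch.no_grad():
        out = encoder(**tokenizer(sentence, return_tensors="pt"))
    return out.hidden_states[layer][0, token_idx].numpy()

# Toy probe: does the representation of the first content word encode
# whether that word is a number? (Illustrative sentences and labels only.)
sentences = ["seven cats slept", "three dogs barked", "green cars passed",
             "nine birds sang", "old houses creaked", "two ships sailed"]
labels    = [1, 1, 0, 1, 0, 1]
X = np.stack([embed(s) for s in sentences])

probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy of the probe:", probe.score(X, labels))
```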

Keywords: language models, probing task, text representation, linguistic information

Procedia PDF Downloads 110
21906 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used for modeling time-to-event data in which censored cases exist. The logistic regression model, by contrast, is mostly applicable where the independent variables are numerical or nominal and the outcome variable is binary (dichotomous). Many researchers have reviewed the Cox and logistic regression models and their applications in different areas. In this work, the analysis is performed on secondary data (the SPSS breast cancer exercise data set, with a sample of 1121 women); the main objective is to show the difference in application between the Cox and logistic regression models based on factors associated with death from breast cancer. Some of the analysis (e.g., on lymph node status) was done manually, while SPSS software was used to analyze the rest of the data. This study found that the key difference in application is that the Cox regression model is used when one wishes to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. The two models also use different measures of association: the hazard ratio for Cox regression and the odds ratio for logistic regression. A similarity between the two models is that both can be applied to predict the outcome of a categorical variable, i.e., a variable that can take only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models can be applied in many other studies since they are suitable methods for analyzing such data, but the Cox regression model is the more widely recommended when follow-up time is available.
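
The contrast can be made concrete with a short Python sketch using lifelines for the Cox model and scikit-learn for logistic regression; the simulated data below (number of involved lymph nodes, follow-up time, event indicator) merely stand in for the SPSS breast cancer file, and the effect sizes are arbitrary.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# Simulated stand-in for the 1121-woman data set: the Cox model uses
# follow-up time and censoring (hazard ratios), the logistic model ignores
# time and only uses the event indicator (odds ratios).
rng = np.random.default_rng(7)
n = 1121
nodes = rng.poisson(3, n)                          # lymph nodes involved
time  = rng.exponential(60 / (1 + 0.15 * nodes))   # months of follow-up
event = (rng.uniform(size=n) < 0.6).astype(int)    # 1 = died, 0 = censored

df = pd.DataFrame({"nodes": nodes, "time": time, "event": event})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("hazard ratio per extra node:", np.exp(cph.params_["nodes"]))

logit = LogisticRegression().fit(df[["nodes"]], df["event"])
print("odds ratio per extra node:  ", np.exp(logit.coef_[0][0]))
```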

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
21905 Emancipation through the Inclusion of Civil Society in Contemporary Peacebuilding: A Case Study of Peacebuilding Efforts in Colombia

Authors: D. Romero Espitia

Abstract:

Research on peacebuilding has taken a critical turn into examining the neoliberal and hegemonic conception of peace operations. Alternative peacebuilding models have been analyzed, but the scholarly discussion fails to bring them together or form connections between them. The objective of this paper is to rethink peacebuilding by extracting the positive aspects of the various peacebuilding models, connecting them with the local context, and therefore promote emancipation in contemporary peacebuilding efforts. Moreover, local ownership has been widely labelled as one, if not the core principle necessary for a successful peacebuilding project. Yet, definitions of what constitutes the 'local' remain debated. Through a qualitative review of literature, this paper unpacks the contemporary conception of peacebuilding in nexus with 'local ownership' as manifested through civil society. Using Colombia as a case study, this paper argues that a new peacebuilding framework, one that reconsiders the terms of engagement between international and national actors, is needed in order to foster effective peacebuilding efforts in contested transitional states.

Keywords: civil society, Colombia, emancipation, peacebuilding

Procedia PDF Downloads 134
21904 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

The SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum time step accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because an FDTD program needs additional algorithms in order to be useful for engineering problems. Examples of these algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations over ultra-wide frequency ranges. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, an ideal cell size is sought so that the FDTD analysis agrees more closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The models are validated by comparing the electric field values and the currents in the components obtained with the FDTD method against analytical results using circuit parameters.
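
A bare-bones 1-D illustration of two of the ingredients named above (the Courant criterion tying the time step to the cell size, and a first-order Mur absorbing boundary) is sketched below in Python rather than Matlab; the grid size, source, and free-space materials are arbitrary choices, and the plain E-field update at a chosen cell is where a lumped-element (LE-FDTD) modification would be inserted.

```python
import numpy as np

c0 = 3e8                       # free-space wave speed (m/s)
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
nx, dx = 400, 1e-3             # number of E-field cells, cell size (m)
S = 0.99                       # Courant number; 1-D stability requires S <= 1
dt = S * dx / c0               # time step fixed by the Courant criterion

ez = np.zeros(nx)              # E-field samples at cell edges
hy = np.zeros(nx - 1)          # H-field samples at cell centres
ez_nm1 = 0.0                   # previous-step Ez next to the right boundary
mur = (c0 * dt - dx) / (c0 * dt + dx)

for n in range(1500):
    hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])           # H update (leapfrog)
    ez[1:-1] += dt / (eps0 * dx) * (hy[1:] - hy[:-1])    # E update
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)            # soft Gaussian source
    ez[-1] = ez_nm1 + mur * (ez[-2] - ez[-1])            # 1st-order Mur ABC (right end)
    ez_nm1 = ez[-2]
    # The left end (ez[0] fixed at 0) behaves as a PEC wall in this sketch.

print("late-time max |Ez| near the absorbing boundary:", np.abs(ez[-20:]).max())
```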

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 153
21903 Effect of Ultrasonic Assisted High Pressure Soaking of Soybean on Soymilk Properties

Authors: Rahul Kumar, Pavuluri Srinivasa Rao

Abstract:

This study investigates the effect of ultrasound-assisted high pressure (HP) treatment on the soaking characteristics of soybeans and the quality of the extracted soy milk. The soybean (variety) was subjected to sonication (US) at ambient temperature for 15 and 30 min, followed by HP treatment in the range of 200-400 MPa for dwell times of 5-10 min. The bean samples were also compared with HPP-only samples (200-400 MPa; 5-10 min), overnight-soaked samples (12-15 h) and thermally treated samples (100°C/30 min) followed by overnight soaking for 12-15 h. Rapid soaking within 40 min was achieved by the combined US-HPP treatment, reducing the soaking time by a factor of about 25 in comparison with overnight soaking or thermal treatment followed by soaking. Reducing the soaking time of soybeans is expected to suppress the development of the undesirable beany flavor of soy milk that develops during conventional soaking and milk extraction. The optimum moisture uptake by the sonicated and pressure-treated soybeans was 60-62% (wet basis), similar to that obtained after overnight soaking for 12-15 h or thermal treatment followed by overnight soaking. The pH of the soy milk was not much affected by the different US-HPP treatments or by overnight soaking, remaining in the range of 6.6-6.7, much like normal cow milk; for milk extracted from thermally treated soy samples, the pH was reduced to 6.2. Total soluble solids (TSS) were highest for the normal overnight-soaked samples, in the range of 10.3-10.6. For the HPP-treated soy milk, the TSS was reduced to 7.4, while sonication further reduced it to 6.2; TSS decreased with increasing ultrasonication time. A further reduction in TSS, to 2.3, was observed in soy milk produced from thermally treated samples following overnight soaking. Our results indicate that milk from thermally treated beans is less stable and more acidic, and that the combined treatment makes soaking very rapid compared to overnight soaking; hence milk productivity can be enhanced with less development of the undesirable beany flavor.

Keywords: beany flavor, high pressure processing, high pressure, soybean, soaking, milk, ultrasound, wet basis

Procedia PDF Downloads 256
21902 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of these applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
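
As a hedged, purely illustrative sketch of the data-driven approach, the Python snippet below trains a small neural network on lagged rainfall and water-level features to predict hourly Q and then maps the predictions onto alert/warning/danger classes; the synthetic data, the 3-4 h lags, and the threshold values are all invented placeholders rather than the Selangor River quantities.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 8753                                                     # hourly records, one year
rain  = rng.gamma(0.8, 2.0, n)                               # upstream rainfall (mm/h)
level = 1.0 + 0.05 * np.convolve(rain, np.ones(6), "same")   # water level (m)
q     = 20 + 8 * np.roll(rain, 3) + 15 * level + rng.normal(0, 2, n)  # stream flow

X = np.column_stack([np.roll(rain, 3), np.roll(rain, 4), level])[6:]  # lagged inputs
y = q[6:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10),
                                   max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
q_hat = model.predict(X_te)
print("correlation R on the test set:", np.corrcoef(q_hat, y_te)[0, 1].round(3))

# Hypothetical severity thresholds applied to the predicted flow.
levels = np.select([q_hat >= 100, q_hat >= 80, q_hat >= 60],
                   ["danger", "warning", "alert"], default="normal")
print("hours flagged as danger:", int((levels == "danger").sum()))
```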

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 248
21901 Uncertainty in Near-Term Global Surface Warming Linked to Pacific Trade Wind Variability

Authors: M. Hadi Bordbar, Matthew England, Alex Sen Gupta, Agus Santoso, Andrea Taschetto, Thomas Martin, Wonsun Park, Mojib Latif

Abstract:

Climate models generally simulate long-term reductions in the Pacific Walker Circulation with increasing atmospheric greenhouse gases. However, over two recent decades (1992-2011) there was a strong intensification of the Pacific trade winds that is linked with a slowdown in global surface warming. Using large ensembles of multiple climate models forced by increasing atmospheric greenhouse gas concentrations and starting from different ocean and/or atmospheric initial conditions, we reveal very diverse 20-year trends in the tropical Pacific climate, associated with considerable uncertainty in the globally averaged surface air temperature (SAT) in each model ensemble. This result suggests low confidence in our ability to accurately predict SAT trends over a 20-year timescale from external forcing alone. We show, however, that this uncertainty can be reduced when the initial oceanic state is adequately known and well represented in the model. Our analyses suggest that internal variability in the Pacific trade winds can mask the anthropogenic signal over a 20-year time frame and drive transitions between periods of accelerated global warming and temporary slowdowns.

Keywords: trade winds, walker circulation, hiatus in the global surface warming, internal climate variability

Procedia PDF Downloads 268
21900 Analysis of Moving Loads on Bridges Using Surrogate Models

Authors: Susmita Panda, Arnab Banerjee, Ajinkya Baxy, Bappaditya Manna

Abstract:

The design of short- to medium-span high-speed bridges in critical locations is an essential aspect of vehicle-bridge interaction. Due to the dynamic interaction between the moving load and the bridge, mathematical models or finite element computations become time-consuming. Thus, to reduce the computational effort, a universal approximator based on an artificial neural network (ANN) has been used to evaluate the dynamic response of the bridge. Data set generation and training of the surrogate models were carried out on the results obtained from mathematical modeling. Further, the robustness of the surrogate model was investigated, showing an error of less than 10% with respect to the conventional methods. Additionally, the dependency of the dynamic response of the bridge on various load and bridge parameters has been highlighted through a parametric study.
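
The flavour of such a surrogate is sketched below in Python: an MLP maps load and bridge parameters to a peak response, and its percentage error against the reference solver is reported. Here the "expensive" vehicle-bridge interaction solver is replaced by a cheap placeholder function purely so that the example runs; in the study those samples come from the mode-superposition model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def placeholder_solver(v, L, mu):
    """Stand-in for the expensive moving-load solver (illustrative only)."""
    return (1 + 0.3 * np.sin(np.pi * v / 80) ** 2) * L**2 / (200 * (1 + mu))

rng = np.random.default_rng(5)
v  = rng.uniform(20, 120, 2000)    # vehicle speed (km/h)
L  = rng.uniform(10, 40, 2000)     # span length (m)
mu = rng.uniform(0.1, 0.5, 2000)   # vehicle-to-bridge mass ratio
X  = np.column_stack([v, L, mu])
y  = placeholder_solver(v, L, mu)  # peak dynamic response samples

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)

err = np.abs(ann.predict(X_te) - y_te) / y_te * 100
print(f"mean surrogate error: {err.mean():.2f}%")
```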

Keywords: artificial neural network, mode superposition method, moving load analysis, surrogate models

Procedia PDF Downloads 100
21899 Quantitative Changes in Biofilms of a Seawater Tubular Heat Exchanger Subjected to Electromagnetic Fields Treatment

Authors: Sergio Garcia, Alfredo Trueba, Luis M. Vega, Ernesto Madariaga

Abstract:

Biofilm adhesion is one of the most important costs for industrial plants worldwide that use water for cooling heat exchangers or are otherwise in contact with water. This study evaluated the effect of electromagnetic fields on biofilms in tubular heat exchangers using seawater cooling. The results showed a reduction of up to 40% in biofilm thickness compared to the untreated control tubes. The presence of organic matter was reduced by 75%, the inorganic matter was reduced by 87%, and 53% of the dissolved solids were eliminated. The biofilm thermal conductivity in the treated tube was reduced by 53% compared to the control tube. The hardness of the effluent during the experimental period was decreased by 18% in the treated tubes compared with the control tubes. Our results show that electromagnetic field treatment has great potential for removing biofilms in heat exchangers.

Keywords: biofilm, heat exchanger, electromagnetic fields, seawater

Procedia PDF Downloads 191
21898 Design Fractional-Order Terminal Sliding Mode Control for Synchronization of a Class of Fractional-Order Chaotic Systems with Uncertainty and External Disturbances

Authors: Shabnam Pashaei, Mohammadali Badamchizadeh

Abstract:

This paper presents a new fractional-order terminal sliding mode control for synchronization of two different fractional-order chaotic systems with uncertainty and external disturbances. A fractional-order integral type nonlinear switching surface is presented. Then, using the Lyapunov stability theory and sliding mode theory, a fractional-order control law is designed to synchronize two different fractional-order chaotic systems. Finally, a simulation example is presented to illustrate the performance and applicability of the proposed method. Based on numerical results, the proposed controller ensures that the states of the controlled fractional-order chaotic response system are asymptotically synchronized with the states of the drive system.

Keywords: terminal sliding mode control, fractional-order calculus, chaotic systems, synchronization

Procedia PDF Downloads 411
21897 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams. The contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, magnetic particles coated with Cyanex272, or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid), are evaluated for possible application in the extraction of Thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated in a batch system, in which adsorption isotherm and adsorption kinetic studies of Th(IV) on the Cyanex272-coated nanoparticles were carried out. The factors influencing Th(IV) adsorption (initial pH value, contact time, adsorbent mass, and initial Th(IV) concentration) were investigated and described in detail. The MACS process adsorbent showed the best results for fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of the Th(IV) was removed within the first 2 hours, and the time required to reach adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg.g-1. The adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order and intra-particle diffusion models; the kinetic studies showed that the adsorption followed a pseudo-second-order model, indicating that chemical adsorption was the rate-limiting step.
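
A hedged sketch of this equilibrium and kinetics fitting in Python is given below; the Ce/qe and t/qt values are illustrative numbers only, not the measured Th(IV) data, and the pseudo-second-order fit uses the usual linearized form t/qt = 1/(k2*qe^2) + t/qe.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative equilibrium data: residual concentration Ce vs. adsorbed qe.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])          # mg/L at equilibrium
qe = np.array([10.5, 19.8, 28.7, 36.9, 42.8, 46.1])        # mg/g adsorbed

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

for name, model, p0 in [("Langmuir", langmuir, (50, 0.1)),
                        ("Freundlich", freundlich, (5, 2))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0)
    r2 = 1 - np.sum((qe - model(Ce, *popt))**2) / np.sum((qe - qe.mean())**2)
    print(name, "params:", np.round(popt, 3), " R^2 =", round(r2, 4))

# Pseudo-second-order kinetics via the linearized form t/qt = 1/(k2*qe^2) + t/qe.
t  = np.array([10, 30, 60, 90, 120, 140], dtype=float)     # min
qt = np.array([18.0, 32.0, 40.0, 44.0, 46.0, 46.5])        # mg/g at time t
slope, intercept = np.polyfit(t, t / qt, 1)
print("qe from kinetics:", round(1 / slope, 2), "mg/g")
```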

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 339
21896 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. The multiplicative weight update method is used to combine the predictions of multiple models in order to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC-Archive to look for patterns that label unseen scans as either benign or malignant. The models are then combined in a multiplicative weight update algorithm, which takes into account the precision and accuracy of each model on successive guesses in order to weight its predictions; the weighted guesses are then analyzed together to try to obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model an accuracy of 85.30%, and the Logistic Regression model an accuracy of 79.09%; using Multiplicative Weight Update, the algorithm achieved an accuracy of 72.27%. The final conclusion was that there was a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model, rather than a Multiplicative Weight Update system, would be the best option for this problem. This is possibly because Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only the two categories used in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
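
A minimal sketch of the weighting scheme is shown below in Python; a tabular dataset and a random forest stand in for the ISIC images and the CNN, so the snippet only illustrates how weighted-majority voting with multiplicative updates works, not the paper's exact models or accuracies.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)       # binary stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

experts = [LogisticRegression(max_iter=5000).fit(X_tr, y_tr),
           SVC().fit(X_tr, y_tr),
           RandomForestClassifier(random_state=0).fit(X_tr, y_tr)]

eta = 0.5                                         # learning rate of the MWU rule
w = np.ones(len(experts))
correct = 0
for x, target in zip(X_te, y_te):
    preds = np.array([e.predict(x.reshape(1, -1))[0] for e in experts])
    # Weighted majority vote: class 1 wins if it carries at least half the weight.
    vote = 1 if np.sum(w * preds) >= np.sum(w) / 2 else 0
    correct += int(vote == target)
    # Multiplicatively penalize the experts that were wrong on this example.
    w *= np.where(preds == target, 1.0, 1.0 - eta)

print("MWU ensemble accuracy:", correct / len(y_te))
print("final expert weights:  ", np.round(w / w.sum(), 3))
```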

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 79
21895 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes

Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas

Abstract:

The current PhD project comprises CFD modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in the emergency cooling systems of reactors. The emergency cooling system consists of inclined horizontal pipes immersed in a tank of subcooled water. In the case of an accident, the water level in the core decreases, steam enters the emergency pipes and, due to the subcooled water around the pipes, this steam starts to condense. These horizontal pipes act as a strong heat sink responsible for a quick depressurization of the reactor core when an accident happens. This project was defined in order to model all the processes occurring in the emergency cooling system. The main focus of the project is on the detection of different morphologies such as annular flow, stratified flow, slug flow and plug flow. This is an ongoing project which started one year ago in the Fluid Dynamics department of Helmholtz-Zentrum Dresden-Rossendorf (HZDR). At HZDR, mostly in cooperation with ANSYS, different models have been developed for modeling multiphase flows. The inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling the small-scaled dispersed gas phase. The AIAD (Algebraic Interfacial Area Density) model was developed for detection of the local morphology and the corresponding switching between morphologies. The most recent model, GENTOP, combines both concepts and is able to simulate co-existing large-scaled (continuous) and small-scaled (polydispersed) structures. All these models have been validated for adiabatic cases without any phase change. Therefore, the starting point of the current PhD project is to use the available models and integrate phase transition and wall condensation models into them. In order to simplify the problem of condensation inside horizontal tubes, three steps have been defined. The first step is the investigation of condensation inside a horizontal tube considering only direct contact condensation (DCC) and neglecting wall condensation; the inlet of the pipe is therefore taken to be annular flow, and the AIAD model is used to detect the interface. The second step is the extension of the model to consider wall condensation as well, which is closer to reality; in this step, the inlet is pure steam and, due to wall condensation, a liquid film forms near the wall, leading to annular flow. The last step will be the modeling of the different morphologies occurring inside the tube during condensation using the GENTOP model, by which the dispersed phase can be considered and simulated. Finally, the simulation results will be validated against experimental data that will also be available at HZDR.

Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection

Procedia PDF Downloads 304
21894 Chemometric Estimation of Inhibitory Activity of Benzimidazole Derivatives by Linear Least Squares and Artificial Neural Networks Modelling

Authors: Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević, Lidija R. Jevrić, Stela Jokić

Abstract:

The subject of this paper is to correlate the antibacterial behavior of benzimidazole derivatives with their molecular characteristics using a chemometric QSAR (Quantitative Structure-Activity Relationships) approach. QSAR analysis has been carried out on the inhibitory activity of benzimidazole derivatives against Staphylococcus aureus. The data were processed by linear least squares (LLS) and artificial neural network (ANN) procedures. The LLS mathematical models have been developed as calibration models for prediction of the inhibitory activity. The quality of the models was validated by the leave-one-out (LOO) technique and by using an external data set. High agreement between experimental and predicted inhibitory activities indicated the good quality of the derived models. These results are part of the CMST COST Action No. CM1306 "Understanding Movement and Mechanism in Molecular Machines".
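
As a schematic of the LLS/LOO workflow in Python, the snippet below fits an ordinary least-squares model to synthetic descriptor data and computes both the calibration R² and the leave-one-out cross-validated Q²; the descriptors, activities, and coefficients are all placeholders for the benzimidazole data set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-in: 3 molecular descriptors for 25 hypothetical compounds
# regressed against a simulated inhibitory activity.
rng = np.random.default_rng(11)
n_compounds = 25
X = rng.normal(size=(n_compounds, 3))                  # e.g. logP, MR, pKa
true_coefs = np.array([0.8, -0.4, 0.3])
y = X @ true_coefs + 2.0 + rng.normal(0, 0.1, n_compounds)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                 # calibration R^2

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}, Q^2 (LOO) = {q2:.3f}")
```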

Keywords: antibacterial, benzimidazoles, chemometric, QSAR

Procedia PDF Downloads 316
21893 Radiation Emission from Ultra-Relativistic Plasma Electrons in Short-Pulse Laser Light Interactions

Authors: R. Ondarza-Rovira, T. J. M. Boyd

Abstract:

Intense femtosecond laser light incident on over-critical density plasmas has been shown to emit a prolific number of high-order harmonics of the driver frequency, with spectra characterized by power-law decays P_m ~ m^(-p), where m denotes the harmonic order and p the spectral decay index. When the laser pulse is p-polarized, plasma effects modify the harmonic spectrum, weakening the so-called universal decay with p = 8/3 to p = 5/3 or below. In this work, appeal is made to a single-particle radiation model in support of the predictions from particle-in-cell (PIC) simulations. Using this numerical technique, we further show that the emission radiated by electrons that are relativistically accelerated by the laser field inside the plasma after being expelled into vacuum (the so-called Brunel electrons) is characterized not only by the plasma line but also by ultraviolet harmonic orders described by the 5/3 decay index. Results obtained from these simulations suggest that for ultra-relativistic light intensities the spectral decay index is further reduced, with p now in the range 2/3 ≤ p ≤ 4/3. This reduction is indicative of a transition from the regime where Brunel-induced plasma radiation influences the spectrum to one dominated by bremsstrahlung emission from the Brunel electrons.
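
For illustration, estimating the decay index p from a harmonic spectrum assumed to follow P_m ~ m^(-p) amounts to a straight-line fit in log-log space, as in the short Python sketch below (the synthetic spectrum is generated with p = 5/3 plus noise, not taken from the PIC simulations).

```python
import numpy as np

# Synthetic harmonic spectrum with decay index p = 5/3 and multiplicative noise.
m = np.arange(5, 60)
rng = np.random.default_rng(2)
P = m ** (-5.0 / 3.0) * np.exp(rng.normal(0, 0.1, m.size))

# Linear fit of log P against log m; the negative slope is the decay index p.
slope, _ = np.polyfit(np.log(m), np.log(P), 1)
print("fitted decay index p =", round(-slope, 3))   # close to 5/3
```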

Keywords: ultra-relativistic, laser-plasma interactions, high-order harmonic emission, radiation, spectrum

Procedia PDF Downloads 464