Search results for: linear complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4825


3715 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples for the experimental studies of the n-AMg6/C60 nanocomposite were obtained by grinding AMg6 polycrystalline alloy with 0.3 wt % of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500-micron agglomerates of nanoparticles. The X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite modification interferes with the process of recrystallization at grain boundaries. In the samples of the n-AMg6/C60 nanocomposite, the load curve was measured: the dependence of the mechanical stress σ on the sample strain ε during a multi-cycle load-unloading process up to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample was destroyed; the failure of the sample was brittle. Microhardness was measured before and after the destruction of the sample. It was found that the load-unloading process led to an increase in its microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM complex. In the n-AMg6/C60 nanocomposite, the velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elasticity coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. Studies of the nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading of the sample were carried out with the spectral method.
At arbitrary values of the sample strain (up to its breakage), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency f = 5 MHz of the acoustic wave was measured. Based on the results of these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, for the development of new techniques for nondestructive testing of structural materials using methods of nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
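The nonlinear acoustic parameter extracted by the spectral method is conventionally obtained from the quadratic growth of the second harmonic; a standard textbook relation (generic notation, not taken from the paper itself) is:

```latex
% Second-harmonic amplitude A_2 grows with the square of the
% fundamental amplitude A_1 over propagation distance x:
A_2 = \frac{\beta\, k^2 x}{8}\, A_1^2
\qquad\Longrightarrow\qquad
\beta = \frac{8\, A_2}{k^2 x\, A_1^2},
```

where k = 2πf/c is the wavenumber of the fundamental at f = 5 MHz, so β follows from the slope of the measured A₂ versus A₁² line.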

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 134
3714 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
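For reference, the constant-parameter forms of the two isotherms discussed above can be sketched as follows (a minimal illustration; the function and symbol names are our own, and a pressure-varying fit would re-estimate K, q_m and C at each pressure rather than keeping them fixed):

```python
def langmuir_uptake(P, K, q_m):
    """Langmuir isotherm: monolayer uptake at pressure P, with
    equilibrium constant K and monolayer capacity q_m."""
    return q_m * K * P / (1.0 + K * P)

def bet_uptake(P, P0, C, q_m):
    """BET isotherm: multilayer uptake, where P0 is the saturation
    pressure and C the BET constant."""
    x = P / P0  # relative pressure
    return q_m * C * x / ((1.0 - x) * (1.0 + (C - 1.0) * x))

# A pressure-varying extension evaluates these with K(P), q_m(P), C(P)
# estimated pointwise, e.g. by flexible least squares.
```

For example, `langmuir_uptake(P=1.0, K=1.0, q_m=2.0)` gives half the monolayer capacity, since K·P = 1 corresponds to 50% coverage.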

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 217
3713 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative consequences, for example, undermining public support for the governments they target, disturbing the peace of a protected environment underwritten by the state, and preventing individuals from contributing to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is a top priority of the government in every country. This research aims to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy of existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs) and two derived transfer functions (HETTFs), arising from the convolution of the HOTTFs, were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and Test Error were the forecast prediction criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined.
The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
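The primary transfer functions named above have simple closed forms; a brief sketch follows (our own illustrative code, with the derived SATLINS-TANH shown as a composition, since the paper's convolution construction is not detailed in the abstract):

```python
import math

def satlins(n):
    """Symmetric saturating linear transfer function: clips n to [-1, 1]."""
    return max(-1.0, min(1.0, n))

def tanh_tf(n):
    """Hyperbolic tangent transfer function."""
    return math.tanh(n)

def tansig(n):
    """MATLAB-style tan-sigmoid; algebraically equal to tanh(n)."""
    return 2.0 / (1.0 + math.exp(-2.0 * n)) - 1.0

def satlins_tanh(n):
    """Illustrative SATLINS-TANH combination (shown here as a
    composition; the paper's convolution-based derivation may differ)."""
    return satlins(math.tanh(n))
```

Note that TANSIG is mathematically identical to TANH; in practice it exists as a faster-to-evaluate formulation of the same curve.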

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 148
3712 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs

Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers

Abstract:

High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.

Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling

Procedia PDF Downloads 142
3711 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings

Authors: Jude K. Safo

Abstract:

Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts by TransE, TransR and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.

Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics

Procedia PDF Downloads 57
3710 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams

Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas

Abstract:

Web-expanded steel beams provide an easy and economical solution for systems having longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until the introduction of sinusoidal web-expanded beams, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams. In the present research, the optimum design of this new generation of beams, namely sinusoidal web-expanded beams, is carried out, and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the beams' entire response to increasing values of external loading until they lose their load-carrying capacity. An FE model of each specimen used in the experimental studies is built. These models are used to simulate the experimental work, to verify the test results, and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of the beams.

Keywords: steel structures, web-expanded beams, angelina beam, optimum design, failure modes, finite element analysis

Procedia PDF Downloads 266
3709 Reverse Logistics Information Management Using Ontological Approach

Authors: F. Lhafiane, A. Elbyed, M. Bouchoum

Abstract:

The Reverse Logistics (RL) process is considered a complex and dynamic network involving many stakeholders, such as suppliers, manufacturers, warehouses, retailers, and customers; this complexity is inherent in such a process due to the lack of perfect knowledge or to conflicting information. Ontologies, on the other hand, can be considered an approach to overcoming the problem of sharing knowledge and communication among the various reverse logistics partners. In this paper, we propose a semantic representation based on a hybrid architecture for building the ontologies in a bottom-up way; this method facilitates the semantic reconciliation between the heterogeneous information systems (ICT) that support reverse logistics processes and product data.

Keywords: reverse logistics, information management, heterogeneity, ontologies, semantic web

Procedia PDF Downloads 479
3708 Adaptive Multiple Transforms Hardware Architecture for Versatile Video Coding

Authors: T. Damak, S. Houidi, M. A. Ben Ayed, N. Masmoudi

Abstract:

The Versatile Video Coding (VVC) standard is currently under development by the Joint Video Exploration Team (JVET). An Adaptive Multiple Transforms (AMT) approach was announced; it is based on different transform modules that provide efficient coding. However, the AMT solution raises several issues, especially regarding the complexity of the selected set of transforms. This can be an important issue, particularly for future industrial adoption. This paper proposes an efficient hardware implementation of the most commonly used transform in the AMT approach: the DCT-II. The developed circuit is adapted to different block sizes and can reach a minimum frequency of 192 MHz, allowing an optimized execution time.
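As a behavioral reference for the transform the circuit implements, the unscaled DCT-II of a length-N block is sketched below; note that VVC hardware uses scaled integer approximations and fast factorizations rather than this direct floating-point form:

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a length-N block:
    X_k = sum_n x_n * cos(pi * (2n + 1) * k / (2N)).
    Direct O(N^2) evaluation, as a reference model only."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]
```

A constant block maps entirely to the DC coefficient X_0, which is a quick sanity check on any hardware implementation.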

Keywords: adaptive multiple transforms, AMT, DCT II, hardware, transform, versatile video coding, VVC

Procedia PDF Downloads 131
3707 Simulation and Hardware Implementation of Data Communication Between CAN Controllers for Automotive Applications

Authors: R. M. Kalayappan, N. Kathiravan

Abstract:

In the automobile industry, the Controller Area Network (CAN) is widely used to reduce system complexity and to handle inter-task communication. Therefore, this paper proposes a hardware implementation of data frame communication from one controller to another. The CAN data frames and protocols are explained in depth here. The data frames are transferred without any collision or corruption. A simulation is built in the Keil µVision software to display the data transfer between transmitter and receiver over CAN. An ARM7 microcontroller is used to transfer data between the controllers in real time. The data transfer is verified using a CRO.

Keywords: controller area network (CAN), automotive electronic control unit, CAN 2.0, industry

Procedia PDF Downloads 386
3706 Vibration Absorption Strategy for Multi-Frequency Excitation

Authors: Der Chyan Lin

Abstract:

Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances in vibration absorption for multi-frequency systems were made. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large-amplitude vibration for every new excitation frequency. In general, the usable range for vibration absorption can be greatly reduced as a result. A frequency-modulated harmonic excitation is a commonly seen multi-frequency excitation example: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1 + α sin(δt)). It is known that f(t) has a series expansion given by the Bessel functions of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎ = (ωₙ ± 2kδ)/(α ∓ 1), k ∈ Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k ∈ Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. We show, using N identical absorbers for N ≫ 1, that (a) there is a cluster of N+1 natural frequencies around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, we also show that the bandwidth of the VA performance increases with N.
The derivations of the clustering and bandwidth widening effect will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
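The Bessel-series structure of such an excitation can be checked numerically. The sketch below uses the textbook phase-modulated form cos(ωt + β sin(δt)) and the Jacobi-Anger expansion; the abstract's f(t) = cos(ϖ(t)t) differs slightly, so this is an illustrative analogue, not the paper's exact signal:

```python
import math

def bessel_j(k, b, m=256):
    """Integer-order Bessel function J_k(b) via the full-period integral
    J_k(b) = (1/2pi) * int_{-pi}^{pi} cos(k*t - b*sin(t)) dt,
    computed with the periodic trapezoid rule (spectrally accurate here)."""
    return sum(math.cos(k * t - b * math.sin(t))
               for t in (2.0 * math.pi * i / m - math.pi
                         for i in range(m))) / m

def fm_signal(t, w, beta, d):
    """Phase-modulated tone cos(w*t + beta*sin(d*t))."""
    return math.cos(w * t + beta * math.sin(d * t))

def fm_series(t, w, beta, d, K=15):
    """Truncated Jacobi-Anger expansion:
    cos(w*t + beta*sin(d*t)) = sum_k J_k(beta) * cos((w + k*d) * t),
    making explicit the infinity of sideband frequencies w + k*d."""
    return sum(bessel_j(k, beta) * math.cos((w + k * d) * t)
               for k in range(-K, K + 1))
```

The expansion shows why a single fixed-frequency absorber fails for this class of excitation: forcing energy appears at every sideband ω + kδ.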

Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber

Procedia PDF Downloads 139
3705 The Relationship between Land Use Factors and Feeling of Happiness at the Neighbourhood Level

Authors: M. Moeinaddini, Z. Asadi-Shekari, Z. Sultan, M. Zaly Shah

Abstract:

Happiness can be related to everything that provides a feeling of satisfaction or pleasure. This study considers the relationship between land use factors and the feeling of happiness at the neighbourhood level. Land use variables (beautiful and attractive neighbourhood design, availability and quality of shopping centres, sufficient recreational spaces and facilities, and sufficient daily service centres) are used as independent variables, and the happiness score is used as the dependent variable in this study. In addition to the land use variables, socio-economic factors (gender, race, marital status, employment status, education, and income) are also considered as independent variables. This study uses the Oxford happiness questionnaire to estimate the happiness scores of more than 300 people living in six neighbourhoods. The neighbourhoods were selected randomly from Skudai neighbourhoods in Johor, Malaysia. The land use data were obtained by adding related questions to the Oxford happiness questionnaire. The strength of the relationships in this study is found using generalised linear modelling (GLM). The findings of this research indicate that an increase in the feeling of happiness is correlated with increasing income, a more beautiful and attractive neighbourhood design, and sufficient shopping centres, recreational spaces, and daily service centres. The results show that all land use factors in this study have a significant relationship with happiness, but only income, among the socio-economic factors, affects happiness significantly. Therefore, land use factors can affect happiness in Skudai more than socio-economic factors.

Keywords: neighbourhood land use, neighbourhood design, happiness, socio-economic factors, generalised linear modelling

Procedia PDF Downloads 139
3704 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium

Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab

Abstract:

This paper studies a BaSO4 nanophosphor doped with europium, in which mainly the concentration of the rare earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol %) has been varied. A comparative study of the thermoluminescence (TL) properties of the given nanophosphor has also been carried out against a well-known standard dosimetry material, TLD-100. First, a number of samples were successfully prepared by the chemical co-precipitation method. The whole set was then compared to the established standard material (TLD-100) for its TL sensitivity. BaSO4:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, compared to the standard TLD-100, BaSO4:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve for all prepared samples has also been studied over a wide range of doses, i.e., 10 Gy to 2 kGy, for gamma radiation. Almost all the BaSO4:Eu samples showed remarkable linearity over a broad range of doses, which is a characteristic feature of a fine TL dosimeter; the response remained linear even beyond 1 kGy for gamma radiation. Thus, the given nanophosphor has been successfully optimised with respect to the dopant concentration to achieve its highest TL sensitivity. Further, the comparative study with the standard material revealed that the optimised sample shows astonishingly better TL sensitivity and a phenomenal linear response curve over an incredibly wide range of doses of gamma radiation (Co-60) compared to the standard TLD-100, which makes the optimised BaSO4:Eu quite promising as an efficient gamma radiation dosimeter. Lastly, the present phosphor has been optimised for its annealing temperature to acquire the best results, while its fading and reusability properties were also studied.

Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence

Procedia PDF Downloads 417
3703 Turing Pattern in the Oregonator Revisited

Authors: Elragig Aiman, Dreiwi Hanan, Townley Stuart, Elmabrook Idriss

Abstract:

In this paper, we reconsider the analysis of the Oregonator model. We highlight an error in this analysis which leads to an incorrect depiction of the parameter region in which diffusion-driven instability is possible. We believe that the cause of the oversight is the complexity of stability analyses based on eigenvalues and the dependence on parameters of the matrix minors appearing in the stability calculations. We regenerate the parameter space where Turing patterns can be seen, and we use the common Lyapunov function (CLF) approach, which is numerically reliable, to further confirm the dependence of the results on the magnitudes of the diffusion coefficients.
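For context, the standard conditions for diffusion-driven (Turing) instability of a generic two-species reaction-diffusion system are given below in textbook form, with Jacobian entries f_u, f_v, g_u, g_v at the homogeneous steady state and diffusion coefficients D_u, D_v; these are generic quantities, not the Oregonator-specific matrices whose minors are discussed above:

```latex
% Stability of the homogeneous steady state without diffusion:
f_u + g_v < 0, \qquad f_u g_v - f_v g_u > 0,
% Instability switched on by diffusion:
D_v f_u + D_u g_v > 2\sqrt{D_u D_v \left(f_u g_v - f_v g_u\right)} > 0.
```

The last inequality is where parameter-dependent minors enter, and where a sign error propagates directly into the depicted Turing region.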

Keywords: diffusion driven instability, common Lyapunov function (CLF), Turing pattern, positive-definite matrix

Procedia PDF Downloads 346
3702 Development of a Sensitive Electrochemical Sensor Based on Carbon Dots and Graphitic Carbon Nitride for the Detection of 2-Chlorophenol and Arsenic

Authors: Theo H. G. Moundzounga

Abstract:

Arsenic and 2-chlorophenol are priority pollutants that pose serious health threats to humans and ecology. An electrochemical sensor, based on graphitic carbon nitride (g-C₃N₄) and carbon dots (CDs), was fabricated and used for the determination of arsenic and 2-chlorophenol. The g-C₃N₄/CDs nanocomposite was prepared via a microwave irradiation heating method and was drop-dried on the surface of a glassy carbon electrode (GCE). Transmission electron microscopy (TEM), X-ray diffraction (XRD), photoluminescence (PL), Fourier transform infrared spectroscopy (FTIR), and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS) were used for the characterization of the structure and morphology of the nanocomposite. Electrochemical characterization was done by electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV). The electrochemical behavior of arsenic and 2-chlorophenol on different electrodes (GCE, CDs/GCE, and g-C₃N₄/CDs/GCE) was investigated by differential pulse voltammetry (DPV). The results demonstrated that the g-C₃N₄/CDs/GCE significantly enhanced the oxidation peak current of both analytes. The analyte detection sensitivity was greatly improved, suggesting that this new modified electrode has great potential in the determination of trace levels of arsenic and 2-chlorophenol. Experimental conditions which affect the electrochemical response of arsenic and 2-chlorophenol were studied; the oxidation peak currents displayed a good linear relationship to concentration for 2-chlorophenol (R² = 0.948, n = 5) and arsenic (R² = 0.9524, n = 5), with a linear range from 0.5 to 2.5 μM for both 2-CP and arsenic and detection limits of 2.15 μM and 0.39 μM, respectively. The modified electrode was used to determine arsenic and 2-chlorophenol in spiked tap and effluent water samples by the standard addition method, and the results were satisfying. According to the measurements, the new modified electrode is a good alternative as a chemical sensor for the determination of other phenols.

Keywords: electrochemistry, electrode, limit of detection, sensor

Procedia PDF Downloads 126
3701 Performance Comparison of Prim’s and Ant Colony Optimization Algorithm to Select Shortest Path in Case of Link Failure

Authors: Rimmy Yadav, Avtar Singh

Abstract:

Ant Colony Optimization (ACO) is a promising modern approach to combinatorial optimization. Here, ACO is applied to finding the shortest path during communication link failure. In this paper, a performance comparison of Prim's algorithm and the ACO algorithm is made. By comparing time complexity and program execution time as the set of parameters, we demonstrate the favourable performance of ACO in finding an excellent solution to the shortest path problem during communication link failure.
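Prim's algorithm, one of the two methods compared, can be sketched in its standard heap-based form as follows (illustrative code; the paper's link-failure adaptation and its network data are not reproduced here):

```python
import heapq

def prim_mst(graph, start):
    """Prim's algorithm: grow a minimum spanning tree outward from
    `start`. `graph` maps each node to a list of (neighbor, weight)
    pairs. Returns (total_weight, list of (u, v, weight) tree edges)."""
    visited = {start}
    # Frontier edges, ordered by weight via a min-heap.
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    total, tree = 0, []
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # stale edge into the tree; skip it
        visited.add(v)
        total += w
        tree.append((u, v, w))
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return total, tree
```

With a binary heap this runs in O(E log V); ACO instead trades such a deterministic bound for stochastic exploration guided by pheromone trails, which is what the execution-time comparison above measures.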

Keywords: ant colony optimization, link failure, prim’s algorithm, shortest path

Procedia PDF Downloads 381
3700 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures

Authors: Hamid Kazemi, Abbasali Sadeghi

Abstract:

Assessment of the seismic behavior of frame structures is carried out chiefly to evaluate loss of life and financial damage. New structural seismic behavior assessment methods have been proposed, so it is necessary to define a formulation as a damage index, by which the amount of damage can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility and different heights (2, 5, 8, and 12 stories), with regular geometry and a simple rectangular plan, were assumed and designed. Three existing groups of damage indices were studied: local indices (Drift, Maximum Roof Displacement, Banon Failure, Kinematic, Banon Normalized Cumulative Rotation, Cumulative Plastic Rotation and Ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, Ductility and Base Shear), and story indices (Banon Failure and Inter-story Rotation). The parameters necessary for these damage indices were calculated under the effect of far-fault ground motion records by non-linear dynamic time history analysis. Finally, the damage indices were prioritized based on which gave more conservative values, i.e., higher estimated damage rates. The results show that the selected damage index has an important effect on the estimated damage state. The failure, drift, and Rosenblueth damage indices are the most conservative local, story, and global damage indices, respectively.

Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame

Procedia PDF Downloads 280
3699 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset. A total of 189 features were used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy-score metrics. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 21
3698 Artificial Intelligence Technologies Used in Healthcare: Its Implication on the Healthcare Workforce and Applications in the Diagnosis of Diseases

Authors: Rowanda Daoud Ahmed, Mansoor Abdulhak, Muhammad Azeem Afzal, Sezer Filiz, Usama Ahmad Mughal

Abstract:

This paper discusses important aspects of AI in the healthcare domain. The growth of healthcare data, in both size and complexity, opens more room for artificial intelligence applications. Our focus is to review the main AI methods within the scope of the healthcare domain. The review shows that recommendations for diagnosis, recommendations for treatment, patient engagement, and administrative tasks are the key applications of AI in healthcare. Understanding the potential of AI methods in healthcare would benefit healthcare practitioners and improve patient outcomes.

Keywords: AI in healthcare, technologies of AI, neural network, future of AI in healthcare

Procedia PDF Downloads 98
3697 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity, and angles of attack have to be investigated and proved to be safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Indeed, it is impossible to analyze a model over the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. In industry, the technique of meshing the flight envelope is therefore commonly used: for each point of the flight envelope, simulation of the associated model shows whether or not the specifications are satisfied. In order to perform fast, comprehensive, and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics by taking uncertainties over the flight envelope into account. In this paper, the LFR models are developed using speed and altitude as varying parameters, and are built from several flight conditions expressed in terms of speeds and altitudes. This method has gained great interest among aeronautical companies, which see a promising future for it in modeling, and particularly in the design and certification of control laws. This research paper focuses on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest flight dynamics certification level; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory.
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 XCG/weight conditions, and thus the whole flight envelope, using a user-friendly graphical user interface developed during this study. The LFR models are then analyzed using an interval analysis method based on Lyapunov functions, as well as the 'stability and robustness analysis' toolbox. The results were presented as graphs, which offered good readability and were easily exploitable. The weakness of this method lies in its relatively long computation time, about four hours for the entire flight envelope.
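For contrast with the LFR approach, the flight-envelope meshing practice described above can be sketched as a brute-force grid check. The toy 2x2 short-period model and its coefficients below are invented for illustration and are not Cessna Citation X data; for a 2x2 system x' = Ax, stability is equivalent to trace(A) < 0 and det(A) > 0:

```python
# Hedged sketch of gridding the flight envelope in speed and altitude and
# checking stability point by point. All coefficients are illustrative.

def short_period_matrix(V, h):
    """Toy parameter-dependent state matrix A(V, h); numbers are made up."""
    rho = 1.225 * (1.0 - 2.2e-5 * h) ** 4.26    # crude density model
    q = 0.5 * rho * V * V                        # dynamic pressure
    z_alpha = -0.002 * q                         # illustrative derivatives
    m_q = -0.0015 * q
    m_alpha = -0.003 * q
    return [[z_alpha, 1.0], [m_alpha, m_q]]

def is_stable(A):
    """2x2 continuous-time stability test: trace < 0 and det > 0."""
    trace = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return trace < 0.0 and det > 0.0

# Mesh the envelope in speed (m/s) and altitude (m), as industry practice does.
unstable_points = [(V, h)
                   for V in range(60, 220, 20)
                   for h in range(0, 12000, 2000)
                   if not is_stable(short_period_matrix(V, h))]
all_stable = not unstable_points
```

The LFR formulation replaces this point-by-point sweep with a single parameter-dependent model, which is what removes the risk of missing a worst case between grid points.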

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 338
3696 Hydromagnetic Linear Instability Analysis of Giesekus Fluids in Taylor-Couette Flow

Authors: K. Godazandeh, K. Sadeghy

Abstract:

In the present study, the effect of a magnetic field on the hydrodynamic instability of Taylor-Couette flow between two concentric rotating cylinders has been numerically investigated. First, the base flow was solved using the continuity equation, the Cauchy equations (including the Lorentz force), and the constitutive equations of a viscoelastic model known as the Giesekus model. Small normal-mode perturbations were superimposed on the base flow, and the unsteady perturbation equations were derived accordingly. Neglecting non-linear terms, the resulting generalized eigenvalue problem was solved using a pseudo-spectral method based on Chebyshev polynomials. The objective of the calculations is to study the effect of magnetic fields on the onset of the first mode of instability (the axisymmetric mode) for different dimensionless parameters of the flow. The results show that the stability picture is strongly influenced by the magnetic field. As the magnetic field increases, it initially destabilizes the flow, an effect that turns into stabilization as the field increases further; there is therefore a critical magnetic number (Hartmann number) for the instability of Taylor-Couette flow. The effect of the magnetic field is also more dominant in large gaps. Moreover, the magnetic field shows a more considerable effect on stability at higher Weissenberg numbers (higher elasticity), while changes in the mobility factor play no dominant role in the effect of suction and injection on the flow's instability.
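The pseudo-spectral building block named in the abstract can be sketched directly: a Chebyshev collocation differentiation matrix, built from the standard collocation-point formulas and verified here on a polynomial it must differentiate exactly. The perturbation eigenvalue problem would then be assembled from such matrices; this sketch stops at the matrix itself:

```python
import math

# Chebyshev collocation differentiation matrix on [-1, 1]: entries are
# D[i][j] = (c_i/c_j) (-1)^(i+j) / (x_i - x_j) off the diagonal, with the
# diagonal fixed by the negative-sum trick so that D annihilates constants.

def cheb(N):
    """Chebyshev points x_j = cos(pi j / N) and the differentiation matrix."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

x, D = cheb(8)
u = [xi ** 2 for xi in x]                      # u(x) = x^2, so u'(x) = 2x
du = [sum(D[i][j] * u[j] for j in range(len(x))) for i in range(len(x))]
max_err = max(abs(d - 2 * xi) for d, xi in zip(du, x))
```

Spectral differentiation is exact (to rounding) for polynomials of degree up to N, which is why `max_err` is at machine-precision level here.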

Keywords: magnetic field, Taylor-Couette flow, Giesekus model, pseudo spectral method, Chebyshev polynomials, Hartmann number, Weissenberg number, mobility factor

Procedia PDF Downloads 376
3695 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites

Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal

Abstract:

The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the rule of mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber-reinforced composites used in dental applications. The composites were fabricated from a light-curing resin (with and without silica nanoparticles) and modified or non-modified fibers. Composite samples were divided into eight groups with ten specimens each. The bending (flexural) modulus of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%), and nanoparticle incorporation in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single-edge-notch beam (SENB) method and scanning electron microscopy (SEM) were used; SEM was also used to show the nanoparticle dispersion in the resin. Experimental bending moduli for composites made with both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with the linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with the ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in the accuracy of its predictions.
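The ROM estimate used above is a one-line formula: the longitudinal modulus of a unidirectional composite is the volume-weighted average of the constituent moduli. The moduli below are generic illustrative values (glass-like fibres in a dental-type resin), not the paper's measurements:

```python
# Rule of mixtures (Voigt upper bound) for the longitudinal modulus:
#   E_c = V_f * E_f + (1 - V_f) * E_m

def rule_of_mixtures(E_fiber, E_matrix, V_fiber):
    """Volume-weighted average of fiber and matrix moduli (GPa in, GPa out)."""
    return V_fiber * E_fiber + (1.0 - V_fiber) * E_matrix

# Illustrative moduli: ~70 GPa fibres in a ~3 GPa resin, evaluated at the
# three fibre volume fractions studied (41%, 33%, 25%):
estimates = {V_f: rule_of_mixtures(70.0, 3.0, V_f) for V_f in (0.41, 0.33, 0.25)}
```

The estimate grows linearly with fibre fraction, which is why deviations from linearity in the measured moduli point to interface or dispersion problems.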

Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures

Procedia PDF Downloads 260
3694 Implementation of Iterative Algorithm for Earthquake Location

Authors: Hussain K. Chaiel

Abstract:

Developments in digital signal processing (DSP) and microelectronics technology reduce the complexity of iterative algorithms that require large numbers of arithmetic operations. Virtex Field Programmable Gate Arrays (FPGAs) are programmable silicon devices that offer an important solution for addressing the needs of high-performance DSP designers. In this work, Virtex-7 FPGA technology is used to implement an iterative algorithm for estimating the earthquake location. Simulation results show that an implementation based on the RAMB36E1 block RAM and DSP48E1 slices of the Virtex-7 family reduces the number of clock cycles required. This enables the algorithm to be used for earthquake prediction.
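The kind of iterative algorithm such an FPGA pipeline implements can be sketched as a Geiger-style Gauss-Newton epicentre search. This is a generic textbook formulation, since the abstract does not spell out its algorithm, and the station layout, wave speed, and event below are synthetic:

```python
import math

# Gauss-Newton (Geiger's method) for epicentre (x, y) and origin time t0
# from P-wave arrival times, assuming a uniform wave speed. All numbers
# are illustrative, not from the paper.

V = 6.0  # assumed uniform P-wave speed, km/s

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for 3x3 systems."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def locate(stations, arrivals, guess, iters=25):
    x, y, t0 = guess
    for _ in range(iters):
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for (sx, sy), t in zip(stations, arrivals):
            d = math.hypot(x - sx, y - sy)
            res = t - (t0 + d / V)               # travel-time residual
            row = [(x - sx) / (d * V), (y - sy) / (d * V), 1.0]
            for i in range(3):                    # accumulate J^T J and J^T r
                b[i] += row[i] * res
                for j in range(3):
                    A[i][j] += row[i] * row[j]
        dx, dy, dt = solve3(A, b)                 # normal-equations step
        x, y, t0 = x + dx, y + dy, t0 + dt
    return x, y, t0

# Synthetic event at (12, -7) km with origin time 1.5 s, four stations:
stations = [(0.0, 0.0), (30.0, 5.0), (10.0, 25.0), (-15.0, 10.0)]
true = (12.0, -7.0, 1.5)
arrivals = [true[2] + math.hypot(true[0] - sx, true[1] - sy) / V
            for sx, sy in stations]
x, y, t0 = locate(stations, arrivals, guess=(6.25, 10.0, 0.0))
```

Each iteration is a small fixed sequence of multiply-accumulates and one 3x3 solve, which is exactly the kind of loop that maps well onto DSP48E1 slices.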

Keywords: DSP, earthquake, FPGA, iterative algorithm

Procedia PDF Downloads 372
3693 Solving SPDEs by Least Squares Method

Authors: Hassan Manouzi

Abstract:

We present in this paper a useful strategy for solving stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the higher-order Wick product and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least-squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.

Keywords: least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method

Procedia PDF Downloads 404
3692 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study analyzes climate change in Albania and its potential effects on cereal yields. First, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modeling cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between the factors, so the problem of multicollinearity does not arise. Machine learning methods, such as random forest, are used to predict the response of cereal yield to climatic and other variables, and random forest showed high accuracy compared to the other statistical models. We found that increases in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production affect production positively. Our results show that the random forest method is an effective and versatile machine learning method for cereal yield prediction compared to the other two methods.
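The multiple-linear-regression step described above amounts to solving the normal equations. The following sketch fits a yield model on two of the predictors (temperature and fertilizer) using synthetic data, not the Albanian 1960-2021 series; with noiseless data the true coefficients are recovered, including a negative temperature effect like the one reported:

```python
# Ordinary least squares via the normal equations (X^T X) beta = X^T y,
# solved by Gauss-Jordan elimination. Data are synthetic illustrations.

def ols(X, y):
    """Fit y ~ X beta; X rows include an explicit intercept column."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(k):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][k] / M[i][i] for i in range(k)]

# Synthetic truth: yield = 40 - 1.2 * temperature + 0.8 * fertilizer
rows = [(t, f) for t in range(10, 20) for f in range(5, 15)]
X = [[1.0, t, f] for t, f in rows]            # intercept column first
y = [40.0 - 1.2 * t + 0.8 * f for t, f in rows]
intercept, b_temp, b_fert = ols(X, y)
```

Lasso differs from this only in adding an L1 penalty that shrinks weak coefficients toward zero, which is useful when predictors such as arable land and land under cereal production overlap.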

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 75
3691 "IS Cybernetics": An Idea to Base the International System Theory upon the General System Theory and Cybernetics

Authors: Petra Suchovska

Abstract:

The spirit of post-modernity remains chaotic and obscure. Geopolitical rivalries rage at ever more extreme levels, and the ability of the intellectual community to explain the entropy of global affairs has been diminishing. The Western-led idea of globalisation imposed upon the world no longer seems to promise a bright future for human progress, and its architects have lost much of their global control as strong non-Western cultural entities develop new forms of post-modern establishments. The growing cultural misunderstanding and mistrust are expressions of political impotence in dealing with the inner contradictions of the contemporary phenomena (capitalism, economic globalisation) that embrace global society. The drivers and effects of global restructuring must be understood in the context of systems and principles that reflect the true complexity of society. The purpose of this paper is to set out some ideas about how cybernetics can contribute to understanding the structure of the international system and to analyzing possible world futures. "IS cybernetics" would apply systems thinking and cybernetic principles in IR in order to analyze and handle the complexity of social phenomena from a global perspective. It would be, for now, a subfield of IR concerned with applying theories and methodologies from cybernetics and the systems sciences, offering concepts and tools for addressing problems holistically, and it would bring order to the complex relations between the disciplines that IR touches upon. One of its tasks would be to map, measure, tackle, and find the principles of the dynamics and structure of the social forces that influence human behaviour and consequently cause political, technological, and economic structural reordering, forming and reforming the international system.
The task of "IS cyberneticists" would be to understand the control mechanisms that govern the operation of international society (and its sub-systems in their interconnection) and only then to suggest better ways to operate these mechanisms at sub-levels such as the cultural, political, technological, and religious. "IS cybernetics" would also strive to capture the mechanisms of social-structural change over time, which would open space for syntheses between IR and historical sociology. With the cybernetic distinction between first-order studies of observed systems and second-order studies of observing systems, IS cybernetics would also provide a unifying epistemological, methodological, and conceptual framework for multilateralism and the theory of multiple modernities.

Keywords: cybernetics, historical sociology, international system, systems theory

Procedia PDF Downloads 212
3690 Improved Performance of AlGaN/GaN HEMTs Using N₂/NH₃ Pretreatment before Passivation

Authors: Yifan Gao

Abstract:

Owing to their high breakdown field, high saturation drift velocity, and a 2DEG with high density and mobility, AlGaN/GaN HEMTs have been widely used in high-frequency and high-power applications. Acquiring higher power often means higher breakdown voltage and higher drain current, and surface leakage current is usually the key issue affecting breakdown voltage and power performance. In this work, we have performed an in-situ N₂/NH₃ pretreatment before passivation to suppress the surface leakage and enhance device performance. The AlGaN/GaN HEMT used in this work was grown on a 3-in. SiC substrate; its epitaxial structure consists of a 3.5-nm GaN cap layer, a 25-nm Al₀.₂₅GaN barrier layer, a 1-nm AlN layer, a 400-nm i-GaN layer, and a buffer layer. To analyze the mechanism of the N-based pretreatment, XPS measurements were performed. They show the intensity of Ga-O bonds decreasing and the intensity of Ga-N bonds increasing: with the supply of N, dangling bonds on the surface are reduced by the formation of Ga-N bonds, reducing the surface states. The surface states have a great influence on the leakage current, and improved surface states give the device a better off-state. After the N-based pretreatment, the breakdown voltage of the device with Lₛ𝒹 = 6 μm increased from 93 V to 170 V, an increase of 82.8%. Moreover, for HEMTs with Lₛ𝒹 of 6 μm, we obtain a peak output power (Pout) of 12.79 W/mm, a power-added efficiency (PAE) of 49.84%, and a linear gain of 20.2 dB at 60 V under 3.6 GHz. Compared with the reference 6-μm device, Pout is increased by 16.5%, while PAE and the linear gain also increase slightly. These experimental results indicate that N₂/NH₃ pretreatment before passivation is an attractive approach to power performance enhancement.

Keywords: AlGaN/GaN HEMT, N-based pretreatment, output power, passivation

Procedia PDF Downloads 302
3689 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as these datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning, and their rulebooks reflect not only the games' weight (complexity) but also the language properties of each genre and subgenre. Board games are a captivating realm where strategy, competition, and creativity converge, and beyond the excitement of gameplay they also spark the art of word creation. Word games like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
The designers and creators, in turn, produce rulebooks in which they include their joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 English rulebooks from all types of modern board games, whether language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on its score on BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, the difficulty, and the board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to exploit this creative yet strict text genre in order to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 73
3688 Best Resource Recommendation for a Stochastic Process

Authors: Likewin Thomas, M. V. Manoj Kumar, B. Annappa

Abstract:

The aim of this study was to develop an artificial neural network recommendation model for an online process using the complexity of load, the performance, and the average servicing time of the resources. The proposed model investigates resource performance using the stochastic gradient descent method for learning a ranking function. A probabilistic cost function is implemented to identify the optimal θ values (load) on each resource, and based on this result a recommendation is made for the resource best suited to perform the currently executing task. Test results on the CoSeLoG project are presented, with an accuracy of 72.856%.
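The learning step named in the abstract, stochastic gradient descent fitting a polynomial (servicing-time versus load) model, can be sketched as follows. The cost curve, learning rate, and step count are illustrative, not the CoSeLoG configuration:

```python
import random

# Stochastic gradient descent on squared error for a polynomial model
# y ~ sum_k w_k * x^k: one randomly drawn sample per update step.
# Data and hyperparameters are synthetic illustrations.

def sgd_polyfit(data, degree=2, lr=0.1, steps=20000, seed=0):
    """Fit polynomial coefficients by single-sample SGD."""
    rng = random.Random(seed)
    w = [0.0] * (degree + 1)
    for _ in range(steps):
        x, y = data[rng.randrange(len(data))]    # draw one (load, time) pair
        feats = [x ** k for k in range(degree + 1)]
        err = sum(wk * fk for wk, fk in zip(w, feats)) - y
        w = [wk - lr * err * fk for wk, fk in zip(w, feats)]
    return w

# Synthetic servicing-time curve: time = 0.5 + 0.3*load + 0.2*load^2
data = [(x / 10.0, 0.5 + 0.3 * (x / 10.0) + 0.2 * (x / 10.0) ** 2)
        for x in range(11)]
w = sgd_polyfit(data)
worst = max(abs(sum(wk * x ** k for k, wk in enumerate(w)) - y)
            for x, y in data)
```

Once such a cost curve is learned per resource, recommending a resource reduces to ranking the predicted servicing times at the current load.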

Keywords: ADALINE, neural network, gradient descent, process mining, resource behaviour, polynomial regression model

Procedia PDF Downloads 371
3687 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance and to execute demand response strategies more effectively, enabling higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is comparatively easy yet effective at larger geographic scales; in smart micro-grids, the lower available grid flexibility makes accurate prediction more critical for demand response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a smart micro-grid. Three short-term load forecasting techniques, i.e., linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of gain is introduced in this paper, which innovatively serves as an indicator of short-term prediction capability and time-span consistency. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
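The simplest of the three techniques compared, linear regression, can be sketched for the 24-hour-ahead model: predict the load from the load one day earlier. The hourly profile below is synthetic and noiseless (so the fit is near-exact), standing in for the GreenCom household data, which are not reproduced here:

```python
import math

# 24-hour-ahead load forecasting by simple linear regression on the
# one-day-lagged load. The load series is a synthetic daily cycle plus
# a slow drift, purely for illustration.

def fit_line(x, y):
    """Closed-form simple linear regression y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Two weeks of synthetic hourly load (kW) with a daily cycle and drift.
load = [5.0 + 2.0 * math.sin(2 * math.pi * t / 24) + 0.001 * t
        for t in range(24 * 14)]
lagged = load[:-24]                # predictor: load 24 h earlier
target = load[24:]                 # target: load now
a, b = fit_line(lagged, target)
mae = sum(abs(a + b * xi - yi)
          for xi, yi in zip(lagged, target)) / len(target)
```

On real household data the residual error is of course non-zero, and it is this absolute forecast error, together with training time, that the three techniques are compared on.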

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 451
3686 A Further Insight to Foaming in Anaerobic Digester

Authors: Ifeyinwa Rita Kanu, Thomas Aspray, Adebayo J. Adeloye

Abstract:

As a result of the ambiguity and complexity surrounding anaerobic digester foaming, various researchers have made efforts to understand the process so as to proffer a solution that can be applied universally rather than being site-specific. All attempts, ranging from experimental analysis to comparative reviews of other processes, have so far failed to explain explicitly the conditions and process of foaming in anaerobic digesters. Drawing on the available knowledge of foam formation and relating it to the anaerobic digester process and its operating conditions, this study presents a succinct and enhanced understanding of foaming in anaerobic digesters, and introduces a simple, novel method for identifying the onset of foaming based on the analysis of historical data from a field-scale system.

Keywords: anaerobic digester, foaming, biogas, surfactant, wastewater

Procedia PDF Downloads 433