Search results for: linear multistep methods


16710 Vehicle Routing Problem Considering Alternative Roads under Triple Bottom Line Accounting

Authors: Onur Kaya, Ilknur Tukenmez

Abstract:

In this study, we consider vehicle routing problems on networks with alternative direct links between nodes, and we analyze a multi-objective problem considering the financial, environmental and social objectives in this context. In real life, there may exist several alternative direct roads between two nodes, and these roads might differ in terms of their lengths and durations. For example, a road might be shorter than another but might require a longer time due to traffic and speed limits. Similarly, some toll roads might be shorter or faster but require additional payment, leading to higher costs. We consider such alternative links in our problem and develop a mixed integer linear programming model that determines which alternative link to use between two nodes, in addition to determining the optimal routes for different vehicles, depending on the model objectives and constraints. We consider minimum cost routing as the financial objective for the company, minimizing CO2 emissions and gas usage as the environmental objectives, and optimizing driver working conditions/working hours and minimizing the risks of accidents as the social objectives. With these objective functions, we aim to determine which routes and which alternative links should be used, in addition to the speed choice on each link. We discuss the results of the developed vehicle routing models and compare their results depending on the system parameters.
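
As a hedged illustration of the alternative-link idea (not the authors' full model), the sketch below selects one of several alternative links on each leg of a fixed route by minimizing a weighted sum of cost and CO2 emissions with a small binary program; the legs, costs, emissions and weights are all invented, and PuLP is used simply as a convenient MILP front end.

```python
import pulp

# Hypothetical legs of one fixed route; each leg offers alternative direct links
# (cost in currency units, CO2 in kg). All numbers are made up for illustration.
legs = {
    "depot_to_A": [{"cost": 10, "co2": 8}, {"cost": 14, "co2": 5}],  # e.g. toll road vs. longer free road
    "A_to_B":     [{"cost": 7,  "co2": 9}, {"cost": 9,  "co2": 6}],
    "B_to_depot": [{"cost": 12, "co2": 7}, {"cost": 12, "co2": 10}],
}
w_cost, w_co2 = 1.0, 0.5   # assumed weights combining the financial and environmental objectives

prob = pulp.LpProblem("alternative_link_selection", pulp.LpMinimize)
x = {(leg, k): pulp.LpVariable(f"x_{leg}_{k}", cat="Binary")
     for leg, alts in legs.items() for k in range(len(alts))}

# Exactly one alternative link must be chosen on every leg of the route.
for leg, alts in legs.items():
    prob += pulp.lpSum(x[leg, k] for k in range(len(alts))) == 1

# Weighted-sum objective over cost and emissions of the chosen links.
prob += pulp.lpSum(x[leg, k] * (w_cost * alt["cost"] + w_co2 * alt["co2"])
                   for leg, alts in legs.items() for k, alt in enumerate(alts))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (leg, k), var in sorted(x.items()):
    if var.value() == 1:
        print(f"{leg}: use alternative link {k}")
```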

Keywords: vehicle routing, alternative links between nodes, mixed integer linear programming, triple bottom line accounting

Procedia PDF Downloads 407
16709 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory

Authors: Damir Latypov

Abstract:

A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, the matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N².³⁷³). Instead of exponential, as in the case of sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.

Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory

Procedia PDF Downloads 154
16708 Analysis of Attention to the Confucius Institute from Domestic and Foreign Mainstream Media

Authors: Wei Yang, Xiaohui Cui, Weiping Zhu, Liqun Liu

Abstract:

The rapid development of the Confucius Institute is attracting more and more attention from mainstream media around the world. Mainstream media plays a large role in public information dissemination and public opinion. This study presents efforts to analyze the correlation and functional relationship between domestic and foreign mainstream media by analyzing the amount of reports on the Confucius Institute. Three kinds of correlation calculation methods, the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC), and the Kendall rank correlation coefficient (KCC), were applied to analyze the correlations among mainstream media from three regions: the mainland of China; Hong Kong and Macao (the two special administrative regions of China, denoted as SARs); and overseas countries excluding China, such as the United States, England, and Canada. Further, the paper measures the functional relationships among the regions using a regression model. The experimental analyses found high correlations among mainstream media from the different regions. Additionally, we found that there is a linear relationship between the mainstream media of overseas countries and those of the SARs by analyzing the amount of reports on the Confucius Institute, based on a data set obtained by crawling the websites of 106 mainstream media during the years 2004 to 2014.
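
A minimal sketch of the three correlation measures and the linear functional relationship described above, using SciPy; the yearly report counts here are invented stand-ins for the crawled data set of 106 mainstream media websites (2004-2014).

```python
import numpy as np
from scipy import stats

# Hypothetical yearly report counts on the Confucius Institute for two regions.
overseas = np.array([12, 18, 25, 31, 40, 52, 61, 70, 83, 95, 110])
sars     = np.array([ 5,  9, 11, 16, 21, 24, 30, 36, 41, 47,  55])

pcc, _ = stats.pearsonr(overseas, sars)    # Pearson correlation coefficient (PCC)
scc, _ = stats.spearmanr(overseas, sars)   # Spearman rank correlation (SCC)
kcc, _ = stats.kendalltau(overseas, sars)  # Kendall rank correlation (KCC)
print(f"PCC={pcc:.3f}, SCC={scc:.3f}, KCC={kcc:.3f}")

# Linear functional relationship between the two regions: sars ~ slope * overseas + intercept.
slope, intercept, r, p, se = stats.linregress(overseas, sars)
print(f"sars ~ {slope:.3f} * overseas + {intercept:.3f} (R^2={r**2:.3f}, p={p:.2g})")
```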

Keywords: mainstream media, Confucius institute, correlation analysis, regression model

Procedia PDF Downloads 318
16707 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Besides, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters will be performed by using data coming from the Italian market over a six-year period (2007-2012). Then, we will perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model will be deduced from standard tests, which highlight a good fit of the simulated values.
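
A minimal sketch of the deterministic component described above: a long-term trend plus Fourier terms for the yearly, weekly and daily periodicities, calibrated by least squares on synthetic hourly data. The actual study fits Italian market data (2007-2012), uses non-linear regression, and models the residual with an ARMA-GARCH process, none of which is reproduced here.

```python
import numpy as np

hours = np.arange(2 * 8760)                        # two years of hourly observations (toy)
rng = np.random.default_rng(0)
load = (1000 + 0.01 * hours                        # long-term trend
        + 150 * np.cos(2 * np.pi * hours / 8760)   # seasonal (yearly) component
        + 60 * np.cos(2 * np.pi * hours / 168)     # weekly component
        + 80 * np.cos(2 * np.pi * hours / 24)      # daily component
        + rng.normal(0, 25, hours.size))           # stochastic residual (toy)

def fourier_design(t, periods, harmonics=2):
    """Design matrix: intercept, trend, and sin/cos pairs for each period."""
    cols = [np.ones_like(t, dtype=float), t.astype(float)]
    for p in periods:
        for k in range(1, harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t / p))
            cols.append(np.cos(2 * np.pi * k * t / p))
    return np.column_stack(cols)

X = fourier_design(hours, periods=(8760, 168, 24))
beta, *_ = np.linalg.lstsq(X, load, rcond=None)    # least-squares calibration
residual = load - X @ beta                          # would be modelled stochastically (e.g. ARMA-GARCH)
print("in-sample RMSE of deterministic part:", round(float(np.sqrt(np.mean(residual ** 2))), 2))
```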

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 395
16706 Behavior of Composite Timber-Concrete Beam with CFRP Reinforcement

Authors: O. Vlcek

Abstract:

The paper deals with current issues in the research of advanced methods to increase the reliability of traditional timber structural elements. It analyses the issue of strengthening bent timber beams, such as ceiling beams in old (historical) buildings, with an additional concrete slab in combination with externally bonded fibre-reinforced polymer. The study evaluates the deflection of a selected group of timber beams with a concrete slab and additional CFRP reinforcement using different calculation methods and observes the differences in their results. An elastic calculation method and evaluation with FEM analysis software were used.

Keywords: timber-concrete composite, strengthening, fibre-reinforced polymer, theoretical analysis

Procedia PDF Downloads 315
16705 The Asymmetric Proximal Support Vector Machine Based on Multitask Learning for Classification

Authors: Qing Wu, Fei-Yan Li, Heng-Chang Zhang

Abstract:

Multitask learning support vector machines (SVMs) have recently attracted increasing research attention. Given several related tasks, single-task learning methods train each task separately and ignore the inner cross-relationship among tasks. However, multitask learning can capture the correlation information among tasks and achieve better performance by training all tasks simultaneously. In addition, the asymmetric squared loss function can better improve the generalization ability of the models on asymmetrically distributed data. In this paper, we first make two assumptions on the relatedness among tasks and propose two multitask learning proximal support vector machine algorithms, named MTL-a-PSVM and EMTL-a-PSVM, respectively. MTL-a-PSVM seeks a trade-off between the maximum expectile distance for each task model and the closeness of each task model to the general model. As an extension of MTL-a-PSVM, EMTL-a-PSVM can select appropriate kernel functions for shared information and private information. Besides, two corresponding special cases, named MTL-PSVM and EMTL-PSVM, are proposed by analyzing the asymmetric squared loss function; they can be easily implemented by solving linear systems. Experimental analysis on three classification datasets demonstrates the effectiveness and superiority of the proposed multitask learning algorithms.
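
As a hedged illustration of the remark that these proximal SVM variants reduce to solving linear systems, the sketch below implements a plain single-task proximal SVM (in the style of Fung and Mangasarian) on synthetic data; the multitask MTL-a-PSVM / EMTL-a-PSVM formulations and the asymmetric squared loss of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(+1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])        # labels in {-1, +1}

nu = 1.0                                          # regularization trade-off
D = np.diag(y)
H = D @ np.hstack([X, -np.ones((X.shape[0], 1))]) # H = D [X, -e]

# Proximal SVM: minimize (nu/2)||H z - e||^2 + (1/2)||z||^2, so (H'H + I/nu) z = H'e.
z = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / nu, H.T @ np.ones(X.shape[0]))
w, gamma = z[:-1], z[-1]

pred = np.sign(X @ w - gamma)                     # classify with sign(w.x - gamma)
print("training accuracy:", (pred == y).mean())
```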

Keywords: multitask learning, asymmetric squared loss, EMTL-a-PSVM, classification

Procedia PDF Downloads 134
16704 How Much the Role of Fertilizers Management and Wheat Planting Methods on Its Yield Improvement?

Authors: Ebrahim Izadi-Darbandi, Masoud Azad, Masumeh Dehghan

Abstract:

In order to study the effects of nitrogen and phosphorus management and wheat sowing method on wheat yield, two experiments were performed as factorials based on a completely randomized design with three replications at the Research Farm, Faculty of Agriculture, Ferdowsi University of Mashhad, Iran in 2009. In the first experiment, nitrogen application rates (100 kg ha-1, 200 kg ha-1, 300 kg ha-1), phosphorus application rates (100 kg ha-1, 200 kg ha-1) and two levels of their application methods (broadcast and band) were studied. The second experiment's treatments included wheat sowing methods (single row with 30 cm distance and twin row on 60 cm wide ridges) as main plots, and nitrogen and phosphorus application methods (broadcast and band) as sub-plots (150 kg ha-1). The phosphorus and nitrogen sources for fertilization in both experiments were, respectively, superphosphate, applied before wheat sowing and incorporated into the soil, and urea, applied in two phases (50% pre-plant and 50% near wheat shooting). Results from the first experiment showed that the effects of fertilizer application methods on wheat yield were significant (p≤0.01). Band application of phosphorus and nitrogen increased the biomass and seed yield of wheat by 9% and 15%, respectively, compared to their broadcast application. The interaction between the nitrogen and phosphorus application rates and the application methods showed that band application of fertilizers at rates of 200 kg/ha phosphorus and 300 kg/ha nitrogen was the best option for improving wheat yield. The second experiment also showed that the effects of wheat sowing method and fertilizer application methods on wheat seed and biomass yield were significant (p≤0.01). The twin-row sowing method on 60 cm wide ridges increased biomass and seed yield by 22% and 30%, respectively, compared to single-row sowing with 30 cm spacing. The interaction of wheat sowing method and fertilizer application methods indicated that band application of fertilizers together with twin-row sowing on 60 cm wide ridges was the best treatment for improving winter wheat yield. In conclusion, these results indicate that nitrogen and phosphorus management in wheat and modifying the wheat sowing method play an important role in increasing fertilizer use efficiency.

Keywords: band application, broadcast application, rate of fertilizer application, wheat seed yield, wheat biomass yield

Procedia PDF Downloads 464
16703 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and Inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a non-linear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates for the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
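
A minimal Fisher scoring sketch for a toy log-linear exponential severity regression, illustrating the kind of iterative MLE scheme mentioned above; the composite-distribution regression of the paper (implemented in Mathematica) is more involved, and all data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])   # intercept + one predictor
beta_true = np.array([1.0, 2.0])
mu = np.exp(X @ beta_true)
y = rng.exponential(mu)                                    # losses with E[y_i] = mu_i

# Model: y_i ~ Exponential(mean mu_i), mu_i = exp(x_i' beta).
# Score:  U(beta) = X' (y / mu - 1);  Fisher information:  I(beta) = X' X.
beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]        # crude starting values
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y / mu - 1.0)
    info = X.T @ X
    step = np.linalg.solve(info, score)                    # Fisher scoring update
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print("MLE:", beta.round(3), " true:", beta_true)
```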

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 34
16702 Magnetic Field Induced Mechanical Behavior of Fluid Filled Carbon Nanotube Foam

Authors: Siva Kumar Reddy, Anwesha Mukherjee, Abha Misra

Abstract:

Carbon nanotubes (CNT) in their bulk structure behave like a super-compressible foam and show excellent energy absorption capability. Furthermore, a tunable mechanical behavior of CNT foam is achieved using several methods, such as changing the concentration of precursors, polymer impregnation, non-covalent functionalization of the CNT microstructure, etc. The influence of a magnetic field on the compressive behavior of magnetic CNT demonstrated an enhanced peak stress and energy absorption capability, which does not require any surface or structural modification of the foam. This presentation discusses the mechanical behavior of microporous CNT foam impregnated with a magnetic-field-responsive fluid. Magnetic particles are dispersed in a non-magnetic fluid so that the alignment of both the particles and the CNT could play a crucial role in controlling the stiffness of the overall structure. It is revealed that the compressive behavior of CNT foam critically depends on the fluid viscosity as well as the magnetic field intensity. Both peak stress and energy absorption in CNT foam followed a power-law behavior with the increase in the magnetic field intensity. However, in the absence of a magnetic field, both the peak stress and energy absorption capability of CNT foam presented a linear dependence on the fluid viscosity. Hence, this work demonstrates the role of a magnetic field in controlling the mechanical behavior of foams prepared at the nanoscale.

Keywords: carbon nanotubes, magnetic field, energy absorption capability and viscosity

Procedia PDF Downloads 304
16701 Dynamic Store Procedures in Database

Authors: Muhammet Dursun Kaya, Hasan Asil

Abstract:

In recent years, different methods have been proposed to optimize query processing in databases. Although different methods have been proposed to optimize queries, the problem is that most of these methods destroy the query execution plan after executing the query. This research attempts to solve this problem by combining methods of communicating with the database (embedding the queries in the programming code and using stored procedures) with adaptive query processing in the database, and proposes a new approach for optimizing query processing by introducing the idea of dynamic stored procedures. This research creates dynamic stored procedures in the database according to the proposed algorithm. The method has been tested on applied software, and the results show a significant improvement in reducing the query processing time as well as the workload of the DBMS. Other advantages of this algorithm include making the programming environment a single environment, eliminating the parametric limitations of stored procedures in the database, making the stored procedures in the database dynamic, etc.

Keywords: relational database, agent, query processing, adaptable, communication with the database

Procedia PDF Downloads 372
16700 Numerical and Experimental Analysis of Stiffened Aluminum Panels under Compression

Authors: Ismail Cengiz, Faruk Elaldi

Abstract:

Within the scope of the study presented in this paper, the load carrying capacity and buckling behavior of a stiffened aluminum panel designed by adopting the current ‘buckle-resistant’ design practice and the ‘post-buckling’ design approach were investigated experimentally and numerically. The test specimen, which is stabilized by Z-type stiffeners and manufactured from aluminum 2024 T3 Clad material, was tested under compression load. The buckling behavior was observed by means of 3-dimensional digital image correlation (DIC) and strain gauge pairs. The experimental study was followed by developing an efficient and reliable finite element model whose ability to predict the behavior of the stiffened panel used for the compression test is verified by comparing experimental and numerical results in terms of the load-shortening curve, strain-load curves and buckling mode shapes. While the finite element model was being constructed, non-linear behaviors associated with material and geometry were considered. Finally, the applicability of aluminum stiffened panels in airframe design, as compared to composite structures, was evaluated through the concept of ‘structural efficiency’. This study reveals that a considerable amount of weight saving could be gained if the concept of ‘post-buckling design’ is preferred to the conventionally used ‘buckle-resistant design’ concept in the aircraft industry, without sacrificing structural integrity under the load spectrum.

Keywords: post-buckling, stiffened panel, non-linear finite element method, aluminum, structural efficiency

Procedia PDF Downloads 148
16699 On Mathematical Modelling and Optimization of Emerging Trends Processes in Advanced Manufacturing

Authors: Agarana Michael C., Akinlabi Esther T., Pule Kholopane

Abstract:

Innovation in manufacturing process technologies and associated product design affects the prospects for manufacturing today and in the near future. In this study, some theoretical methods, useful as tools in advanced manufacturing, are considered. In particular, some basic mathematical, operational research, heuristic, and statistical techniques are discussed. These techniques/methods are very handy in many areas of advanced manufacturing processes, including process planning optimization, modelling and analysis. In general, determining the production rate requires the application of mathematical methods. The emerging trends processes in advanced manufacturing can be enhanced by using mathematical modelling and optimization techniques.

Keywords: mathematical modelling, optimization, emerging trends, advanced manufacturing

Procedia PDF Downloads 298
16698 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam

Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen

Abstract:

In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were applied to three machine learning algorithms, Gaussian processes, multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm, for modelling and forecasting the horizontal displacement of the dam. The database used in this research was built by collecting time series of data from 2006 to 2021 and was divided into two parts: a training dataset and a validation dataset. The final results show that all three algorithms achieve high performance for both training and model validation, but the MLP is the best model. Their usability is further investigated by comparison with benchmark models created by multiple linear regression. The results show that the performance obtained from the GP model, the MLP model and the M5-Rules model is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
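
A hedged sketch of the model comparison described above, on synthetic displacement data: Gaussian process and multi-layer perceptron regressors against a multiple linear regression benchmark, using scikit-learn rather than the tools of the original study; the M5-Rules algorithm (a Weka method) is omitted, and the predictors and their relationship to displacement are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical predictors: reservoir water level (m), air temperature (deg C), dam age (years).
X = np.column_stack([rng.uniform(490, 515, 600),
                     rng.uniform(5, 35, 600),
                     np.linspace(0, 15, 600)])
# Hypothetical horizontal displacement (mm) with a mild non-linearity plus noise.
y = (0.4 * (X[:, 0] - 500) - 0.1 * X[:, 1] + 0.2 * X[:, 2]
     + 0.02 * (X[:, 0] - 500) ** 2 + rng.normal(0, 0.3, 600))

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "multiple linear regression": LinearRegression(),
    "Gaussian process": make_pipeline(StandardScaler(),
                                      GaussianProcessRegressor(alpha=0.1, normalize_y=True)),
    "MLP (two hidden layers)": make_pipeline(StandardScaler(),
                                             MLPRegressor(hidden_layer_sizes=(32, 32),
                                                          max_iter=5000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_va, model.predict(X_va)) ** 0.5
    print(f"{name}: validation RMSE = {rmse:.3f} mm")
```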

Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perception neural networks

Procedia PDF Downloads 210
16697 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge to the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest was used to identify promising features through a recursive feature elimination approach, which performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature-masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
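
A minimal sketch, on synthetic data, of the feature-selection step described above: recursive feature elimination wrapped around a random forest, followed by a small two-hidden-layer network trained on the retained inputs; the real targets are moment-expansion coefficients of electron-repulsion integrals, which are not modelled here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 20))                    # 20 candidate features (toy stand-ins for orbitals)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.1, 400)

# Rank features with RFE wrapped around a random forest and keep the top 5.
selector = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
               n_features_to_select=5)
selector.fit(X, y)
kept = np.flatnonzero(selector.support_)
print("retained feature indices:", kept)

# Train a two-hidden-layer network on the compressed input vector.
X_tr, X_va, y_tr, y_va = train_test_split(X[:, kept], y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("validation R^2:", round(net.score(X_va, y_va), 3))
```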

Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction

Procedia PDF Downloads 114
16696 Using a Simulated Learning Environment to Teach Pre-Service Special Educators Behavior Management

Authors: Roberta Gentry

Abstract:

A mixed methods study that examined candidates' perceptions of the use of computerized simulation as an effective tool to learn classroom management will be presented. The development, implementation, and assessment of the simulation, together with candidate data on the feasibility of the approach in comparison to other methods, will be presented.

Keywords: behavior management, simulations, teacher preparation, teacher education

Procedia PDF Downloads 402
16695 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail

Authors: Dong Jiang

Abstract:

Turnouts are essential equipment on the railway and are also among the most heavily demanded railway infrastructure facilities, on account of the increasingly serious frog rail failures. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails in order to predict their lifetimes. Moreover, suggestions for timely and effective maintenance are made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface, until breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside the running surface. According to material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. Under the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, i.e., the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis. The internal damage of a segment grows slowly in the beginning and disproportionately quickly at the end, until the emergence of the breakage. From the macroscopic perspective, the internal damage of the running surface always develops essentially linearly over the lifetime. With this linear growth of the internal damage, the lifetime of the frog rail can be predicted simply from the slope of this linear trend. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives. The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate. It has two contradictory effects. On the one hand, an insufficient abrasion rate causes the damage accumulation to concentrate at the same position below the running surface, accelerating the rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail, resulting in untimely breakage on the surface. Thus, the relationship between the abrasion rate and the lifetime is subdivided into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime as the abrasion rate continues to grow. Through the balancing of these two effects, the critical abrasion rate is discussed in order to reach the optimal lifetime.
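
A minimal numerical sketch of the damage model described above: allowable cycles from an assumed Basquin-type S-N curve and linear damage accumulation after Miner's rule until the damage degree reaches 1.0 (the failure criterion). The loading spectrum and S-N constants are illustrative only, not measured frog-rail data.

```python
def cycles_to_failure(stress_amplitude, C=1.0e14, m=3.0):
    """Assumed S-N curve in Basquin form: N = C * S^(-m) (illustrative constants)."""
    return C * stress_amplitude ** (-m)

# Hypothetical yearly loading spectrum of one running-surface segment:
# (equivalent stress amplitude in MPa, number of load cycles per year).
spectrum = [(420.0, 2.0e4), (350.0, 8.0e4), (280.0, 3.0e5)]

damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)

damage, years = 0.0, 0
while damage < 1.0:                    # accumulate damage linearly (Miner's rule)
    damage += damage_per_year
    years += 1

print(f"damage rate = {damage_per_year:.3f} per year -> predicted lifetime of roughly {years} years")
```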

Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime

Procedia PDF Downloads 225
16694 Perceived Effects of Work-Family Balance on Employee’s Job Satisfaction among Extension Agents in Southwest Nigeria

Authors: B. G. Abiona, A. A. Onaseso, T. D. Odetayo, J. Yila, O. E. Fapojuwo, K. G. Adeosun

Abstract:

This study determines the perceived effects of work-family balance on employees' job satisfaction among extension agents in the Agricultural Development Programme (ADP) in southwest Nigeria. A multistage sampling technique was used to select 256 respondents for the study. Data on personal characteristics, the work-family balance domain, and job satisfaction were collected. The collected data were analysed using descriptive statistics, chi-square, Pearson Product Moment Correlation (PPMC), multiple linear regression, and the Student t-test. Results revealed that the mean age of the respondents was 40 years; the majority (59.3%) of the respondents were male, and slightly above half (51.6%) of the respondents had an MSc as their highest academic qualification. Findings revealed that turnover intention (x̄ = 3.20) and work-role conflict (x̄ = 3.06) were the major perceived work-family balance domains in the studied areas. Further, the results showed that the respondents have a high (79%) level of job satisfaction. Multiple linear regression revealed that job involvement (β = 0.167, p<0.01) and work-role conflict (β = -0.221, p<0.05) contributed significantly to employees' level of job satisfaction. The results of the Student t-test revealed a significant difference in the perceived work-family balance domain (t = 0.43, p<0.05) between the two studied areas. The study concluded that work-role conflict among employees causes work-family imbalance and, therefore, negatively affects employees' job satisfaction. A definition of job design among the respondents that will create a balance between work and family is highly recommended.

Keywords: work-life, conflict, job satisfaction, extension agent

Procedia PDF Downloads 94
16693 A Comparative Study on Behavior Among Different Types of Shear Connectors using Finite Element Analysis

Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman

Abstract:

Composite structures have made significant advances in construction applications during the last few decades. Composite structures are composed of structural steel shapes and reinforced concrete combined with shear connectors, which take advantage of each material's unique properties. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, AISC 360-16, “Specification for Structural Steel Buildings”, provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type and L-type shear connectors using finite element analysis. Experimental results from published literature are used to validate the finite element models. The 3-D finite element model (FEM) was built using ABAQUS 2017 to investigate the non-linear capabilities and the ultimate load-carrying potential of the connectors using push-out tests. The changes in connector dimensions were analyzed using this non-linear model in parametric investigations. The parametric study shows that increasing the length of the shear connector by 10 mm increases its shear strength by 21%. Shear capacity increased by 13% as the height was increased by 10 mm. Increasing the thickness of the specimen by 1 mm resulted in a 2% increase in shear capacity. However, the shear capacity of channel connectors was reduced by 21% due to an increase in thickness of 2 mm.

Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test

Procedia PDF Downloads 125
16692 Masquerade and “What Comes Behind Six Is More Than Seven”: Thoughts on Art History and Visual Culture Research Methods

Authors: Osa D Egonwa

Abstract:

In the 21st century, the disciplinary boundaries of past centuries that we often create through mainstream art historical classification, techniques and sources may have been eroded by visual culture, which seems to provide a more inclusive umbrella for the new ways artists go about the creative process and its resultant commodities. Over the past four decades, artists in Africa have resorted to new materials, techniques and themes, which have affected our ways of researching these artists and their art. Frontline artists such as El Anatsui, Yinka Shonibare and Erasmus Onyishi are demonstrating that any material is suitable for artistic expression. Most of the time, these materials come with their own techniques/effects and visual syntax: a combination of materials compounds techniques, formal aesthetic indexes, halo effects, and iconography. This tends to challenge the categories we lean on to view, think and talk about them. This renders our mainstream art historical research methods inadequate, thus suggesting new discursive concepts, terms and theories. This paper proposes the Africanist eclectic methods derived from the dual framework of Masquerade Theory and What Comes Behind Six is More Than Seven. The paper shares thoughts/research on art historical methods, terminological re-alignments on classification/source data, presentational format and interpretation arising from the emergent trends in our subject. The outcome provides useful tools to mediate new thoughts and experiences in recent African art and visual culture.

Keywords: art historical methods, classifications, concepts, re-alignment

Procedia PDF Downloads 110
16691 Using Mining Methods of WEKA to Predict Quran Verb Tense and Aspect in Translations from Arabic to English: Experimental Results and Analysis

Authors: Jawharah Alasmari

Abstract:

In verb inflection, tense marks past/present/future action, and aspect marks progressive/continuous or perfect/completed actions. The usage and meaning of tense and aspect differ between Arabic and English. In this research, we applied data mining methods to test the predictive function of candidate features by using our dataset of Arabic verbs in context and their seven translations. Weka machine learning classifiers are used in this experiment in order to examine the key features that can provide guidance for a translator's appropriate English rendering of the Arabic verb tense and aspect.

Keywords: Arabic verb, English translations, mining methods, Weka software

Procedia PDF Downloads 272
16690 Multi-Scale Control Model for Network Group Behavior

Authors: Fuyuan Ma, Ying Wang, Xin Wang

Abstract:

Social networks have become breeding grounds for the rapid spread of rumors and malicious information, posing threats to societal stability and causing significant public harm. Existing research focuses on simulating the spread of information and its impact on users through propagation dynamics and applies methods such as greedy approximation strategies to approximate the optimal control solution at the global scale. However, the greedy strategy at the global scale may fall into locally optimal solutions, and the approximate simulation of information spread may accumulate more errors. Therefore, we propose a multi-scale control model for network group behavior, introducing individual and group scales on top of the greedy strategy’s global scale. At the individual scale, we calculate the propagation influence of nodes based on their structural attributes to alleviate the issue of local optimality. At the group scale, we conduct precise propagation simulations to avoid introducing cumulative errors from approximate calculations without increasing computational costs. Experimental results on three real-world datasets demonstrate the effectiveness of our proposed multi-scale model in controlling network group behavior.

Keywords: influence blocking maximization, competitive linear threshold model, social networks, network group behavior

Procedia PDF Downloads 21
16689 Recycling Service Strategy by Considering Demand-Supply Interaction

Authors: Hui-Chieh Li

Abstract:

A circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, which brings benefits for both the environment and the economy. The concept is in contrast to a linear economy, which is a ‘take, make, dispose’ model of production. A well-designed reverse logistics service strategy could enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings the manufacturers the most advantages, as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and in the right place. This study considers demand-supply interaction, time-dependent recycling demand and the time-dependent surplus value of the recycled product, and constructs models of the recycling service strategy for the recyclable waste collector. A crucial factor in optimizing a recycling service strategy is consumer demand. The study considers the relationships between consumer demand for recycling and product characteristics, surplus value and user behavior. The study proposes a recycling service strategy that differs significantly from the conventional and typical uniform service strategy. Periods with considerable demand and large surplus product value suggest a frequent and short service cycle. The study explores how to determine a recycling service strategy for the recyclable waste collector, in terms of service cycle frequency and duration and vehicle type for all service cycles, by considering the surplus value of the recycled product, time-dependent demand, transportation economies and demand-supply interaction. The recyclable waste collector is responsible for the collection of waste products for the manufacturer. The study also examines the impacts of the utilization rate on cost and profit in the context of different sizes of vehicles. The model applies mathematical programming methods and attempts to maximize the total profit of the distributor during the study period. This study applies the binary logit model, an analytical model and mathematical programming methods to the problem. The model specifically explores how to determine a recycling service strategy for the recycler by considering product surplus value, time-dependent recycling demand, transportation economies and demand-supply interaction, and attempts to minimize the total logistics cost of the recycler and maximize the recycling benefits of the manufacturer during the study period. The study relaxes the constant demand assumption and examines how the service strategy affects consumer demand for waste recycling. The results of the study not only help in understanding how user demand for the recycling service and product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance, such as award bonuses and carbon emission regulations, for the government.

Keywords: circular economy, consumer demand, product surplus value, recycle service strategy

Procedia PDF Downloads 392
16688 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the various stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A (R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic and thermal stress conditions. The drug was found to be susceptible to degradation under all stress conditions. Separation was carried out using a High Performance Thin Layer Chromatographic (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis data showed a good linear relationship in the concentration range 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity and robustness. The method has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 299
16687 Fixed Point Iteration of a Damped and Unforced Duffing's Equation

Authors: Paschal A. Ochang, Emmanuel C. Oji

Abstract:

The Duffing equation is a second-order system that is very important because such systems are fundamental to the behaviour of higher-order systems and have applications in almost all fields of science and engineering. In the biological area, it is useful in plant stem dependence and natural frequency and in modelling Brain Crash Analysis (BCA). In engineering, it is useful in the study of damping in indoor construction and traffic lights, and to the meteorologist it is used in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except by approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. Therefore, we aim to find out whether it is possible to obtain an analytical fixed point of the non-linear ordinary differential equation using a fixed point analytical method. We started by exposing the scope of the Duffing equation and other related work on it. With a major focus on the fixed point and the fixed point iterative scheme, we tried different iterative schemes on the Duffing equation. We were able to identify that one can only see the fixed points of a damped Duffing equation and not of the undamped Duffing equation. This is because the cubic non-linearity term is the determining factor of the Duffing equation. We finally came to the results where we identified the stability of an equation that is damped, forced and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first- and second-order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different versions of the (damped) Duffing equation we find fixed points; therefore the order of computations and the running time of applied software, in all fields using the Duffing equation, will be reduced.
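
A minimal sketch of a fixed-point (relaxation) iteration for the equilibria of a damped, unforced Duffing equation x'' + δx' + αx + βx³ = 0; the equilibria satisfy αx + βx³ = 0 with x' = 0, rewritten here as x = g(x) = x - λ(αx + βx³). Parameter values are illustrative only.

```python
alpha, beta, delta = -1.0, 1.0, 0.3      # double-well case: equilibria at x = 0 and x = ±1
lam = 0.4                                # relaxation factor of the fixed-point map

def g(x):
    """Fixed-point map whose fixed points are the Duffing equilibria."""
    return x - lam * (alpha * x + beta * x ** 3)

x = 0.5                                  # initial guess
for k in range(100):
    x_new = g(x)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new
print(f"converged after {k} iterations to x* = {x_new:.6f}")   # expect +1 for this starting point

# Local stability of the continuous system at x*: linearizing gives
# x'' + delta*x' + (alpha + 3*beta*x*^2)*x = 0; with alpha + 3*beta*x*^2 = 2 > 0 and
# delta > 0, the equilibrium x* = 1 is asymptotically stable, while x* = 0 (a saddle) is not.
```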

Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis

Procedia PDF Downloads 292
16686 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers

Authors: Oluwatosin M. A. Jesuyon

Abstract:

In order to propose a simpler tool that eliminates the age-long problems associated with the traditional index method for the selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. 100 day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in south-west tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded randomly from 20 birds within each strain, at a fixed time on the first day of each new week, with a 5-kg capacity Camry scale. These records were analyzed and compared using the completely randomized design (CRD) procedures of the SPSS analytical software, with the means procedure and Factor Scores (FS) in a stepwise Multiple Linear Regression (MLR) procedure for the initial live weight equations. Bartlett Factor Score (BFS) analysis extracted 2 factors for each strain, termed the Body-length and Thigh-meatiness factors for AA, and the Breast-size and Height factors for AN. These derived orthogonal factors assisted in deducing and comparing the trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformational traits for the two strains, thus indicating the different economic traits and advantages of the strains. These factors could be useful as selection criteria for improving desired economic traits. The final Bartlett factor regression equations for the prediction of body weight were highly significant, with P < 0.0001, R² of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak, respectively. These BFS regression equations could be used as a simple and potent tool for selection during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and to evaluate consumer preference traits in broilers.
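
A hedged sketch of the idea above on synthetic data: extract factors from a set of body measurements, compute Bartlett factor scores F = (L'Ψ⁻¹L)⁻¹L'Ψ⁻¹(x - x̄), and regress live weight on the scores; scikit-learn's FactorAnalysis supplies the loadings L and specific variances Ψ, and the numbers below are not the broiler records of the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 200
f_true = rng.normal(size=(n, 2))                         # two latent "conformation" factors
L_true = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # loadings of 6 toy body measurements
                   [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]]).T
X = f_true @ L_true + rng.normal(0, 0.3, (n, 6))
weight = 2000 + 150 * f_true[:, 0] + 90 * f_true[:, 1] + rng.normal(0, 20, n)   # live weight (g)

fa = FactorAnalysis(n_components=2).fit(X)
L = fa.components_.T                                     # loadings, shape (features, factors)
Psi_inv = np.diag(1.0 / fa.noise_variance_)              # inverse specific variances
Xc = X - fa.mean_
bartlett_scores = Xc @ Psi_inv @ L @ np.linalg.inv(L.T @ Psi_inv @ L)   # Bartlett formula

reg = LinearRegression().fit(bartlett_scores, weight)    # factor-score regression for body weight
print("R^2 of weight on Bartlett factor scores:", round(reg.score(bartlett_scores, weight), 3))
```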

Keywords: alternative selection tool, Bartlet factor regression model, consumer preference trait, linear and body measurements, live body weight

Procedia PDF Downloads 203
16685 Effective Method of Paneling for Source/Vortex/Doublet Panel Methods Using Conformal Mapping

Authors: K. C. R. Perera, B. M. Hapuwatte

Abstract:

This paper presents an effective method of dividing panels for the mesh-less source, vortex and doublet panel methods. In this research study, the physical domain of an air-foil is transformed into the computational domain of a circle using the conformal mapping technique of the Joukowsky transformation. The circle is then divided into panels of equal length, and the coordinates are remapped into the physical domain of the air-foil. With this method, the leading edge and the trailing edge of the air-foil are panelled with a high density of panels, while the rest of the body is panelled with a low density of panels. The high density of panels at the leading edge and the trailing edge increases the accuracy of the solutions obtained from panel methods, where the fluid flow at the leading and trailing edges is complex.
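
A minimal sketch of the paneling idea above: equally spaced panel end points on the circle in the computational domain are remapped to the air-foil through the Joukowsky transformation z = ζ + a²/ζ. Because the mapping compresses arc length near ζ = ±a, the equal panels on the circle become a fine panel distribution near the trailing and leading edges. The circle parameters below are illustrative.

```python
import numpy as np

a = 1.0                                   # transform constant (trailing edge at z = 2a)
xc, yc = -0.1, 0.1                        # circle centre offset controls thickness and camber
r = np.hypot(a - xc, yc)                  # radius chosen so the circle passes through zeta = a

n_panels = 120
theta = np.linspace(0.0, 2.0 * np.pi, n_panels + 1)       # equal division of the circle
zeta = (xc + 1j * yc) + r * np.exp(1j * theta)            # panel end points in the computational domain
z = zeta + a ** 2 / zeta                                   # remap to the physical air-foil

panel_lengths = np.abs(np.diff(z))
print("shortest panel (near the edges): %.4f" % panel_lengths.min())
print("longest panel (near mid-chord) : %.4f" % panel_lengths.max())
```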

Keywords: conformal mapping, Joukowsky transformation, physical domain, computational domain

Procedia PDF Downloads 376
16684 Using a Hybrid Method to Eradicate Bamboo Growth along the Route of Overhead Power Lines

Authors: Miriam Eduful

Abstract:

The Electricity Company of Ghana (ECG) is under an obligation, demanded by the Public Utility and Regulation Commission, to meet set performance indices. However, in certain parts of the country, bamboo-related power interruptions have become a challenge. The growth rate of the bamboo is such that the cost of regular vegetation maintenance along the route of the overhead power lines has become prohibitive. To address the problem, several methods and techniques of bamboo eradication have been used. Some of these methods involved the application of chemical compounds that are considered inimical and dangerous to the environment. In this paper, three methods of bamboo eradication along the route of the ECG overhead power lines have been investigated. A hybrid method has been found to be very effective and ecologically friendly. The method is locally available and comparatively inexpensive to apply.

Keywords: bamboo, eradication, hybrid method, gly gold

Procedia PDF Downloads 366
16683 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication in the presence of practically big data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms; it is also a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. Non-secure and secure matrix multiplication are studied. We want to study the setup in which the identity of the matrix of interest should be kept private from the workers, and then obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code that provides a smaller computational complexity at the workers.
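
A minimal sketch of polynomial-coded distributed matrix multiplication, the non-private building block behind schemes of this kind: X is split into m row blocks and Y into n column blocks, each (simulated) worker multiplies one evaluation of the encoding polynomials, and the master interpolates the product from any m*n finished workers. The privacy layer (PIR / secret sharing) of the paper's PSGPD scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(4, 6))
Y = rng.normal(size=(6, 4))
m = n = 2                                          # split factors
Xb = np.split(X, m, axis=0)                        # row blocks of X
Yb = np.split(Y, n, axis=1)                        # column blocks of Y

num_workers = 6
recovery_threshold = m * n                         # any 4 workers suffice
eval_points = np.arange(1, num_workers + 1, dtype=float)

def encode(t):
    """Encoding polynomials: Xt = sum_j Xb[j] t^j,  Yt = sum_k Yb[k] t^(k*m)."""
    Xt = sum(Xb[j] * t ** j for j in range(m))
    Yt = sum(Yb[k] * t ** (k * m) for k in range(n))
    return Xt, Yt

# Each worker returns one evaluation of the degree-(m*n - 1) product polynomial.
worker_results = {}
for t in eval_points:
    Xt, Yt = encode(t)
    worker_results[t] = Xt @ Yt                    # the (small) task done at each worker

# Suppose only the first `recovery_threshold` workers respond; the rest straggle.
finished = list(worker_results)[:recovery_threshold]
V = np.vander(np.array(finished), N=recovery_threshold, increasing=True)  # powers t^0..t^(mn-1)
stacked = np.array([worker_results[t] for t in finished])                 # shape (mn, rows, cols)
coeffs = np.linalg.solve(V, stacked.reshape(recovery_threshold, -1))      # interpolate coefficients
coeffs = coeffs.reshape(recovery_threshold, *stacked.shape[1:])

# The coefficient of t^(j + k*m) is the product block Xb[j] @ Yb[k]; reassemble W = X Y.
W = np.block([[coeffs[j + k * m] for k in range(n)] for j in range(m)])
print("max reconstruction error:", np.abs(W - X @ Y).max())
```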

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
16682 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods

Authors: Kizito Ugochukwu Nwajeri

Abstract:

This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditionally, numerical approaches such as finite difference and shooting methods often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.
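
The hybrid block method itself is not reconstructed here; as a hedged point of reference, the sketch below solves a simple linear two-point BVP (y'' = -y, y(0) = 0, y(π/2) = 1, exact solution y = sin x) with a standard second-order finite-difference discretization that incorporates the boundary conditions directly into the linear system, the kind of conventional baseline such methods are compared against.

```python
import numpy as np

a, b = 0.0, np.pi / 2
ya, yb = 0.0, 1.0
N = 50                                    # number of interior grid points
x = np.linspace(a, b, N + 2)
h = x[1] - x[0]

# Discretize (y[i-1] - 2*y[i] + y[i+1]) / h^2 = -y[i] at the interior points,
# i.e. move the -y[i] term to the left-hand side of the linear system.
main = np.full(N, -2.0 / h ** 2 + 1.0)
off = np.full(N - 1, 1.0 / h ** 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(N)
rhs[0] -= ya / h ** 2                     # boundary values enter the right-hand side
rhs[-1] -= yb / h ** 2

y_interior = np.linalg.solve(A, rhs)
y = np.concatenate([[ya], y_interior, [yb]])
print("max error vs. exact sin(x):", np.abs(y - np.sin(x)).max())
```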

Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods

Procedia PDF Downloads 31
16681 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when the incoming photons or electrons are incident on the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to investigate the effect of different parameters on the production of neutrons around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the locations of the primary and secondary walls. Other measurements were performed at the door and at the treatment console, to address the potential radiation safety concerns of the therapists who must walk in and out of the room for the treatments. Exposures were delivered from Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV), for a given 200 MU at a dose rate of 600 MU per minute. Results indicate that neutron doses at 100 cm SSD depend on the accelerator characteristics, i.e., the jaw settings, since the jaws are made of high-atomic-number material and thus provide significant interaction of photons to produce neutrons, while doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses compared to the dose at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall) and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv.h−1 to 2 μSv.h−1, while at the treatment console room it is 2 μSv.h−1 to 0 μSv.h−1 for 10 and 18 MV, respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 449