Search results for: radial distribution networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7713


4293 A Bacterial Foraging Optimization Algorithm Applied to the Synthesis of Polyacrylamide Hydrogels

Authors: Florin Leon, Silvia Curteanu

Abstract:

The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely their chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, and also other individual bacteria, and move toward or away from those signals. The application example considered as a case study consists of establishing the dependency between the reaction yield of hydrogels based on polyacrylamide and working conditions such as time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as the type of polymer added. This process is modeled with a neural network, which is included in an optimization procedure based on BFO. An experimental study of the BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
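The chemotaxis loop described above (a random "tumble" followed by "swim" steps while the cost keeps improving) can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation; all function names, parameter values, and the test function are illustrative:

```python
import random

def bfo_minimize(cost, dim, n_bacteria=20, n_chemo=100, step=0.1, seed=0):
    """Toy bacterial-foraging chemotaxis: each bacterium tumbles to a
    random direction, then keeps swimming while the cost improves."""
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bacteria)]
    for _ in range(n_chemo):
        for b in swarm:
            # tumble: pick a random unit direction
            d = [rng.gauss(0, 1) for _ in range(dim)]
            norm = sum(x * x for x in d) ** 0.5 or 1.0
            d = [x / norm for x in d]
            # swim: keep stepping along d while the move lowers the cost
            best = cost(b)
            for _ in range(4):
                trial = [bi + step * di for bi, di in zip(b, d)]
                if cost(trial) < best:
                    b[:] = trial
                    best = cost(trial)
                else:
                    break
    return min(swarm, key=cost)

# Example: minimise the sphere function; the optimum is the origin.
sol = bfo_minimize(lambda v: sum(x * x for x in v), dim=2)
```

In the paper's setting, the cost function would be replaced by the trained neural-network model of the hydrogel reaction yield.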

Keywords: bacterial foraging, hydrogels, modeling and optimization, neural networks

Procedia PDF Downloads 132
4292 Numerical Analysis of Liquid Metal Magnetohydrodynamic Flows in a Manifold with Three Sub-Channels

Authors: Meimei Wen, Chang Nyung Kim

Abstract:

In the current study, three-dimensional liquid metal (LM) magnetohydrodynamic (MHD) flows in a manifold with three sub-channels under a uniform magnetic field are numerically investigated. In the manifold, the electrical current can cross channel walls, thus influencing the flow distribution in each sub-channel. A case with various arrangements of electric conductivity for different parts of the channel walls is considered, yielding different current distributions as well as flow distributions in each sub-channel. Here, the imbalance of mass flow rates in the three sub-channels is addressed. Meanwhile, detailed behaviors of the flow velocity, pressure, current and electric potential of the LM MHD flows in the three sub-channels are predicted. The commercial software CFX is used for the numerical simulation of the LM MHD flows.

Keywords: CFX, liquid metal, manifold, MHD flow

Procedia PDF Downloads 333
4291 Feasibility of Replacing Inductive Instrument Transformers with Non-Conventional Instrument Transformers

Authors: David A. Wallace, Salakjit J. Nilboworn

Abstract:

Secure and reliable transmission and distribution of electrical power is crucial in today’s ever-increasing demand for electricity. Traditional methods of protecting the electrical grid have relied on relaying systems receiving voltage and current inputs from inductive instrument transformers (IT). This method has provided robust and stable performance throughout the years. Today, with the advent of new non-conventional instrument transformers (NCIT) and sensors, the electrical landscape is changing. These new systems have the ability to provide the same electrical performance as traditional instrument transformers, with the added features of data acquisition, communication, smaller footprint, lower cost and resistance to GMD/GIC events.

Keywords: non-conventional instrument transformers, digital substations, smart grids, micro-grids

Procedia PDF Downloads 66
4290 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early twentieth century, the yield-line method was proposed to tackle this problem. Problems with simple geometry could easily be solved by traditional hand analyses incorporating plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into applications across many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited since they do not consider any aspect of material behaviour beyond the yield limit, which is an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, by contrast, give very reliable results, but this type of analysis is computationally quite expensive, i.e. not well suited to solving daily engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially rule the moment distribution among the rest of the non-yielded elements. The proposed technique obeys Nielsen’s yield criterion.
The outcome of this analysis is a simple, accurate and non-time-consuming tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity; the collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match is found with the exact values of the collapse load.
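The core redistribution idea, clipping over-yielded moments at capacity and sharing the excess among elements that still have reserve so that no element violates yield, can be caricatured as follows. This is a deliberately simplified toy, not the authors' procedure or an implementation of Nielsen's criterion; the numbers are invented:

```python
def redistribute(moments, capacity):
    """Clip moments that exceed the yield capacity and share the excess
    equally among elements with remaining reserve, iterating until no
    element violates yield (total moment is conserved)."""
    m = list(moments)
    for _ in range(100):
        excess = sum(max(0.0, mi - capacity) for mi in m)
        if excess == 0.0:
            return m
        m = [min(mi, capacity) for mi in m]
        reserve = [i for i, mi in enumerate(m) if mi < capacity]
        if not reserve:          # fully plastic: no reserve left
            break
        share = excess / len(reserve)
        m = [mi + share if i in reserve else mi for i, mi in enumerate(m)]
    return m

# One element over-yielded at 12.0 against a capacity of 10.0:
balanced = redistribute([12.0, 6.0, 2.0], capacity=10.0)
```

The equal-sharing rule here stands in for the paper's more refined distribution among non-yielded elements; the point is only that the clipped solution remains statically admissible.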

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 165
4289 PDDA: Priority-Based, Dynamic Data Aggregation Approach for Sensor-Based Big Data Framework

Authors: Lutful Karim, Mohammed S. Al-kahtani

Abstract:

Sensors are used in various applications such as agriculture, health monitoring, air and water pollution monitoring, and traffic monitoring and control, and hence play a vital role in the growth of big data. However, sensors collect redundant data. Thus, aggregating and filtering sensor data is significantly important in designing an efficient big data framework. Current research does not focus on aggregating and filtering data at multiple layers of a sensor-based big data framework. Thus, this paper introduces (i) a three-layer data aggregation framework for big data and (ii) a priority-based, dynamic data aggregation scheme (PDDA) for the lowest layer at the sensors. Simulation results show that PDDA outperforms existing tree- and cluster-based data aggregation schemes in terms of overall network energy consumption and end-to-end data transmission delay.
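As a rough sketch of what priority-based aggregation at the sensor layer might look like, the toy below forwards high-priority readings untouched and collapses redundant low-priority readings into one averaged packet. The threshold, tuple layout, and sensor names are invented for illustration; the paper's actual PDDA scheme is not specified at this level of detail:

```python
def aggregate(readings, priority_threshold=2):
    """Toy priority-based aggregation: high-priority readings are kept
    as-is, while low-priority readings of the same sensor type are
    collapsed into one averaged value to cut redundant transmissions."""
    urgent, buckets = [], {}
    for sensor_type, priority, value in readings:
        if priority >= priority_threshold:
            urgent.append((sensor_type, value))
        else:
            buckets.setdefault(sensor_type, []).append(value)
    averaged = [(t, sum(v) / len(v)) for t, v in sorted(buckets.items())]
    return urgent + averaged

packets = aggregate([
    ("temp", 1, 20.0), ("temp", 1, 22.0),   # redundant, low priority
    ("smoke", 3, 0.9),                       # urgent, forwarded directly
])
```

Fewer packets leaving the sensor layer is exactly what drives the energy and delay gains the abstract reports.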

Keywords: big data, clustering, tree topology, data aggregation, sensor networks

Procedia PDF Downloads 322
4288 Approaching Collaborative Governance Legitimacy through Discursive Legitimation Analysis

Authors: Carlo Schick

Abstract:

Legitimacy can be regarded as the very fabric of political orders. Up to this point, IR scholarship has been particularly interested in the legitimacy of nation-states, international regimes and non-governmental actors. The legitimacy of collaborative governance comprising public, private and civic actors, however, has not received much attention from an IR perspective. This is partly due to the fact that the concept of legitimacy is difficult to operationalise and measure in settings where there is no clear boundary between political authorities and those who are subject to collaborative governance. In this case, legitimacy cannot be empirically approached on its own terms, but can only be analysed in terms of dialectic legitimation processes. The author develops a three-fold analytical framework based on a dialogical understanding of legitimation. Legitimation first has to relate to public legitimacy demands and contestations of collaborative governance, and second to legitimacy claims issued by collaborative governance networks themselves. Lastly, collaborative governance is dependent on constant self-legitimisation. The paper closes by suggesting a discourse-analytic approach to further empirical research on the legitimacy of collaborative governance.

Keywords: legitimacy, collaborative governance, discourse analysis, dialectic legitimation

Procedia PDF Downloads 317
4287 Catalytic Study of Methanol-to-Propylene Conversion over Nano-Sized HZSM-5

Authors: Jianwen Li, Hongfang Ma, Weixin Qian, Haitao Zhang, Weiyong Ying

Abstract:

Methanol-to-propylene conversion was carried out in a continuous-flow fixed-bed reactor over nano-sized HZSM-5 zeolites. The HZSM-5 catalysts were synthesized with different Si/Al ratios and silicon sources, and treated with NaOH. The structural properties, morphology and acidity of the catalysts were measured by XRD, N2 adsorption, FE-SEM, TEM and NH3-TPD. The results indicate that increasing the Si/Al ratio decreased the acidity of the catalysts and thereby improved propylene selectivity, while the silicon source had a slight impact on the acidity but affected the product distribution. Desilication by alkali treatment could increase intracrystalline mesopores and enhance propylene selectivity.

Keywords: alkali treatment, HZSM-5, methanol-to-propylene, synthesis condition

Procedia PDF Downloads 203
4286 Electricity Market Categorization for Smart Grid Market Testing

Authors: Rebeca Ramirez Acosta, Sebastian Lenhoff

Abstract:

Decision makers worldwide need to determine whether the implementation of a new market mechanism will contribute to the sustainability and resilience of the power system. Due to smart grid technologies, new products in the distribution and transmission system can be traded; however, the impact of changing a market rule will differ across regions. To test those impacts systematically, a market categorization has been compiled and organized into a smart grid market testing toolbox. This toolbox maps all current energy products and sets the basis for running a co-simulation test with the new rule to be implemented. It will help to measure the impact of the new rule, based on sustainability and resilience indicators.

Keywords: co-simulation, electricity market, smart grid market, market testing

Procedia PDF Downloads 167
4285 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local governments. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of a respondent are calculated, and the vehicle with the higher utility is taken as his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers’ choices. This study adopts a learning curve assumption to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions.
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
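The three-phase procedure above can be sketched end to end: draw many future prices from an assumed distribution, recompute each respondent's highest-utility choice, and collect the resulting shares. The part-worth form (a base EV utility minus a linear price sensitivity), the price distribution, and all numbers below are invented placeholders, not the study's survey data:

```python
import random

def market_share(partworths, ev_price, icev_utility=1.0):
    """Share of respondents whose total EV utility (base utility minus
    price sensitivity times price) beats a fixed ICEV utility."""
    buyers = sum(1 for base, beta in partworths
                 if base - beta * ev_price > icev_utility)
    return buyers / len(partworths)

rng = random.Random(42)
# Hypothetical part-worths: (base EV utility, price sensitivity) per respondent.
partworths = [(rng.uniform(1.0, 3.0), rng.uniform(0.02, 0.06))
              for _ in range(500)]
# Monte Carlo over an uncertain 2030 EV price (assumed normal, in $1000s,
# floored at a small positive value).
shares = [market_share(partworths, max(5.0, rng.gauss(30.0, 5.0)))
          for _ in range(1000)]
mean_share = sum(shares) / len(shares)
```

The list `shares` plays the role of the abstract's "one thousand market shares": a distribution rather than a single point forecast.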

Keywords: conjoint model, electrical vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 53
4284 Classification of Red, Green and Blue Values from Face Images Using a k-NN Classifier to Predict Skin or Non-Skin

Authors: Kemal Polat

Abstract:

In this study, we estimate whether a pixel is skin by using the RGB values obtained from the camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure; this problem can also be called a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we have used the k-NN method to handle this big data problem, with the k value set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition has been used. As performance metrics, TP rate, FP rate, precision, recall, f-measure and AUC values have been used to evaluate the performance of the k-NN classifier. The obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1.00. As can be seen from these results, the proposed method can be used to predict whether an image region is skin or not.
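With k = 1, the classifier reduces to "label of the nearest training pixel in RGB space". A minimal dependency-free sketch of that rule follows; the four training pixels are hand-made placeholders, not the UCI Skin dataset:

```python
def nearest_neighbor(train, rgb):
    """1-NN (k=1): return the label of the training pixel whose RGB
    values are closest (squared Euclidean distance) to the query."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(train, key=lambda row: dist2(row[0], rgb))[1]

# Tiny hand-made training set: (R, G, B) -> "skin" / "non-skin".
train = [
    ((224, 172, 140), "skin"), ((198, 134, 110), "skin"),
    ((30, 30, 30), "non-skin"), ((10, 120, 200), "non-skin"),
]
label = nearest_neighbor(train, (210, 160, 130))
```

On the full 245,057-pixel UCI dataset one would use an indexed implementation (e.g. a k-d tree) rather than this linear scan.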

Keywords: k-NN classifier, skin or non-skin classification, RGB values, classification

Procedia PDF Downloads 234
4283 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening

Authors: Ksheeraj Sai Vepuri, Nada Attar

Abstract:

We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as this sharpening technique can improve the performance of a CNN model, even when the model is relatively simple.
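The unsharp-mask step amounts to `sharpened = original + amount * (original - blurred)`. The sketch below shows the effect on a toy grayscale edge using a 3x3 box blur in place of the Gaussian blur a real pipeline would use; the helper names and the image are illustrative, and real pixel values would be clamped back to [0, 255]:

```python
def box_blur(img):
    """3x3 mean blur with edge clamping, on a 2D grayscale list."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def unsharp_mask(img, amount=1.0):
    """sharpened = original + amount * (original - blurred)."""
    blurred = box_blur(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]

# A vertical edge: sharpening widens the contrast across it.
img = [[0, 0, 255, 255] for _ in range(4)]
sharp = unsharp_mask(img)
```

Values just inside the edge overshoot below 0 and above 255, which is exactly the added edge contrast the CNN benefits from.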

Keywords: facial expression recognition, image preprocessing, deep learning, CNN

Procedia PDF Downloads 126
4282 Performance Evaluation of an Efficient Asynchronous Protocol for WDM Ring MANs

Authors: Baziana Peristera

Abstract:

The idea of asynchronous transmission in wavelength division multiplexing (WDM) ring MANs is studied in this paper. In particular, we present an efficient access technique to coordinate the collision-free transmission of variable-size IP traffic in WDM ring core networks. Each node is equipped with a tunable transmitter and a tunable receiver. In this way, all the wavelengths are exploited for both transmission and reception. In order to evaluate the performance measures of average throughput, queuing delay and packet dropping probability at the buffers, a simulation model that assumes symmetric access rights among the nodes is developed based on Poisson statistics. Extensive numerical results show that the proposed protocol achieves, apart from high bandwidth exploitation over a wide range of offered loads, fairness of queuing delay and dropping events among the different packet-size categories.
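A drastically simplified, slotted version of such a simulation model can convey the shape of the experiment: random arrivals at each node, at most one packet served per wavelength per slot (hence collision-free), and throughput measured over many slots. The slotted Bernoulli arrivals, parameters, and longest-queue-first service rule are simplifying assumptions of this sketch, not the paper's protocol:

```python
import random

def simulate(n_nodes=8, n_wavelengths=4, arrival_rate=0.3, slots=2000, seed=1):
    """Slotted toy model of a WDM ring: per slot each node may generate
    a packet (Bernoulli arrivals standing in for Poisson traffic); up to
    one packet per wavelength is served per slot, so transmissions are
    collision-free. Returns packets served per slot (throughput)."""
    rng = random.Random(seed)
    queues = [0] * n_nodes
    served = 0
    for _ in range(slots):
        for i in range(n_nodes):
            if rng.random() < arrival_rate:
                queues[i] += 1
        # serve the most backlogged nodes first, one per wavelength
        for i in sorted(range(n_nodes), key=lambda j: -queues[j])[:n_wavelengths]:
            if queues[i] > 0:
                queues[i] -= 1
                served += 1
    return served / slots

throughput = simulate()
```

With an offered load of 8 x 0.3 = 2.4 packets per slot against a capacity of 4 wavelengths, the measured throughput settles near the offered load, as expected for a stable system.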

Keywords: asynchronous transmission, collision avoidance, wavelength division multiplexing, WDM

Procedia PDF Downloads 363
4281 Developing a Cybernetic Model of Interdepartmental Logistic Interactions in SME

Authors: Jonas Mayer, Kai-Frederic Seitz, Thorben Kuprat

Abstract:

In today’s competitive environment, production logistic objectives such as ‘delivery reliability’ and ‘delivery time’, and distribution logistic objectives such as ‘service level’ and ‘delivery delay’, are attributed great importance. Especially for small and mid-sized enterprises (SME), attaining these objectives poses a key challenge. Within this context, one of the difficulties is that interactions between departments within the enterprise, and their specific objectives, are insufficiently taken into account and aligned. Interdepartmental interdependencies, along with contradicting targets set within the different departments, result in enterprises having sub-optimal logistic performance capability. This paper presents a research project which will systematically describe the interactions between departments and convert them into a quantifiable form.

Keywords: department-specific actuating and control variables, interdepartmental interactions, cybernetic model, logistic objectives

Procedia PDF Downloads 357
4280 Bioincision of Gmelina Arborea Roxb. Heartwood with Inonotus Dryophilus (Berk.) Murr. for Improved Chemical Uptake and Penetration

Authors: A. O. Adenaiya, S. F. Curling, O. Y. Ogunsanwo, G. A. Ormondroyd

Abstract:

Treatment of wood with chemicals in order to prolong its service life may prove difficult in some refractory wood species. This impermeability is usually due to biochemical changes which occur during heartwood formation. Bioincision, a short-term, controlled microbial decomposition of wood, is one of the promising approaches capable of improving the amenability of refractory wood to chemical treatments. Gmelina arborea, a mainstay timber species in Nigeria, has impermeable heartwood due to the excessive tyloses which occlude its vessels. Therefore, the chemical uptake and penetration in Gmelina arborea heartwood bioincised with the fungus Inonotus dryophilus was investigated. Five mature Gmelina arborea trees were harvested at the departmental plantation in Ajibode, Ibadan, Nigeria, and a bolt of 300 cm was obtained from the basal portion of each tree. The heartwood portion of the bolts was extracted and converted into samples of 20 mm x 20 mm x 60 mm, which were subsequently conditioned (20 °C at 65% relative humidity). Twenty wood samples each were bioincised with the white-rot fungus Inonotus dryophilus (ID, 999) for 3, 5, 7 and 9 weeks using a standard procedure, while a set of sterile control samples was prepared. Ten of each bioincised and control sample were pressure-treated with 5% Tanalith preservative, while the other ten of each were pressure-treated with a liquid dye for easy traceability of the chemical in the wood, both using a full-cell treatment process. The bioincised and control samples were evaluated for weight loss before chemical treatment (WL, %), preservative absorption (PA, kg/m3), preservative retention (PR, kg/m3), axial absorption (AA, kg/m3), lateral absorption (LA, kg/m3), axial penetration depth (APD, mm), radial penetration depth (RPD, mm), and tangential penetration depth (TPD, mm). The data obtained were analyzed using ANOVA at α = 0.05.
Results show that the weight loss was least in the samples bioincised for 3 weeks (0.09%) and highest after 7 weeks of bioincision (0.48%). The samples bioincised for 3 weeks had the least PA (106.72 kg/m3) and PR (5.87 kg/m3), while the highest PA (134.9 kg/m3) and PR (7.42 kg/m3) were observed after 7 weeks of bioincision. The AA ranged from 27.28 kg/m3 (3 weeks) to 67.05 kg/m3 (5 weeks), while the LA was least after 5 weeks of incubation (28.1 kg/m3) and highest after 9 weeks (71.74 kg/m3). A significantly lower APD was observed in the control samples (6.97 mm) than in the samples bioincised for 9 weeks (19.22 mm). The RPD increased from 0.08 mm (control samples) to 3.48 mm (5 weeks), while the TPD ranged from 0.38 mm (control samples) to 0.63 mm (9 weeks), implying that liquid flow in the wood was predominantly through the axial pathway. Bioincising G. arborea heartwood with I. dryophilus for 9 weeks is capable of enhancing chemical uptake and deeper penetration of chemicals in the wood through the degradation of the occluding vessel tyloses, accompanied by only minimal degradation of the polymeric wood constituents.

Keywords: Bioincision, chemical uptake, penetration depth, refractory wood, tyloses

Procedia PDF Downloads 88
4279 An Application of Quantile Regression to Large-Scale Disaster Research

Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede

Abstract:

Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed, as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is the post-traumatic stress disorder (PTSD) symptom severity score assessed via clinician interviews (CAPS). For a detailed understanding of the response to the disaster and its aftermath, we adapt quantile regression methodology, with particular focus on predictors of extreme distress and resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios, specifying the unconditional quantiles based on (1) clinically meaningful CAPS cutoff values and (2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that WTC exposures, namely seeing bodies and feeling that one's life was in danger during rescue/recovery work, were associated with very high PTSD symptoms. A similar effect was apparent in individuals with a prior psychiatric history. Differential effects were also present for age and education level.
Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
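The intuition behind quantile regression, that the check (pinball) loss is minimised at the chosen quantile rather than at the mean, can be shown on a skewed toy sample like the symptom-score distributions described above. The data values are invented for illustration; a real analysis would fit covariates with a quantile-regression package rather than scan candidates:

```python
def pinball_loss(q_value, data, tau):
    """Check (pinball) loss: minimised, over q_value, at the
    tau-quantile of the data."""
    return sum(tau * (x - q_value) if x >= q_value
               else (1 - tau) * (q_value - x)
               for x in data)

# Skewed toy "symptom scores": mostly low values, a heavy right tail.
data = [1, 2, 2, 3, 3, 3, 4, 5, 20, 40]
# Scan candidate values; the minimiser tracks the empirical quantile.
best_median = min(data, key=lambda q: pinball_loss(q, data, 0.5))
best_p90 = min(data, key=lambda q: pinball_loss(q, data, 0.9))
```

The tau = 0.9 fit lands in the heavy tail while the median stays low, which is why quantile models can characterise the small, highly distressed subgroup that a population-averaged model smooths over.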

Keywords: disaster workers, post traumatic stress, PTSD, quantile regression

Procedia PDF Downloads 271
4278 Mobile Learning in Teacher Education: A Review in Context of Developing Countries

Authors: Mehwish Raza

Abstract:

Mobile learning (m-learning) offers unique affordances to learners, setting them free of limitations posed by time and geographic space, and thus serving as an affordable means of convenient distance learning. There is a plethora of research available on mobile learning projects planned, implemented and evaluated across disciplines in the context of developed countries; however, the potential of m-learning at different educational levels remains unexplored, with little evidence of research carried out in developing countries. Despite the favorable technical infrastructure offered by cellular networks and the boom in mobile subscriptions in the developing world, there is limited focus on utilizing m-learning for education and development purposes. The objective of this review is to unify findings from m-learning projects that have been implemented in developing countries such as Pakistan, Bangladesh, the Philippines, India, and Tanzania for teachers’ in-service training. The purpose is to draw out key characteristics of mobile learning that would help future researchers inform conceptualizations of mobile learning for developing countries.

Keywords: design model, developing countries, key characteristics, mobile learning

Procedia PDF Downloads 427
4277 Nonlinear Pollution Modelling for Polymeric Outdoor Insulator

Authors: Rahisham Abd Rahman

Abstract:

In this paper, a nonlinear pollution model is proposed to compute the electric field distribution over a polymeric insulator surface under wet contaminated conditions. A 2D axisymmetric insulator geometry, energized at 11 kV, was developed and analysed using the Finite Element Method (FEM). A field-dependent conductivity with simplified assumptions was established to characterize the electrical properties of the pollution layer. Comparative field studies showed that simulating the dynamic pollution model results in a more realistic field profile, offering a better understanding of how the electric field behaves under wet polluted conditions.

Keywords: electric field distributions, pollution layer, dynamic model, polymeric outdoor insulators, finite element method (FEM)

Procedia PDF Downloads 385
4276 Social Aspects and Successfully Funding a Crowd-Funding Project: The Impact of Social Information

Authors: Peggy S. C. van Teunenbroek

Abstract:

Recently, philanthropic crowdfunding, the raising of external funding from a large audience via social networks or social media, emerged as a new funding instrument for the Dutch cultural sector. However, such philanthropic crowdfunding in the US and the Netherlands is less successful than any other form of crowdfunding. We argue that social aspects are an important stimulus in philanthropic crowdfunding, since previous research has shown that crowdfunding is stimulated by something beyond financial merits. Put simply, crowdfunding seems to be a socially motivated activity. In this paper, we focus on the effect of social information, described as information about the donation behavior of previous donors. Using a classroom experiment, we demonstrate a positive effect of social information on donation behavior in crowdfunding campaigns. Our study extends previous research by showing who is affected by social information and why, and highlights how social information can be used to stimulate individuals to donate more to crowdfunding projects.

Keywords: online donation behavior, philanthropic crowdfunding, social information, social influence, social motivation

Procedia PDF Downloads 385
4275 Distributed Coverage Control by Robot Networks in Unknown Environments Using a Modified EM Algorithm

Authors: Mohammadhosein Hasanbeig, Lacra Pavel

Abstract:

In this paper, we study a distributed control algorithm for the problem of unknown-area coverage by a network of robots. The coverage objective is to locate a set of targets in the area while minimizing the robots’ energy consumption. The robots have no prior knowledge about the locations or the number of targets in the area. One efficient approach to compensating for the robots’ lack of knowledge is to incorporate an auxiliary learning algorithm into the control scheme. A learning algorithm allows the robots to explore and study the unknown environment and eventually overcome their lack of knowledge. The control algorithm itself is modeled based on game theory, where the robots use their collective information to play a non-cooperative potential game. The algorithm is tested via simulations to verify its performance and adaptability.
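A potential-game formulation of coverage can be sketched with sequential best responses on a shared potential (here simply the number of distinct targets covered). This toy assumes the targets are known to the evaluator and omits the energy term and the learning component of the paper's scheme; the grid, target set, and update rule are illustrative:

```python
import random

def potential(assign, targets):
    """Global objective: number of distinct targets covered."""
    return len(set(assign) & set(targets))

def best_response_coverage(n_robots, cells, targets, rounds=20, seed=0):
    """Robots repeatedly best-respond: each picks the cell that most
    improves the shared potential given the others' current choices.
    The potential never decreases, so play converges to an equilibrium."""
    rng = random.Random(seed)
    assign = [rng.choice(cells) for _ in range(n_robots)]
    for _ in range(rounds):
        for i in range(n_robots):
            def value(c):
                trial = assign[:i] + [c] + assign[i + 1:]
                return potential(trial, targets)
            assign[i] = max(cells, key=value)
    return assign

cover = best_response_coverage(3, cells=list(range(6)), targets=[1, 4, 5])
```

Because every unilateral move weakly increases the common potential, this is an exact potential game, and with three robots and three targets the dynamics settle on full coverage.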

Keywords: distributed control, game theory, multi-agent learning, reinforcement learning

Procedia PDF Downloads 439
4274 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as AutoRegressive Integrated Moving Average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data; however, they often struggle to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
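The two parallel 2D views described above, a frequency-domain spectrogram and a time-domain derivative heatmap, can be sketched with a plain sliding-window DFT and first/second differences. This is a minimal illustration of the representation only, not the paper's implementation; the window sizes and the test signal are arbitrary:

```python
import cmath
import math

def spectrogram(series, frame=16, hop=8):
    """Frequency view: magnitude DFT of sliding windows, giving a
    (n_frames x n_freq_bins) 2D representation of periodicity."""
    out = []
    for i in range(0, len(series) - frame + 1, hop):
        win = series[i:i + frame]
        out.append([abs(sum(win[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                            for n in range(frame)))
                    for k in range(frame // 2 + 1)])
    return out

def derivative_heatmap(series):
    """Time view: first and second differences stacked as two rows,
    highlighting sharp fluctuations and turning points."""
    d1 = [series[i + 1] - series[i] for i in range(len(series) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    return [d1[:len(d2)], d2]

# A pure period-16 sine: its energy should land in frequency bin 1.
x = [math.sin(2 * math.pi * t / 16) for t in range(128)]
spec = spectrogram(x)
deriv = derivative_heatmap(x)
```

Once the series is in these 2D forms, standard 2D convolutional backbones can be applied, which is the core idea the abstract describes.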

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 25
4273 A Proposal of Local Indentation Techniques for Mechanical Property Evaluation

Authors: G. B. Lim, C. H. Jeon, K. H. Jung

Abstract:

Light metal alloys are widely used as materials for transportation equipment such as automobiles and aircraft. Among the light metal alloys, magnesium is the lightest structural material, with superior specific strength and many attractive physical and mechanical properties. However, the mechanical properties of magnesium alloys at warm temperatures are difficult to obtain. The aim of the present work was to establish an analytical relation between mechanical properties and the plastic flow induced by local indentation. An experimental investigation of the local strain distribution was carried out using specially designed local indentation equipment in conjunction with ARAMIS, based on the digital image correlation method.

Keywords: indentation, magnesium, mechanical property, lightweight material, ARAMIS

Procedia PDF Downloads 467
4272 Investigation of the Possible Correlation of Earthquakes with a Red Tide Occurrence in the Persian Gulf and Oman Sea

Authors: Hadis Hosseinzadehnaseri

Abstract:

A red tide is a kind of algal bloom that causes problems of various kinds and scales for human life and the environment, and it has therefore become one of the serious global concerns in oceanography over the past few decades. This phenomenon has affected Iran's waters, especially the Persian Gulf, in recent years. Collecting data on this phenomenon and comparing them across different parts of the world is a practical way to study and control it. The factors that promote this phenomenon either increase the nutrients required by the algae or provide an environment favorable for blooming. In this study, we examined the possible relation between earthquakes and harmful algal blooms in the waters of the Persian Gulf by comparing earthquake data with recorded red tides. On the one hand, earthquakes can cause changes in seawater temperature, which helps create a suitable environment; on the other hand, they can increase water nutrients and their transport from the seabed, so they may play a principal role in the development of red tides. Comparing the spatio-temporal distribution maps of earthquakes and deadly red tides in the Persian Gulf and the Oman Sea supports this hypothesis, since there is a meaningful relation between the two distributions. Comparing the number of earthquakes around the world with the number of red tides in many parts of the world also indicates a correlation between the two issues. Given the numerous earthquakes of recent years, especially in the south of the country, this should be considered a warning of the possible re-occurrence of a critical, large-scale red tide: in 2008, the number of recorded earthquakes was higher than in nearby years.
In that year, the red tide phenomenon in the Persian Gulf covered about 140,000 square kilometers as well as the entire Oman Sea, persisting in the area for 10 months, which is considered a record among algal blooms worldwide. In this paper, we obtain a logical and reasonable relation between earthquake frequency and the occurrence of this phenomenon by compiling earthquake statistics for southern Iran from 2000 to the end of the first half of 2013, collecting statistics on red tide occurrences in the region, and examining similar data from different parts of the world. As shown in Figure 1, a survey of the earthquake data shows that earthquakes in southern Iran peak first in the fourth month of the Gregorian calendar, April, coinciding with Ordibehesht and Khordad in the Persian calendar, and then in the tenth month, October, coinciding with Aban and Azar in the Persian calendar.

Keywords: red tide, earthquake, Persian Gulf, harmful algal bloom

Procedia PDF Downloads 483
4271 Stress Analysis of Turbine Blades of Turbocharger Using Structural Steel

Authors: Roman Kalvin, Anam Nadeem, Saba Arif

Abstract:

A turbocharger is a device driven by a turbine that increases the efficiency and power output of an engine by forcing additional air into the combustion chamber. This study focused on the stress distribution on the turbine blades and on the total deformation that may occur during operation, carrying out a static structural analysis of the turbine blades together with the turbocharger. Structural steel was selected as the material for the turbocharger. The assembly of the turbocharger and turbine blades was designed in PRO ENGINEER, and the structural analysis was performed using ANSYS. This research concluded that using structural steel improves engine efficiency and that, by increasing the number of turbine blades, more waste heat from the combustion chamber is emitted.

Keywords: turbocharger, turbine blades, structural steel, ANSYS

Procedia PDF Downloads 226
4270 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads and environmental and climatic conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, a pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation, recommending a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANNs), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and their ability to deal with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by ANNs, which have been found to be an appropriate tool for predicting the various pavement performance indices against different factors as well. The objective of the present study is therefore to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into a PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
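The input encoding described above (11 distress types at 3 severity levels = 33 percentage-area features) can be sketched minimally as follows. The feature names, ordering, and the example values are illustrative assumptions, not the authors' exact scheme or data.

```python
import numpy as np

# Hypothetical feature layout: 11 distress types x 3 severity levels.
DISTRESS_TYPES = [
    "alligator_cracking", "swelling", "rutting", "block_cracking",
    "long_transverse_cracking", "edge_cracking", "shoving",
    "raveling", "potholes", "patching", "lane_drop_off",
]
SEVERITIES = ["low", "medium", "high"]

def encode_sample(damage):
    """Flatten a {(distress_type, severity): area_percent} dict into the
    33-element feature vector fed to the ANN (illustrative encoding)."""
    x = np.zeros(len(DISTRESS_TYPES) * len(SEVERITIES))
    for i, d in enumerate(DISTRESS_TYPES):
        for j, s in enumerate(SEVERITIES):
            x[i * len(SEVERITIES) + j] = damage.get((d, s), 0.0)
    return x

# A deteriorated section: severe alligator cracking plus medium potholes.
sample = encode_sample({
    ("alligator_cracking", "high"): 12.5,
    ("potholes", "medium"): 3.0,
})
print(sample.shape)  # (33,)
```

A regression network trained on such vectors would then map them to a single PCI output clipped to the 0-100 range.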

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 122
4269 Platform Virtual for Joint Amplitude Measurement Based in MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial- and magnetic-sensor-based technological platform intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical considerations. The particularities of our platform offer possibilities of high social impact by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in developing countries where, unlike in developed countries, specialists are scarce and high technology is unavailable or non-existent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to deliver a diagnosis service over the web or other available communication networks. The amplitude information generated by the sensors is transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system showed a good fit to the respective reference system. Analyzing the robotic-arm results (estimation errors RMSE 1 = 2.12° and RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during reversals of direction, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors exhibit a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of direction in motion.
A damped response of the virtual platform can be seen in other images, where error analysis shows that amplitude is underestimated at maximum amplitude and overestimated at minimum amplitude. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility trade-offs optimized. These characteristics, achieved by efficiently using state-of-the-art, accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis, and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but basic communication networks are.
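The estimation errors reported above are root-mean-square errors between the platform's joint-angle estimates and the reference system. As a small illustration, RMSE over a sequence of angle samples can be computed as:

```python
import numpy as np

def rmse(estimate, reference):
    """Root-mean-square error between estimated and reference
    joint-angle samples (e.g. in degrees)."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))
```

Applied to the robotic-arm trials, values on the order of 2° (as reported) indicate a good fit to the reference trajectory.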

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 245
4268 Defect Profile Simulation of Oxygen Implantation into Si and GaAs

Authors: N. Dahbi, R. B. Taleb

Abstract:

This study concerns the ion implantation of oxygen into two semiconductors, Si and GaAs, simulated using the SRIM tool. The goal of this study is to compare the effect of implantation energy on the distribution of implanted ions in the two targets and to examine the different processes resulting from the interaction between the oxygen ions and the target atoms (Si, GaAs). The SRIM simulation results indicate that the implanted ions follow an approximately Gaussian depth profile; oxygen produces more vacancies and implants deeper in Si than in GaAs. Also, most of the energy loss is due to ionization and phonon production, with vacancy production amounting to only a few percent of the total energy.
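A Gaussian depth profile of this kind is commonly parameterized by the projected range Rp and the range straggling ΔRp. A minimal sketch follows; the parameter values in the test are illustrative and unit-schematic, not results from the SRIM runs described here.

```python
import numpy as np

def implant_profile(x_nm, dose, rp_nm, d_rp_nm):
    """Gaussian approximation of an implanted-ion depth profile:
    N(x) = dose / (sqrt(2*pi) * dRp) * exp(-(x - Rp)^2 / (2 * dRp^2)),
    where Rp is the projected range and dRp the range straggling.
    Units are left schematic for illustration."""
    return (dose / (np.sqrt(2.0 * np.pi) * d_rp_nm)
            ) * np.exp(-((x_nm - rp_nm) ** 2) / (2.0 * d_rp_nm ** 2))
```

The profile peaks at the projected range, and integrating it over depth recovers the implanted dose, which is why a deeper Rp in Si (as found here) shifts the whole distribution inward.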

Keywords: defect profile, GaAs, ion implantation, SRIM, phonon production, vacancies

Procedia PDF Downloads 163
4267 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
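To make the DCT side of such a scheme concrete, here is a hedged sketch of embedding one watermark bit in a mid-frequency DCT coefficient of an 8x8 image block. The quantization-parity embedding, the coefficient position, and the strength are illustrative assumptions, not the authors' exact DCT-DWT method.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def embed_bit(block, bit, strength=8.0, pos=(3, 4)):
    """Embed one bit by forcing the parity of a quantized
    mid-frequency DCT coefficient (illustrative scheme)."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T
    q = np.round(coeffs[pos] / strength)
    if int(q) % 2 != bit:        # flip parity to carry the bit
        q += 1
    coeffs[pos] = q * strength
    return c.T @ coeffs @ c      # inverse DCT

def extract_bit(block, strength=8.0, pos=(3, 4)):
    """Recover the bit from the coefficient's quantized parity."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T
    return int(np.round(coeffs[pos] / strength)) % 2
```

A full DCT-DWT scheme would apply a wavelet decomposition first and embed in selected sub-bands, trading imperceptibility against robustness to compression and filtering attacks.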

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 410
4266 Airborne Molecular Contamination in Clean Room Environment

Authors: T. Rajamäki

Abstract:

In a clean room environment, molecular contamination at very small concentrations can cause significant harm to components and processes. This is commonly referred to as airborne molecular contamination (AMC). There is a shortage of high-sensitivity continuous measurement data on the existence and behavior of several of these contaminants. Accordingly, in most cases the correlation between the concentration of harmful molecules and their effect on processes is not known. In addition, the formation and distribution of the contaminating molecules are unclear. In this work, sensitive optical techniques are applied in clean room facilities to investigate the concentrations, formation mechanisms, and effects of contaminating molecules. Special emphasis is placed on the reactive acid and base gases ammonia (NH3) and hydrogen fluoride (HF), which are key chemicals in several operations taking place in clean room processes.

Keywords: AMC, clean room, concentration, reactive gas

Procedia PDF Downloads 264
4265 Multi-Scale Modeling of Ti-6Al-4V Mechanical Behavior: Size, Dispersion and Crystallographic Texture of Grains Effects

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vidal, Farhad Rezai-Aria, Christine Boher

Abstract:

Ti-6Al-4V titanium alloy is one of the most widely used materials in the aeronautical and aerospace industries. Because of its high specific strength and good fatigue and corrosion resistance, this alloy is well suited to moderate-temperature applications. At room temperature, the mechanical behavior of Ti-6Al-4V is generally controlled by the behavior of the alpha phase (the beta phase fraction is less than 8%). The plastic strain of this phase, based notably on crystallographic slip, can be hindered by various obstacles and mechanisms (crystal lattice friction, sessile dislocations, strengthening by solute atoms, grain boundaries, etc.). The grain characteristics of the alpha phase (morphology and texture) and the nature of its crystal lattice (hexagonal close-packed) make the plastic strain heterogeneous, discontinuous, and anisotropic at the local scale. The aim of this work is to develop a multi-scale model of Ti-6Al-4V mechanical behavior using a crystal plasticity approach; this multi-scale model is then used to investigate the effects of grain size, grain-size dispersion, crystallographic texture, and slip-system activation on Ti-6Al-4V mechanical behavior under monotonic quasi-static loading. Nine representative elementary volumes (REVs) are built to take into account the physical elements mentioned above (grain size, dispersion, and crystallographic texture); then the boundary conditions of a tension test are applied. Finally, the simulated mechanical behavior of Ti-6Al-4V and a study of slip-system activation in the alpha phase are reported. The results show that the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip-system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and hence an anisotropic macroscopic mechanical behavior of the modeled Ti-6Al-4V alloy.
Grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as grain size decreases, the yield strength increases. Finally, the grain distribution, which characterizes the morphology (homogeneous or heterogeneous), makes the deformation fields quite heterogeneous, because crystallographic slip is easier in large grains than in small ones; this generates a localization of plastic deformation in certain areas and a concentration of stresses in others.
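The grain-size dependence of yield stress described above is the classical Hall-Petch relation. A minimal sketch follows; the coefficients in the test are illustrative placeholders, not values fitted to the REV simulations of this work.

```python
import math

def hall_petch(sigma0_mpa, k_mpa_sqrt_um, grain_size_um):
    """Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d).
    Yield strength (MPa) rises as grain size d (micrometers) decreases.
    sigma_0 is the lattice friction stress, k the strengthening coefficient;
    both are material-dependent fitting parameters."""
    return sigma0_mpa + k_mpa_sqrt_um / math.sqrt(grain_size_um)
```

In a crystal-plasticity REV study, this macroscopic trend emerges from grain boundaries impeding slip transmission between neighboring grains.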

Keywords: multi-scale modeling, Ti-6Al-4V alloy, crystal plasticity, grains size, crystallographic texture

Procedia PDF Downloads 144
4264 Phytoremediation Potential of Tomato for Cd and Cr Removal from Polluted Soils

Authors: Jahanshah Saleh, Hossein Ghasemi, Ali Shahriari, Faezeh Alizadeh, Yaaghoob Hosseini

Abstract:

Cadmium and chromium are toxic to most organisms, and different mechanisms have been developed to cope with the toxic effects of these heavy metals. We studied the uptake and distribution of cadmium and chromium in different organs of tomato (Lycopersicon esculentum L.) plants in nine heavy-metal-polluted soils in the west of Hormozgan province, Iran. The accumulation of chromium was in increasing pattern of fruit peel

Keywords: cadmium, chromium, phytoextraction, phytostabilization, tomato

Procedia PDF Downloads 329