Search results for: residual LSTM
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 816

666 Experimental Investigation on Effect of Different Heat Treatments on Phase Transformation and Superelasticity of NiTi Alloy

Authors: Erfan Asghari Fesaghandis, Reza Ghaffari Adli, Abbas Kianvash, Hossein Aghajani, Homa Homaie

Abstract:

NiTi alloys exhibit excellent superelasticity, shape memory, high strength, and biocompatibility. Heat treatment is carried out to improve their mechanical properties, above all their superelastic behaviour. In this paper, two heat treatment methods were applied: (1) solid solution and (2) aging, and the effect of each treatment at a constant time was investigated. Five samples were prepared to study the structure and optimize the mechanical properties under different times and temperatures. Tensile tests were carried out to measure the upper plateau stress, lower plateau stress, and residual strain. The samples were aged at two different temperatures to compare the effect of aging temperature. The sample aged at 500 °C has a larger crystallite size and a lower Ni content, giving it poorer pseudoelastic behaviour than the other aged sample. The sample aged at 460 °C showed remarkable superelastic properties: its upper plateau stress is 580 MPa with the lowest residual strain (0.17%), while the other samples exhibited higher residual strains. X-ray diffraction was used to identify the phases produced.

Keywords: heat treatment, phase transformation, superelasticity, NiTi alloy

Procedia PDF Downloads 99
665 The Effect of Restaurant Residuals on Performance of Japanese Quail

Authors: A. A. Saki, Y. Karimi, H. J. Najafabadi, P. Zamani, Z. Mostafaie

Abstract:

Restaurant residuals are of interest for several reasons: competition between human and animal consumption of cereals, increasing environmental pollution, and the high cost of producing livestock products. Because restaurant residuals have high nutritional value (high protein and energy content), they can potentially replace part of poultry diets, especially for Japanese quail. The rising costs, pressures, and problems associated with waste disposal in modern industry reinforce the need to re-evaluate and utilize such waste as livestock and poultry feed. This study aimed to investigate the effects of different levels of restaurant residuals on the performance of 300 laying Japanese quails. The experiment comprised 5 treatments with 4 replicates of 15 quails each, from 10 to 18 weeks of age, in a completely randomized design (CRD). Treatment 1 was a basal diet of corn and soybean meal (without restaurant residuals), while treatments 2, 3, 4, and 5 contained the basal diet with 5, 10, 15, and 20% restaurant residuals, respectively. Restaurant residual level had no significant effect on body weight (BW), feed conversion ratio (FCR), egg production percentage (EP), or egg mass (EM) (P > 0.05). However, feed intake (FI) at 5% restaurant residuals was significantly higher than at 20% (P < 0.05). Egg weight (EW) was higher at 20% restaurant residuals than at 10% (P < 0.05). Yolk weight (YW) in the treatments containing 10 and 20% restaurant residuals was significantly higher than in the control (P < 0.05). Egg white weight (EWW) in the 20 and 5% treatments was significantly increased compared with 10% (P < 0.05). Furthermore, EW, egg weight to shell surface area, and egg surface area in the 20% treatment were significantly higher than in the control and the 10% treatment (P < 0.05). Overall, the results show that restaurant residuals at levels of 10 and 15% could replace part of the laying quail ration without any adverse effect.

Keywords: by-product, laying quail, performance, restaurant residuals

Procedia PDF Downloads 138
664 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. The proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, using a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This era, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing the predictive model. The algorithm integrates diverse data to construct a dynamic financial graph that reflects market intricacies. Daily opening, closing, high, and low prices are collected for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates are integrated into the model. The GCN component learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, the LSTM component is trained on sequences of the spatial-temporal representations discovered by the GCN, enriched with historical price and volume data, allowing it to capture and predict temporal market trends accurately. In a comprehensive evaluation across the stock market and cryptocurrency datasets, the GCN-LSTM model showed superior predictive accuracy and profitability compared with conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing predictive performance on directional market movements, the model achieved an accuracy rate of 78%, significantly outperforming the benchmark models' average accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. The study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. The findings promise to inform investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
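To make the GCN side of the abstract concrete, the following is a minimal sketch of a single graph-convolution step in the common Kipf-Welling form, H' = D^(-1/2)(A + I)D^(-1/2) H W. The tiny market graph, node features, and weight matrix are illustrative inventions, not the paper's actual data or architecture; in the described model the per-node outputs of such layers would be fed as a sequence into the LSTM.

```python
# Hedged sketch: one GCN propagation step over a toy "market graph".
# The graph, features, and weights are illustrative, not the paper's data.
import math

def gcn_layer(adj, feats, weight):
    """H' = relu(D^-1/2 (A + I) D^-1/2 H W) for a small dense graph."""
    n = len(adj)
    # add self-loops: A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # symmetric normalization D^-1/2 (A + I) D^-1/2
    norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # aggregate neighbor features
    agg = [[sum(norm[i][k] * feats[k][f] for k in range(n))
            for f in range(len(feats[0]))] for i in range(n)]
    # linear transform + ReLU
    return [[max(0.0, sum(agg[i][f] * weight[f][o] for f in range(len(weight))))
             for o in range(len(weight[0]))] for i in range(n)]

# 3 assets; the single edge encodes a hypothetical co-movement (asset 0 <-> 1)
adj = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # e.g. return/volume features
weight = [[1.0, 0.0], [0.0, 1.0]]             # identity, for readability
h = gcn_layer(adj, feats, weight)
```

Connected nodes end up averaging each other's features, while the isolated node keeps its own; stacking such layers is what lets the model propagate influence along co-movement edges.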

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 16
663 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP

Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis

Abstract:

The escalating prevalence of mental health issues such as depression and suicidal ideation is a matter of significant global concern. A variety of factors, such as life events, social isolation, and preexisting physiological or psychological conditions, can instigate or exacerbate these conditions. Traditional approaches to diagnosing depression take considerable time and require skilled practitioners, which underscores the need for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs natural language processing and machine learning methodologies, using the NLTK toolkit for dataset preprocessing and a Long Short-Term Memory (LSTM) model for classification. PsyVBot diagnoses depression with 94% accuracy through the analysis of user input, making it a useful resource for individuals, particularly students, who face challenges to their psychological well-being. The LSTM model comprises three layers: an embedding layer, an LSTM layer, and a dense layer. This layering enables a precise examination of the linguistic patterns associated with depression, so PsyVBot can assess an individual's level of depression by identifying linguistic and contextual cues. The model is trained on a dataset sourced from the subreddit r/SuicideWatch; the diversity of this data supports accurate and sensitive identification of symptoms linked with depression. Beyond its diagnostic capabilities, PsyVBot enhances the user experience with audio outputs, enabling more engaging and interactive conversations, and offers individuals a convenient, confidential, and user-friendly interface for assessing mental health challenges. In developing PsyVBot, user confidentiality and ethical principles are of paramount importance: diligent efforts are made to adhere to ethical standards, safeguarding the confidentiality and security of user information. Moreover, the chatbot fosters a supportive and compassionate atmosphere that promotes psychological welfare. In brief, PsyVBot is an automated conversational agent that uses an LSTM model to assess the level of depression from user input. Its demonstrated 94% accuracy is a promising indication of the potential of natural language processing and machine learning in tackling mental health challenges, and its reliability is further supported by the Reddit dataset and the use of the Natural Language Toolkit (NLTK) for preprocessing. PsyVBot thus represents a user-centric solution that furnishes an accessible and confidential medium for seeking assistance with the pervasive issues of depression and suicidal ideation.
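Before text can reach an embedding+LSTM classifier like the one described, posts must be tokenized, indexed against a vocabulary, and padded to a fixed length. The sketch below illustrates that preprocessing step in plain Python; the tokenizer, vocabulary scheme, and padding length are illustrative choices, not the paper's actual configuration (the authors used NLTK for this stage).

```python
# Hedged sketch of the preprocessing an embedding+LSTM classifier needs:
# tokenize posts, build a vocabulary, and right-pad index sequences.
# All choices here (regex tokenizer, maxlen) are illustrative.
import re

def build_vocab(texts, min_count=1):
    counts = {}
    for t in texts:
        for tok in re.findall(r"[a-z']+", t.lower()):
            counts[tok] = counts.get(tok, 0) + 1
    # index 0 is reserved for padding, 1 for out-of-vocabulary tokens
    vocab = {"<pad>": 0, "<oov>": 1}
    for tok in sorted(counts):
        if counts[tok] >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab, maxlen):
    ids = [vocab.get(tok, 1) for tok in re.findall(r"[a-z']+", text.lower())]
    ids = ids[:maxlen]
    return ids + [0] * (maxlen - len(ids))  # right-pad with <pad>

posts = ["I feel hopeless today", "Feeling fine today"]  # toy inputs
vocab = build_vocab(posts)
x = [encode(p, vocab, maxlen=6) for p in posts]
```

The resulting fixed-length integer sequences are exactly the shape an embedding layer consumes, one row per post.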

Keywords: chatbot, depression diagnosis, LSTM model, natural language processing

Procedia PDF Downloads 33
662 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces

Authors: Shweta Singh, Sudaman Katti

Abstract:

The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research instead uses a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders that use self-attention and operate on a memory are employed. In this work, results were obtained for various 3D visual and non-visual reinforcement learning tasks designed in the Unity engine. Convolutional neural networks, specifically the Nature CNN architecture, are used for input processing in the visual tasks, and a comparison with a standard long short-term memory (LSTM) architecture is performed for both the CNN-based visual tasks and the non-visual tasks based on coordinate inputs. The transformer architecture is combined with proximal policy optimization, a technique used widely in reinforcement learning for stability and better policy updates during training, especially for the continuous action spaces used in this work. Certain tasks in this paper are long-horizon tasks that run for a longer duration and require extensive use of memory-based functionality, such as storing experiences and choosing appropriate actions based on recall. The transformer, which uses memory and self-attention in an encoder-decoder configuration, proved to perform better than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in cognitive robotics and reinforcement learning.
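The core operation the abstract's transformer relies on is scaled dot-product self-attention. A minimal single-head sketch is shown below with tiny illustrative vectors; the real model would add learned Q/K/V projections, multiple heads, and the encoder-decoder memory described above.

```python
# Hedged sketch: scaled dot-product self-attention for one head,
# attn(Q, K, V) = softmax(Q K^T / sqrt(d)) V. Inputs are illustrative.
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(q, k, v):
    d = len(q[0])
    out = []
    for qi in q:
        # similarity of this query with every key, scaled by sqrt(d)
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        w = softmax(scores)
        # output is a convex combination of the value vectors
        out.append([sum(wi * vj[f] for wi, vj in zip(w, v))
                    for f in range(len(v[0]))])
    return out

# 2 timesteps, d = 2; Q = K = V, as in basic self-attention over a memory
x = [[1.0, 0.0], [0.0, 1.0]]
y = self_attention(x, x, x)
```

Each output row is a weighted mix of all timesteps, which is what lets the agent recall earlier experiences without recurrence.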

Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity

Procedia PDF Downloads 95
661 Residual Evaluation by Thresholding and Neuro-Fuzzy System: Application to Actuator

Authors: Y. Kourd, D. Lefebvre, N. Guersi

Abstract:

The monitoring of industrial processes is required to ensure the operating condition of industrial systems through automatic detection and isolation of faults. In this paper, we propose a fault diagnosis method based on a neuro-fuzzy technique and the choice of a threshold, validated on the DAMADICS electro-pneumatic actuator benchmark test bench. In the first phase of the method, we construct a model that represents the normal state of the system for fault detection, analyze the generated residuals, and choose thresholds to build a signature table; these signatures identify the groups of non-detectable faults. In the second phase, we build faulty models to locate the faults in the system that were not isolated in the first phase.
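The detection phase described above reduces to a simple computation: residuals are the differences between measured outputs and the normal-state model's predictions, and comparing each residual against its threshold yields a binary fault signature. The sketch below shows that step with invented numbers; the model outputs and thresholds are illustrative stand-ins for the neuro-fuzzy model and the thresholds chosen in the paper.

```python
# Hedged sketch of residual evaluation by thresholding.
# Measurements, model outputs, and thresholds are illustrative.
def fault_signature(measured, predicted, thresholds):
    """Binary signature: 1 where |measured - predicted| exceeds threshold."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    return [1 if abs(r) > t else 0 for r, t in zip(residuals, thresholds)]

measured   = [10.2, 5.0, 3.9]   # sensor readings
predicted  = [10.0, 5.1, 3.0]   # normal-state (e.g. neuro-fuzzy) model output
thresholds = [0.5, 0.5, 0.5]
sig = fault_signature(measured, predicted, thresholds)  # third channel trips
```

Rows of such signatures, one per fault scenario, form the signature table used to decide which faults are distinguishable and which remain non-detectable.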

Keywords: residuals analysis, threshold, neuro-fuzzy system, residual evaluation

Procedia PDF Downloads 416
660 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. The paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on these evaluation indices, a forecast horizon of t+3 days is chosen for predicting future daily water levels.
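The evaluation metrics named in the abstract have standard definitions, sketched below and applied to a few illustrative observed/forecast water levels (not the Nokoué Lake data). NSE compares the model's squared error against that of the observed mean, so 1 is a perfect forecast and values below 0 are worse than predicting the mean.

```python
# Hedged sketch of RMSE, MAE, and Nash-Sutcliffe Efficiency (NSE).
# The observed/simulated series are illustrative.
import math

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    """1 - (model squared error) / (squared error of the observed mean)."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

obs = [1.0, 1.2, 1.5, 1.3]   # illustrative daily water levels (m)
sim = [1.1, 1.2, 1.4, 1.3]   # illustrative forecasts
```

Computing these per forecast horizon (t+1, t+2, ...) and watching where they degrade is how a usable horizon such as t+3 days is selected.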

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 31
659 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country in northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions from regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter governing the length of the input sequence contributes most significantly to prediction performance, suggesting that input sequence length has a crucial impact on model accuracy. Moreover, catchment-scale analysis reveals distinct optimal sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment's characteristics, in line with the well-known "uniqueness of place" paradigm. In prior research, tuning the input sequence length of LSTMs has received limited attention in streamflow prediction: it was initially set to 365 days to capture a full annual water cycle, and later limited systematic tuning using grid search suggested 270 days instead. Despite the significance of this hyperparameter for hydrological predictions, however, most studies have not tuned it and have fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions with deep learning models.
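The hyperparameter under discussion is easy to picture: the input sequence length is the size of the sliding window of past timesteps that forms each LSTM training sample. The sketch below shows how the same record yields different training sets for different window lengths; the series and lengths are illustrative, not the Basque catchment data.

```python
# Hedged sketch: building (inputs, target) pairs from a streamflow record.
# seq_len is the "input sequence length" hyperparameter discussed above.
def make_windows(series, seq_len):
    """Each sample: seq_len past values -> the next value."""
    xs, ys = [], []
    for i in range(len(series) - seq_len):
        xs.append(series[i:i + seq_len])
        ys.append(series[i + seq_len])
    return xs, ys

flow = [3.0, 3.2, 4.1, 9.8, 7.5, 5.0, 4.2]   # illustrative hourly flows
x3, y3 = make_windows(flow, seq_len=3)        # 4 samples
x5, y5 = make_windows(flow, seq_len=5)        # only 2 samples from same record
```

Longer windows give the model more memory of antecedent conditions but fewer, more correlated samples, which is why the optimal length is catchment-dependent.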

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 12
658 A Comparative Study of Simple and Pre-polymerized Fe Coagulants for Surface Water Treatment

Authors: Petros Gkotsis, Giorgos Stratidis, Manassis Mitrakas, Anastasios Zouboulis

Abstract:

This study investigates the use of simple and pre-polymerized iron (Fe) reagents, compared with the commonly applied polyaluminum chloride (PACl) coagulant, for surface water treatment. The coagulants examined included ferric chloride (FeCl₃) and ferric sulfate (Fe₂(SO₄)₃) as well as the pre-polymerized Fe reagents polyferric sulfate (PFS) and polyferric chloride (PFCl). Coagulant efficiency was evaluated by the removal of natural organic matter (NOM) and suspended solids (SS), determined as the reduction of UV absorbance at 254 nm and of turbidity, respectively; the residual metal concentration (Fe and Al) was also measured. Coagulants were added at five concentrations (1, 2, 3, 4, and 5 mg/L) and three pH values (7.0, 7.3, and 7.6). Experiments were conducted in a jar-test device with two types of synthetic surface water (of high and low organic strength) consisting of humic acid (HA) and kaolin at different concentrations (5 mg/L and 50 mg/L). After the coagulation/flocculation process, the treated water was separated using 0.45 μm pore-size filters. Filtration was also conducted before the addition of coagulants in order to isolate the 'net' effect of the coagulation/flocculation process on the examined parameters (UV absorbance at 254 nm, turbidity, and residual metal concentration). Results showed that PACl achieved the highest removal of humics for both types of surface water. For the surface water of high organic strength (humic acid-kaolin, 50 mg/L-50 mg/L), the highest removal of humics was observed at the highest coagulant dosage of 5 mg/L and at pH 7. By contrast, turbidity was not significantly affected by the coagulant dosage; however, PACl decreased turbidity the most, especially with the surface water of high organic strength. As expected, applying coagulation/flocculation prior to filtration improved NOM removal but only slightly affected turbidity. Finally, the residual Fe concentration (0.01-0.1 mg/L) was much lower than the residual Al concentration (0.1-0.25 mg/L).

Keywords: coagulation/flocculation, iron and aluminum coagulants, metal salts, pre-polymerized coagulants, surface water treatment

Procedia PDF Downloads 116
657 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches to Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) take different approaches: RNNs are well suited to sequence modeling, whilst CNNs are suited to extracting n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine these strengths to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representations from the convolutional layer are fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
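The abstract does not spell out how a relation vector is formed from two sentence encodings, so the sketch below shows one plausible and widely used construction: concatenating the element-wise absolute difference and product of the two encodings. This is an illustrative assumption, not the authors' exact formula, and the toy "encodings" stand in for CNN or Bi-LSTM outputs.

```python
# Hedged sketch of a sentence-to-sentence relation vector:
# [|a - b| ; a * b], a common matching feature in RTE models.
# The premise/hypothesis encodings below are illustrative.
def relation_vector(a, b):
    diff = [abs(x - y) for x, y in zip(a, b)]   # where the encodings disagree
    prod = [x * y for x, y in zip(a, b)]        # where they jointly activate
    return diff + prod

premise    = [0.9, 0.1, 0.4]   # e.g. pooled Bi-LSTM encoding of the premise
hypothesis = [0.8, 0.1, 0.9]   # encoding of the hypothesis
rel = relation_vector(premise, hypothesis)
```

A vector like this can then weight the Bi-LSTM outputs, attention-style, before the entailment/contradiction/neutral classification layer.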

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 323
656 Calibration of Site Effect Parameters in the GMPM BSSA 14 for the Region of Spain

Authors: Gonzalez Carlos, Martinez Fransisco

Abstract:

Creating a seismic ground-motion prediction model that accounts for all regional variations and perfectly fits the observed response spectra is very complicated. To achieve statistically acceptable results, a sufficiently robust data set must be processed, and even when high efficiency is achieved, the resulting model works properly only in the region for which it was derived; when it is applied elsewhere, discrepancies appear due to parameters that have not been calibrated for the new region, such as the site effect. It is well known that impedance contrasts, along with other site-specific factors, strongly influence the local response. This work therefore uses the residual method to establish a regional calibration of the site-effect parameters of the global GMPM BSSA 14 for the region of Spain.
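The residual method mentioned above amounts, in its simplest form, to averaging the log-residuals between observed ground-motion amplitudes and the GMPM's predictions per site; a systematic nonzero mean indicates a site term that needs calibration. The sketch below illustrates only this averaging step with invented records, not the BSSA 14 functional form or the Spanish data.

```python
# Hedged sketch of the residual method's site-term step.
# Each record is (site, observed amplitude, GMPM-predicted amplitude);
# the numbers are illustrative.
import math

def site_terms(records):
    """Mean log-residual ln(obs/pred) per site."""
    sums, counts = {}, {}
    for site, obs, pred in records:
        r = math.log(obs / pred)          # log-residual for one record
        sums[site] = sums.get(site, 0.0) + r
        counts[site] = counts.get(site, 0) + 1
    return {s: sums[s] / counts[s] for s in sums}

records = [("A", 0.20, 0.10),   # site A: observed twice the prediction
           ("A", 0.40, 0.20),
           ("B", 0.10, 0.10)]   # site B: model already unbiased
terms = site_terms(records)
```

A site term of ln(2) at site A says the global model systematically under-predicts there; applying exp(term) as a correction factor is the calibration.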

Keywords: GMPM, seismic prediction equations, residual method, response spectra, impedance contrast

Procedia PDF Downloads 62
655 Direct and Residual Effects of Boron and Zinc on Growth and Nutrient Status of Rice and Wheat Crop

Authors: M. Saleem, M. Shahnawaz, A. W. Gandahi, S. M. Bhatti

Abstract:

Deficiencies of the micronutrients boron (B) and zinc (Zn) are widespread in areas under the rice-wheat cropping system, and optimum levels of these nutrients in the soil are necessary for healthy crop growth. Since rice and wheat are major staple foods for the world's populace, the yield and nutritional status of these crops directly affect human health and national economies. A field study was conducted to observe the direct and residual effects of boron and zinc on rice and wheat growth and grain nutrient status. Each plot received either B at 0, 1, 2, 3, or 4 kg B ha⁻¹ or Zn at 5, 10, 15, or 20 kg Zn ha⁻¹, or a combined application of 1 kg B with 5 kg Zn ha⁻¹ or 2 kg B with 10 kg Zn ha⁻¹. Colemanite ore was used as the source of B and zinc sulfate as the source of Zn. The second-season wheat crop was planted in the same plots after an interval of 30 days, during which the soil lay fallow. B and Zn application significantly enhanced plant height, number of tillers, grains panicle⁻¹, seed index, and yield of the rice crop, and reduced the number of empty grains panicle⁻¹, at all levels compared with the control. The highest yield (10.00 t/ha) was recorded at the rate of 2 kg B and 10 kg Zn ha⁻¹. B and Zn concentrations in grain and straw increased significantly, and B application also improved the nutritional status of rice, as the B, protein, and total carbohydrate contents of the grain increased. Analysis of soil samples collected after the rice harvest showed that B and Zn contents remained high in the colemanite- and zinc sulfate-treated plots. The residual B and Zn were also effective for the second-season wheat crop, significantly improving plant height, number of tillers, earhead length, 1000-grain weight, and the B and Zn contents of the grain. The highest wheat grain yield (4.23 t/ha) was recorded at the residual rates of 2 kg B and 10 kg Zn ha⁻¹. This study showed that a single application of B and Zn can increase crop yields for at least two consecutive seasons, and that the mineral colemanite can confidently be used as a B source for rice, because only a small portion of these nutrients is consumed by the first-season crop while the remainder stays in the soil and supports the second-season wheat crop. Consequently, there is no need to apply these micronutrients to the following crop when they have been applied to the previous one.

Keywords: residual boron, zinc, rice, wheat

Procedia PDF Downloads 123
654 Studies of Reduction Metal Impurity in Residual Melt by Czochralski Method

Authors: Jaemin Kim, Ilsun Pang, Yongrae Cho, Kwanghun Kim, Sungsun Baik

Abstract:

Reducing manufacturing cost is becoming increasingly important due to the oversupply of single-crystalline ingots in the recent solar market. Many companies are carrying out extensive research into growing more than one single-crystalline ingot per batch to reduce manufacturing cost. What most companies find difficult in this process, however, is the effect of rising impurity levels on the ingots: every fully grown ingot leaves behind a certain amount of residual melt, and the impurities concentrated in this melt lower ingot quality. These impurities accumulate as the second, third, and subsequent ingots are grown in the same batch. To solve this problem, an experiment to remove the residual melt at high temperature inside the hot zone was performed successfully. The theoretical average metal concentration of a second ingot grown by the new method was calculated and compared with that of the conventional method.

Keywords: single crystal, solar cell, metal impurity, ingot

Procedia PDF Downloads 360
653 A Passive Reaction Force Compensation for a Linear Motor Motion Stage Using Pre-Compressed Springs

Authors: Kim Duc Hoang, Hyeong Joon Ahn

Abstract:

Residual vibration of the system base caused by high-acceleration motion of a stage may reduce the life and productivity of a manufacturing device. Although a passive reaction force compensation (RFC) mechanism can reduce vibration of the system base, the spring or dummy mass must normally be replaced to tune the RFC's performance. In this paper, we develop a novel passive RFC mechanism for a linear motor motion stage using pre-compressed springs, whose dynamic characteristics can be adjusted through the pre-compression of the springs without exchanging the spring or dummy mass. First, we build a linear motor motion stage with pre-compressed springs. Then, the effect of the pre-compressed springs on the passive RFC is investigated by varying both the pre-compression and the stiffness of the springs. Finally, the effectiveness of the passive RFC using pre-compressed springs is verified with both simulations and experiments.

Keywords: linear motor motion stage, residual vibration, passive RFC, pre-compressed spring

Procedia PDF Downloads 316
652 Fatigue Life Estimation of Spiral Welded Waterworks Pipelines

Authors: Suk Woo Hong, Chang Sung Seok, Jae Mean Koo

Abstract:

Welding is widely used in modern industry for joining structures. Buried waterworks pipes, however, are exposed to fatigue loads from traffic, earthquakes, and other sources. Moreover, residual stress exists in the weld zone as a result of the welding process, and it is well known that residual stress degrades the fatigue life of welded structures. For these reasons, cracks can occur in the weld zone of a pipeline; if soil and sand are then washed away by fluid leaking from the crack, ground subsidence or sinkholes can follow, leading to property damage and endangering lives. The fatigue characteristics of the waterworks pipeline weld zone therefore need to be estimated. In this study, ASTM standard specimens and curved plate specimens were taken from a spiral welded waterworks pipe and fatigue tests were performed. The S-N curves of each specimen type were estimated, and the fatigue life of the weldment curved plate specimen was predicted by theoretical and analytical methods. Weldment curved plate specimens were then taken from the pipe and verification fatigue tests were performed. Finally, the predicted S-N curve of the weldment curved plate specimen was shown to be in good agreement with the fatigue test data.

Keywords: spiral welded pipe, fatigue life prediction, endurance limit modifying factors, residual stress

Procedia PDF Downloads 268
651 In situ Immobilization of Mercury in a Contaminated Calcareous Soil Using Water Treatment Residual Nanoparticles

Authors: Elsayed A. Elkhatib, Ahmed M. Mahdy, Mohamed L. Moharem, Mohamed O. Mesalem

Abstract:

Mercury (Hg) is one of the most toxic and bio-accumulative heavy metals in the environment, yet cheap and effective in situ remediation technologies are lacking. In this study, the effects of water treatment residual nanoparticles (nWTR) on the mobility, fractionation, and speciation of mercury in an arid-zone soil from Egypt were evaluated. nWTR with a high surface area (129 m² g⁻¹) were prepared using a Fritsch planetary mono mill. Scanning and transmission electron microscopy revealed that the nWTR are spherical in shape, with single-particle sizes in the range of 45 to 96 nm. X-ray diffraction (XRD) results confirmed that amorphous iron and aluminum (hydr)oxides and silicon oxide dominate the nWTR, with no apparent crystalline Fe-Al (hydr)oxides. Addition of nWTR greatly increased the Hg sorption capacity of the studied soils and greatly reduced the cumulative Hg released from them: application at rates of 0.10 and 0.30% reduced the Hg released from the soil by 50 and 85%, respectively. The power function and first-order kinetic models described the desorption process from the soils and nWTR-amended soils well, as evidenced by high coefficients of determination (R²) and low SE values. Application of nWTR at a rate of 0.3% greatly increased the association of Hg with the residual fraction (>93%) and significantly increased the most stable Hg species (amorphous Hg(OH)₂), which in turn enhanced Hg immobilization in the studied soils. Fourier transform infrared spectroscopy analysis indicated the involvement of OH groups of nWTR in the retention of Hg(II), suggesting inner-sphere adsorption of Hg ions to surface functional groups on the nWTR. These results demonstrate the feasibility of using low-cost nWTR as a best management practice to immobilize excess Hg in contaminated soils.
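The first-order kinetic model invoked above is commonly written q_t = q_e(1 - e^(-kt)), where q_t is the amount released by time t, q_e the equilibrium amount, and k the rate constant. The sketch below recovers k from synthetic release data by a log-linear least-squares fit through the origin; all values are illustrative, not the study's measurements, and the paper's power-function model would be fitted analogously on log-log axes.

```python
# Hedged sketch: fitting the first-order release model q_t = q_e(1 - e^{-kt}).
# Linearizing gives ln(1 - q_t/q_e) = -k t, a line through the origin.
# Data below are synthetic, generated with a known k for illustration.
import math

def fit_first_order_k(times, q, q_e):
    ys = [math.log(1.0 - qt / q_e) for qt in q]
    # zero-intercept least squares: k = -sum(t*y) / sum(t*t)
    return -sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

q_e, k_true = 10.0, 0.3                      # illustrative parameters
times = [1.0, 2.0, 4.0, 8.0]                 # sampling times
q = [q_e * (1.0 - math.exp(-k_true * t)) for t in times]
k_fit = fit_first_order_k(times, q, q_e)
```

Comparing fitted k values (and the R² of the linearized fit) between amended and unamended soils is how a treatment's effect on desorption kinetics is quantified.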

Keywords: release kinetics, Fourier transform infrared spectroscopy, Hg fractionation, Hg species

Procedia PDF Downloads 200
650 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape were observed for the acetic acid peak, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS use a derivatization technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process needed to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using an Agilent DB-WAXetr column (internal diameter 530 µm, film thickness 2.0 µm, length 30 m). Helium was used as the carrier gas at a constant flow of 6.0 mL/min in constant makeup mode.
The present method is simple, rapid, and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50-200 ppm for isopropyl alcohol, 50-3000 ppm for ethanol, 50-400 ppm for methanol and 100-400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
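The total-error validation described above judges each concentration level by whether an interval around the mean recovery stays inside the acceptance limits (±30%, or ±40% at the quantitation limit). A minimal sketch of that decision with hypothetical recovery data and a simplified mean ± t·s interval (illustrative only; a formal β-expectation tolerance interval, as used in accuracy-profile validation, involves the non-central t-distribution):

```python
import statistics

# Hypothetical % recoveries of one solvent at one spiked level (illustrative)
recoveries = [98.2, 101.5, 99.7, 102.3, 97.8, 100.9]
limits = (70.0, 130.0)   # +/-30% acceptance limits around 100% recovery

mean = statistics.mean(recoveries)
sd = statistics.stdev(recoveries)
# Simplified 95% interval: mean +/- t * sd, with t(0.975, 5 dof) ~= 2.571
t95 = 2.571
low, high = mean - t95 * sd, mean + t95 * sd
accepted = limits[0] <= low and high <= limits[1]
print(f"interval = [{low:.1f}, {high:.1f}] %, accepted = {accepted}")
```

The level passes validation when the whole interval lies within the acceptance limits; repeating this per level traces out the accuracy profile.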

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 117
649 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that exist in a rigid body and act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure of mechanical components. The search for measurements with good reliability has been of great importance to the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the response of the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar distance. Being very sensitive, the technique is also susceptible to variations in measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples and the geometry of analysis are some variables that need to be considered and analyzed for a true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finish: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were kept in position and, under the same analysis conditions, seven measurements were carried out at 11 Ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample before each measurement.
To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
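The Ψ-tilt measurements described above are typically reduced with the classical sin²ψ method, in which the measured lattice spacing d varies linearly with sin²ψ and the slope of that line yields the stress. A minimal sketch of the reduction step, with hypothetical d-spacings and typical elastic constants for steel (all numbers illustrative, not the paper's data):

```python
import math

# Hypothetical d-spacings measured at several psi tilts (angstrom, illustrative)
psi_deg = [0, 15, 25, 35, 45]
d_meas = [1.17020, 1.17035, 1.17055, 1.17080, 1.17105]

E, nu = 210e3, 0.29   # MPa and Poisson ratio, typical values for steel
x = [math.sin(math.radians(p)) ** 2 for p in psi_deg]

# Ordinary least-squares slope and intercept of d versus sin^2(psi)
n = len(x)
mx, my = sum(x) / n, sum(d_meas) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, d_meas)) / \
        sum((xi - mx) ** 2 for xi in x)
d0 = my - slope * mx   # intercept approximates the unstressed spacing

# sin^2(psi) relation: sigma = (E / (1 + nu)) * slope / d0
sigma = (E / (1.0 + nu)) * slope / d0
print(f"slope = {slope:.3e} A per unit sin^2(psi), stress ~ {sigma:.0f} MPa")
```

A positive slope (d increasing with tilt) indicates tensile stress; the error sources studied in the paper (positioning, geometry, equipment) all enter through scatter in the fitted slope.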

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 139
648 Study of Biomechanical Model for Smart Sensor Based Prosthetic Socket Design System

Authors: Wei Xu, Abdo S. Haidar, Jianxin Gao

Abstract:

Prosthetic socket is a component that connects the residual limb of an amputee with an artificial prosthesis. It is widely recognized as the most critical component that determines the comfort of a patient when wearing the prosthesis in his/her daily activities. Through the socket, the body weight and its associated dynamic load are distributed and transmitted to the prosthesis during walking, running or climbing. In order to achieve a good-fit socket for an individual amputee, it is essential to obtain the biomechanical properties of the residual limb. In current clinical practices, this is achieved by a touch-and-feel approach which is highly subjective. Although there have been significant advancements in prosthetic technologies such as microprocessor controlled knee and ankle joints in the last decade, the progress in designing a comfortable socket has been rather limited. This means that the current process of socket design is still very time-consuming, and highly dependent on the expertise of the prosthetist. Supported by the state-of-the-art sensor technologies and numerical simulations, a new socket design system is being developed to help prosthetists achieve rapid design of comfortable sockets for above knee amputees. This paper reports the research work related to establishing biomechanical models for socket design. Through numerical simulation using finite element method, comprehensive relationships between pressure on residual limb and socket geometry were established. This allowed local topological adjustment for the socket so as to optimize the pressure distributions across the residual limb. When the full body weight of a patient is exerted on the residual limb, high pressures and shear forces between the residual limb and the socket occur. 
During the numerical simulations, various hyperelastic models, namely Ogden, Yeoh and Mooney-Rivlin, were used, and their effectiveness in representing the biomechanical properties of the soft tissues of the residual limb was evaluated. This also involved reverse engineering, which resulted in an optimal representative model under compression testing. To validate the simulation results, a range of silicone models was fabricated. They were tested by an indentation device which yielded the force-displacement relationships. Comparisons of results obtained from FEA simulations and experimental tests showed that the Ogden model did not fit the soft tissue indentation data well, while the Yeoh model gave the best representation of the soft tissue mechanical behavior under indentation. Compared with the hyperelastic models, a linear elastic model also showed significant errors. In addition, normal and shear stress distributions on the surface of the soft tissue model were obtained. The effect of friction in compression testing and the influence of soft tissue stiffness and testing boundary conditions were also analyzed. All these have contributed to the overall goal of designing a good-fit socket for individual above-knee amputees.
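Of the hyperelastic models compared above, the Yeoh model writes the strain energy as W = Σᵢ Cᵢ₀(I₁ − 3)ⁱ; for incompressible uniaxial stretch λ, the first invariant is I₁ = λ² + 2/λ and the Cauchy stress follows by differentiation. A minimal sketch of that evaluation; the coefficients below are hypothetical soft-tissue-scale values, since the paper's fitted constants are not given in the abstract:

```python
# Yeoh model, incompressible uniaxial loading:
#   I1 = lam^2 + 2/lam
#   sigma = 2 * (lam^2 - 1/lam) * (C10 + 2*C20*(I1-3) + 3*C30*(I1-3)^2)
C10, C20, C30 = 0.012, 0.002, 0.0005   # MPa, hypothetical coefficients

def yeoh_cauchy_stress(lam):
    i1 = lam ** 2 + 2.0 / lam
    dW_dI1 = C10 + 2 * C20 * (i1 - 3) + 3 * C30 * (i1 - 3) ** 2
    return 2.0 * (lam ** 2 - 1.0 / lam) * dW_dI1

for lam in (0.8, 1.0, 1.2):   # compression, undeformed, tension
    print(f"lambda = {lam:.1f}: sigma = {yeoh_cauchy_stress(lam):.4f} MPa")
```

Fitting C10, C20, C30 to indentation force-displacement data (as done in the paper via reverse engineering) is what selects the representative model.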

Keywords: above knee amputee, finite element simulation, hyperelastic model, prosthetic socket

Procedia PDF Downloads 174
647 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the key biomarkers: aortic blood pressure, wall shear stress (WSS) and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
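The deep residual networks referred to above are built from blocks that add the block's input back to its transformed output, y = x + F(x), so that deep stacks can be trained without vanishing gradients. A minimal pure-Python sketch of one such residual layer (weights, sizes, and initialization are illustrative, not the paper's architecture; a real model would use a deep-learning framework):

```python
import random

random.seed(0)

def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, W, b):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def residual_block(x, W1, b1, W2, b2):
    # F(x) = W2 . relu(W1 . x + b1) + b2 ; the block outputs x + F(x)
    h = relu(linear(x, W1, b1))
    f = linear(h, W2, b2)
    return [xi + fi for xi, fi in zip(x, f)]

dim = 4
rand_mat = lambda: [[random.uniform(-0.1, 0.1) for _ in range(dim)]
                    for _ in range(dim)]
W1, W2 = rand_mat(), rand_mat()
b1 = b2 = [0.0] * dim

x = [1.0, -2.0, 0.5, 3.0]
y = residual_block(x, W1, b1, W2, b2)
print(y)  # stays close to x because F(x) is small at initialization
```

The skip connection is why the output stays near the input at small weights: the block only has to learn the residual correction F(x), which is what makes very deep physics-informed networks trainable.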

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 56
646 A Construct to Perform in Situ Deformation Measurement of Material Extrusion-Fabricated Structures

Authors: Daniel Nelson, Valeria La Saponara

Abstract:

Material extrusion is an additive manufacturing modality that continues to show great promise in the ability to create low-cost, highly intricate, and exceedingly useful structural elements. As more capable and versatile filament materials are devised, and the resolution of manufacturing systems continues to increase, the need to understand and predict manufacturing-induced warping will gain ever greater importance. The following study presents an in situ remote sensing and data analysis construct that allows for the in situ mapping and quantification of surface displacements induced by residual stresses on a specified test structure. This proof-of-concept experimental process shows that it is possible to provide designers and manufacturers with insight into the manufacturing parameters that lead to the manifestation of these deformations and a greater understanding of the behavior of these warping events over the course of the manufacturing process.

Keywords: additive manufacturing, deformation, digital image correlation, fused filament fabrication, residual stress, warping

Procedia PDF Downloads 47
645 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals

Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya

Abstract:

A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of the residual stresses. The laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on the laser thermo-optical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the laser-ultrasonic transducer-object interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector, as well as the travel times of the acoustic waves in the acoustic ducts, are characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependences of the velocity of longitudinal ultrasonic waves in the samples on the values of the applied compression stress in the range of 20-100 MPa.
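The stress sensitivity behind these numbers is the acoustoelastic effect: to first order, the longitudinal velocity shifts linearly with stress, v(σ) = v₀(1 + K·σ), so the 0.05% velocity precision fixes the smallest resolvable stress. A minimal sketch with a typical unstressed velocity for mild steel and an illustrative acoustoelastic coefficient chosen to be consistent with the abstract's reported figures:

```python
# Acoustoelastic relation (first order): v = v0 * (1 + K * sigma)
v0 = 5900.0   # m/s, longitudinal velocity in unstressed mild steel (typical)
K = 2.2e-5    # 1/MPa, illustrative acoustoelastic coefficient

def stress_from_velocity(v):
    # Invert the linear relation to recover stress in MPa
    return (v / v0 - 1.0) / K

# A relative precision of 0.05% bounds the minimum detectable stress
rel_precision = 0.0005
dv = rel_precision * v0          # ~ +/-3 m/s, as reported in the abstract
sigma_min = rel_precision / K    # ~ 22.7 MPa detection threshold
print(f"dv = {dv:.2f} m/s, minimal detectable stress ~ {sigma_min:.1f} MPa")
print(f"stress at v = 5904 m/s: {stress_from_velocity(5904.0):.1f} MPa")
```

With these assumed values the sketch reproduces both reported figures (±3 m/s and ≈22.7 MPa); in practice K is calibrated per material from the measured velocity-stress dependence.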

Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses

Procedia PDF Downloads 293
644 FESA: Fuzzy-Controlled Energy-Efficient Selective Allocation and Reallocation of Tasks Among Mobile Robots

Authors: Anuradha Banerjee

Abstract:

Energy-aware operation is one of the visionary goals in the area of robotics because the operability of robots is greatly dependent upon their residual energy. In practice, the tasks allocated to robots carry different priorities, and often an upper time limit is imposed within which a task needs to be completed. If a robot is unable to complete a particular task given to it, the task is reallocated to some other robot. The collection of robots is controlled by a Central Monitoring Unit (CMU). Selection of the new robot is performed by a fuzzy controller called the Task Reallocator (TRAC). It accepts parameters such as the residual energy of the robots, the possibility that the task will be successfully completed by the new robot within the stipulated time, and the distance of the new robot (where the task is reallocated) from the old one (where the task was running). The proposed methodology increases the probability of completing globally assigned tasks and saves a huge amount of energy across the collection of robots.
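A fuzzy controller like TRAC can be sketched as membership functions over its inputs (residual energy, completion likelihood, relocation distance) combined with a min t-norm; the candidate robot with the highest aggregate suitability receives the reallocated task. The membership shapes and candidate data below are hypothetical, not the paper's rule base:

```python
def ramp_up(x, lo, hi):
    # Membership rising linearly from 0 at lo to 1 at hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def ramp_down(x, lo, hi):
    # Membership falling linearly from 1 at lo to 0 at hi
    return 1.0 - ramp_up(x, lo, hi)

def suitability(robot):
    energy_ok = ramp_up(robot["energy"], 20.0, 80.0)   # % residual energy
    likely = robot["p_success"]                        # already in [0, 1]
    near = ramp_down(robot["distance"], 5.0, 50.0)     # metres to task site
    return min(energy_ok, likely, near)                # min t-norm (fuzzy AND)

candidates = [
    {"id": "R1", "energy": 75, "p_success": 0.9, "distance": 40},
    {"id": "R2", "energy": 50, "p_success": 0.8, "distance": 10},
    {"id": "R3", "energy": 90, "p_success": 0.6, "distance": 8},
]
best = max(candidates, key=suitability)
print(best["id"], round(suitability(best), 3))
```

The min t-norm means a robot is only as suitable as its weakest attribute, which is why R1's large distance disqualifies it despite its high energy and success likelihood.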

Keywords: energy-efficiency, fuzzy-controller, priority, reallocation, task

Procedia PDF Downloads 286
643 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction

Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage

Abstract:

Vehicular traffic events have overly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these spatial and temporal interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized due to their capability of capturing non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have limitations in learning complex and dynamic spatial and temporal patterns due to the following missing factors. First, most GNN-based traffic prediction models have used static distance or sometimes haversine distance between spatially separated traffic observations to estimate spatial correlation. Second, most GNN-based traffic prediction models have not incorporated environmental events that have a major impact on normal traffic states. Finally, most GNN-based models do not use an attention mechanism to focus on only the important traffic observations. The objective of this paper is to study and make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the previously mentioned gaps, our prediction model uses the real-time driving distance between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. Our prediction model efficiently captures the spatial and temporal correlation between traffic events; it relies on graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers and is called GAT-BILSTMA.
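The temporal attention layer in such a GAT + Bi-LSTM architecture typically scores each time step's hidden state, softmax-normalizes the scores, and returns the weighted sum as a context vector for prediction. A minimal pure-Python sketch of that aggregation step (the hidden states and scoring weights are illustrative; a real model would learn them inside a deep-learning framework):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(hidden_states, w):
    # score_t = w . h_t ; alpha = softmax(scores) ; context = sum alpha_t * h_t
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in hidden_states]
    alpha = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(a * h[i] for a, h in zip(alpha, hidden_states))
               for i in range(dim)]
    return context, alpha

# Hypothetical Bi-LSTM hidden states for 4 time steps (dimension 3)
H = [[0.1, 0.3, -0.2],
     [0.5, 0.1, 0.4],
     [0.9, -0.2, 0.7],   # the most informative step gets the largest score
     [0.2, 0.0, 0.1]]
w = [1.0, 0.5, 1.0]
context, alpha = temporal_attention(H, w)
print([round(a, 3) for a in alpha])
```

The attention weights sum to one and concentrate on the highest-scoring observation, which is the "focus on only important traffic observations" behavior the abstract argues is missing from earlier GNN models.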

Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention

Procedia PDF Downloads 45
642 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review

Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha

Abstract:

Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is well established, yet proper classification of this textual information in a given context has remained very difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and the AI-based techniques that have been used, in order to gain a better understanding of how to design and develop a robust and more accurate sentiment classifier that can correctly classify social media text of a given context as hate speech or inverted compliments with a high level of accuracy, by assessing different artificial intelligence techniques. We evaluated over 250 articles from digital sources such as ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number down to 31 studies. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources such as Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI-based library functionality. Based on some of the important findings from this study, we make recommendations for future research.

Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text

Procedia PDF Downloads 91
641 Analysis on Prediction Models of TBM Performance and Selection of Optimal Input Parameters

Authors: Hang Lo Lee, Ki Il Song, Hee Hwan Ryu

Abstract:

An accurate prediction of TBM (Tunnel Boring Machine) performance is very difficult but essential for a reliable estimation of the construction period and cost in the preconstruction stage. For this purpose, the aim of this study is to analyze the evaluation process of the various prediction models published since 2000 for TBM performance, and to select the optimal input parameters for a prediction model. A classification system of TBM performance prediction models and their applied methodologies is proposed in this research. The input and output parameters applied in the prediction models are also presented. Based on these results, a statistical analysis is performed using data collected from a shield TBM tunnel in South Korea. By performing a simple regression and residual analysis using the statistical program R, the optimal input parameters are selected. These results are expected to be used for the development of a prediction model of TBM performance.

Keywords: TBM performance prediction model, classification system, simple regression analysis, residual analysis, optimal input parameters

Procedia PDF Downloads 278
640 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers

Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist

Abstract:

Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the 2nd leading cause of lung cancer. While 21st century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the 2nd most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. The situation is also deteriorating over time, with newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels over the known past (1990-2020) and the projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels would continue to climb in Canada over time. The contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to window-opening behaviour. Building codes must consider these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.

Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden

Procedia PDF Downloads 86
639 Optimal Analysis of Structures by Large Wing Panel Using FEM

Authors: Byeong-Sam Kim, Kyeongwoo Park

Abstract:

In this study, induced structural optimization is performed to compare the trade-off between wing weight and induced drag for wing panel extensions, wing panel construction and winglets. The aerostructural optimization problem consists of parameters with a strength condition and two maneuver conditions, taking into account the residual stresses from panel production. The results of the kinematic motion analysis present a homogenization-based theory for the 3D beams and 3D shells of the wing panel. This theory uses a kinematic description of the beam based on normalized displacement moments. The displacement of the wing is a significant design consideration, as large deflections lead to large stresses, and the increased fatigue of components causes residual stresses. The stresses in the wing panel are small compared to the yield stress of the aluminum alloy. This study describes the implementation of a large wing panel and an aerostructural analysis and structural parameter optimization framework that couples a three-dimensional panel method.

Keywords: wing panel, aerostructural optimization, FEM, structural analysis

Procedia PDF Downloads 556
638 New Methodology for Monitoring Alcoholic Fermentation Processes Using Refractometry

Authors: Boukhiar Aissa, Iguergaziz Nadia, Halladj Fatima, Lamrani Yasmina, Benamara Salem

Abstract:

Determining the alcohol content in an alcoholic fermentation bioprocess is of great importance; in fact, it is a key indicator for monitoring the fermentation. Several methodologies (chemical, spectrophotometric, chromatographic, etc.) are used for the determination of this parameter. However, these techniques are lengthy and require rigorous preparations, sometimes dangerous chemical reagents, and/or expensive equipment. In the present study, date juice is used as the substrate of alcoholic fermentation. The extracted juice undergoes an alcoholic fermentation by Saccharomyces cerevisiae. The study of the possible use of refractometry as a sole means for the in situ control of this process revealed a good correlation (R² = 0.98) between initial and final °Brix: °Brix_f = 0.377 × °Brix_i. In addition, we verified the relationship between the difference of final and initial °Brix (Δ°Brix) and the alcohol content produced (A_exp): Δ°Brix / A_exp = 1.1. This allows the tracing of iso-response abacuses that permit determination of the alcohol and residual sugar contents with a mean relative error (MRE) of 5.35%.
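The two reported correlations can be turned directly into a monitoring calculation: predict the expected final °Brix from the initial reading, and estimate the alcohol produced from the observed Brix drop. A sketch using the abstract's own coefficients (0.377 and 1.1); the function names and the starting value are mine, for illustration only:

```python
# Correlations reported in the abstract:
#   final Brix = 0.377 * initial Brix        (R^2 = 0.98)
#   delta Brix / alcohol_produced = 1.1
def expected_final_brix(initial_brix):
    return 0.377 * initial_brix

def alcohol_produced(initial_brix, current_brix):
    # Estimate alcohol content from the drop in refractometer reading
    return (initial_brix - current_brix) / 1.1

initial = 24.0   # hypothetical date-juice must, degrees Brix
final = expected_final_brix(initial)
alcohol = alcohol_produced(initial, final)
print(f"expected final Brix: {final:.2f}")
print(f"estimated alcohol produced at completion: {alcohol:.2f}")
```

Evaluating these two relations over a grid of initial Brix values is what traces the iso-response abacuses mentioned above, subject to the reported ~5.35% mean relative error.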

Keywords: refractometry, alcohol, residual sugar, fermentation, brix, date, juice

Procedia PDF Downloads 443
637 Measurements of Recovery Stress and Recovery Strain of Ni-Based Shape Memory Alloys

Authors: W. J. Kim

Abstract:

The behaviors of the recovery stress and recovery strain of an ultrafine-grained Ni-50.2 at.% Ti alloy prepared by high-ratio differential speed rolling (HRDSR) were examined with a specially designed tensile-testing setup, and the factors that influence the recovery stress and strain were studied. After HRDSR, both the recovery stress and the recovery strain were enhanced compared to the initial condition. A constitutive equation showing that the maximum recovery stress is a sole function of the recovery strain was developed based on the experimental data. The recovery strain increased as the yield stress increased, and the maximum recovery stress increased with an increase in yield stress. The residual recovery stress was affected by the yield stress as well as by the austenite-to-martensite transformation temperature: as the yield stress increased and the martensitic transformation temperature decreased, the residual recovery stress increased.

Keywords: high-ratio differential speed rolling, tensile testing, severe plastic deformation, shape memory alloys

Procedia PDF Downloads 335