Search results for: EB thermal processing
4732 Efficiency Validation of Hybrid Cooling Application in Hot and Humid Climate Houses of KSA
Authors: Jamil Hijazi, Stirling Howieson
Abstract:
Reducing energy consumption and CO2 emissions is probably the greatest challenge now facing mankind. Beyond the considerations surrounding global warming and CO2 production, it has to be recognized that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start considering a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground-cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing cooling load and carbon emissions while providing all-year-round thermal comfort in a typical Saudi Arabian urban housing block. Soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (DesignBuilder) that utilized the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (a typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful 'ecological design' combined with hybrid radiant and ground-pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable deprivation of amenity.
Keywords: cooling load, energy efficiency, ground pipe cooling, hybrid cooling strategy, hydronic radiant systems, low carbon emission, passive designs, thermal comfort
Procedia PDF Downloads 235
4731 Traditional Ceramics Value in the Middle East
Authors: Abdelmessih Malak Sadek Labib
Abstract:
Thanks to their excellent electrical, mechanical, and thermal properties, ceramics remain stable in harsh environments and are selected materials for many applications, despite the advent of new materials such as plastics and composites. However, ceramic materials have disadvantages, including brittleness. This fragility is often attributed to the strong covalent and ionic bonds in the ceramic body, and there is still much to learn about brittle cracking; the fragility of ceramics and their catastrophic failure therefore remain frequently studied topics, particularly in load-bearing applications. One of the most commonly used ceramics for load-bearing applications such as veneers is porcelain, a type of traditional ceramic. Traditional porcelain consists mainly of three basic ingredients: clay, which gives plasticity; silica, which maintains the shape and stability of the ceramic body at high temperature; and feldspar, which affects glazing. In traditional porcelain, the inversion of quartz during the cooling process can create microcracks that act as stress concentration centers. Consequently, subcritical crack growth originating from quartz inversion causes unpredictable catastrophic failure of ceramic bodies under load. In the case of porcelain, however, the mullite hypothesis holds that the strength of porcelain can be significantly increased by the interlocking of mullite needles in the ceramic body. In this light, a realistic assessment of the role of quartz and mullite is needed to produce stronger and less fragile porcelain. Currently, the lack of reports on Young's moduli in the literature leads to erroneous conclusions about the mechanical behavior of porcelain.
Therefore, the current project uses a Young's modulus approach to investigate the role of quartz and mullite in the mechanical strength of various porcelains, in combination with particle size reduction, flexural strength measurements, and fractographic techniques.
Keywords: materials, technical, ceramics, properties, thermal, stability, advantages
Procedia PDF Downloads 90
4730 Effect of Air Temperatures (°C) and Slice Thickness (mm) on Drying Characteristics and Some Quality Properties of Omani Banana
Authors: Atheer Al-Maqbali, Mohammed Al-Rizeiqi, Pankaj Pathare
Abstract:
There is an ever-increasing demand for the consumption of banana products in Oman and elsewhere in the region due to the nutritional value and the decent taste of the product. There are approximately 3,751 acres of land designated for banana cultivation in the Sultanate of Oman, which produce approximately 18,447 tons of banana product. The fresh banana product is extremely perishable, resulting in significant post-harvest economic loss. Since the product has high sensory acceptability, drying is a common method for processing fresh banana products. This study aims to use drying technology in the production of dried bananas to preserve the greatest amount of natural color and delicious taste for the consumer. The study also aimed to assess the shelf stability of both water activity (aw) and color (L*, a*, b*) for fresh and finished dried bananas using a conventional air drying system. Water activity (aw), color characteristics (L*, a*, b*), and product hardness were analyzed for 3 mm, 5 mm, and 7 mm slice thicknesses at different air temperatures (°C). All data were analyzed statistically using STATA 13.0, and α ≤ 0.05 was considered the significance level. The study is useful to banana farmers for improving cultivation, to food processors for optimizing output, and to policy makers for optimizing banana processing and post-harvest management of the products.
Keywords: banana, drying, Oman, quality, thickness, hardness, color
Procedia PDF Downloads 96
4729 Segmentation of the Liver and Spleen From Abdominal CT Images Using Watershed Approach
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
The segmentation phase is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the liver and spleen from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting regions of interest (ROIs) from CT images is usually a difficult task: the gray levels of these organs are similar to those of neighboring tissues, and the ROIs are connected to the ribs, heart, kidneys, etc. Our proposed method is based on anatomical information and the mathematical morphology tools used in the image processing field. First, we remove the surrounding and connected organs and tissues by applying morphological filters; this first step makes the extraction of the regions of interest easier. The second step consists of improving the quality of the image gradient: here we propose a method that reduces gradient deficiencies by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. To validate the proposed segmentation technique, we tested it on several images, evaluating our approach by comparing our results with manual segmentations performed by an expert. The experimental results are described in the last part of this work. The system was evaluated by computing the sensitivity and specificity between the semi-automatically segmented liver and spleen contours and the contours traced manually by radiological experts.
Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm
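The watershed step at the heart of this approach floods the gradient image outward from marker seeds, lowest gradient values first, until the basins meet. As a minimal illustration of the idea (not the authors' implementation, which runs on full CT slices after morphological filtering), a marker-based flooding watershed can be sketched in pure Python:

```python
import heapq

def watershed(image, markers):
    """Marker-based watershed (Meyer's flooding) on a 2-D grid.

    image   : list of lists of gray levels (the "gradient" image)
    markers : list of lists, 0 = unlabeled, >0 = seed label
    Returns a label grid; pixels are flooded from the lowest gradient up.
    """
    rows, cols = len(image), len(image[0])
    labels = [row[:] for row in markers]
    heap = []
    # Seed the priority queue with every marker pixel.
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] > 0:
                heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]   # inherit the basin label
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels
```

On a 1-D toy gradient such as `[0, 1, 5, 1, 0]` with seeds at both ends, the two basins flood toward each other and meet at the ridge (value 5), splitting the row into two labeled regions.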
Procedia PDF Downloads 497
4728 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction
Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili
Abstract:
Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, extracting structured fields such as sender name, amount, and VAT rate from them automatically is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any other previously reported method. Approaches are described for 1) detecting, through visual features, which template a given document belongs to, 2) automatically generating extraction rules for a given new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as little as one training sample and only requires the ground-truth field values instead of detailed annotations such as bounding boxes that are hard to obtain. The system is deployed and used inside commercial accounting software.
Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software
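The compositional rule generation of step 2) can be illustrated with a toy sketch in which field patterns are assembled from a small library of reusable regex components. The component names and patterns below are illustrative assumptions, not the system's actual library:

```python
import re

# Hypothetical component library: small regex fragments that can be
# composed into a field-extraction rule for a given invoice template.
COMPONENTS = {
    "currency": r"(?:EUR|USD|\$)",
    "number":   r"\d{1,3}(?:[.,]\d{3})*(?:[.,]\d{2})?",
    "label":    r"(?:Total|Amount due)",
}

def build_rule(label_key, value_keys):
    """Compose an extraction regex from named components: a label,
    optional separators, and a captured sequence of value parts."""
    value = r"\s*".join(COMPONENTS[k] for k in value_keys)
    return re.compile(COMPONENTS[label_key] + r"\s*:?\s*(" + value + ")")

rule = build_rule("label", ["currency", "number"])
match = rule.search("Invoice 2021-0042 ... Total: EUR 1.234,56")
```

Because each component is a self-contained fragment, a new template only needs a new combination of existing parts rather than a hand-written pattern, which is what makes one-sample template learning feasible.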
Procedia PDF Downloads 134
4727 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
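The fixed-length look-back window setup described above can be sketched as follows: a raw series is turned into supervised (window, target) pairs for any look-back size and forecast horizon, which is the input format all four model families share.

```python
def make_windows(series, look_back, horizon):
    """Slice a univariate series into (input window, target) pairs.

    Each sample uses `look_back` past points as input and predicts the
    value `horizon` steps after the window ends, mirroring the paper's
    fixed-length-window setup for the RNN/LSTM/GRU/Transformer models.
    """
    samples = []
    for start in range(len(series) - look_back - horizon + 1):
        window = series[start:start + look_back]
        target = series[start + look_back + horizon - 1]
        samples.append((window, target))
    return samples
```

For hourly data, a one-day look-back corresponds to `look_back=24`, and predicting 3 hours ahead corresponds to `horizon=3`; sweeping these two parameters reproduces the kind of grid the paper evaluates.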
Procedia PDF Downloads 159
4726 Preparation and Characterization of Poly(L-Lactic Acid)/Oligo(D-Lactic Acid) Grafted Cellulose Composites
Authors: Md. Hafezur Rahaman, Mohd. Maniruzzaman, Md. Shadiqul Islam, Md. Masud Rana
Abstract:
With the growth of environmental awareness, numerous research efforts are underway to develop next-generation materials based on sustainability, eco-competence, and green chemistry to preserve and protect the environment. Due to its biodegradability and biocompatibility, poly(L-lactic acid) (PLLA) is of great interest for ecological and medical applications. Cellulose is likewise one of the most abundant biodegradable, renewable polymers found in nature, with several advantages such as low cost, high mechanical strength, and biodegradability. Recently, a great deal of attention has been paid to the scientific and technological development of α-cellulose-based composite materials. PLLA could be used for grafting onto cellulose to improve compatibility prior to composite preparation; however, it is quite difficult to form a bond between less hydrophilic molecules like PLLA and α-cellulose. Dimers and oligomers, by contrast, can easily be grafted onto the surface of cellulose by ring-opening or polycondensation methods due to their low molecular weight. In this research, α-cellulose extracted from jute fiber is grafted with oligo(D-lactic acid) (ODLA) via a graft polycondensation reaction in the presence of para-toluene sulphonic acid and potassium persulphate in toluene at 130°C for 9 hours under 380 mmHg. The ODLA is synthesized by ring-opening polymerization of D-lactides in the presence of stannous octoate (0.03 wt% of lactide) and D-lactic acids at 140°C for 10 hours. Composites of PLLA with ODLA-grafted α-cellulose are prepared by a solution mixing and film casting method. Confirmation of grafting was carried out through FTIR spectroscopy and SEM analysis: a strong carbonyl peak at 1728 cm⁻¹ in the FTIR spectrum of ODLA-grafted α-cellulose, absent in α-cellulose itself, confirms the grafting of ODLA onto α-cellulose.
It is also observed from the SEM photographs that there are some white areas (spots) on ODLA-grafted α-cellulose, as compared to α-cellulose, which may indicate the grafting of ODLA and is consistent with the FTIR results. The composites were analyzed by FTIR, SEM, WAXD, and thermal gravimetric analysis. Most of the characteristic FTIR absorption peaks of the composites shifted to higher wavenumbers with increasing peak area, which may confirm that PLLA and the grafted cellulose have better compatibility in the composites via intermolecular hydrogen bonding; this supports previously published results. The grafted α-cellulose is distributed uniformly in the composites, as observed by SEM analysis. The WAXD study shows that only homo-crystalline structures of PLLA are present in the composites. The thermal stability of the composites is enhanced with increasing percentages of ODLA-grafted α-cellulose; as a consequence, the resultant composites have a resistance toward thermal degradation. The effects of the grafted chain length and the biodegradability of the composites will be studied in further research.
Keywords: α-cellulose, composite, graft polycondensation, oligo(D-lactic acid), poly(L-lactic acid)
Procedia PDF Downloads 124
4725 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and the design results are affected positively or negatively without any warning of it. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and the occupants therefore cannot interact with it.
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request. Then the different consumption entries are analyzed, and for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are run for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of modelling assumptions on the evaluation of performance indexes when describing thermal zones.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
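The kind of sensitivity the paper quantifies with full dynamic simulation can be illustrated in miniature: perturbing a deterministic occupancy schedule directly shifts the internal gains a thermal-zone model sees. The profiles and the 0.1 kW-per-person gain below are illustrative assumptions, not values from the case study:

```python
def internal_gain_kwh(hourly_occupancy, gain_per_person_kw=0.1):
    """Daily internal gain (kWh) implied by an hourly occupancy profile,
    assuming a fixed sensible gain per person (0.1 kW is an assumption)."""
    return sum(hourly_occupancy) * gain_per_person_kw

def pct_difference(reference, scenario):
    """Percentage difference of a scenario against a reference value."""
    return 100.0 * (scenario - reference) / reference

# Illustrative 24-hour occupancy profiles for one thermal zone.
base   = [0] * 8 + [20] * 10 + [0] * 6   # 20 occupants, 08:00-18:00
late   = [0] * 10 + [20] * 10 + [0] * 4  # same load, shifted two hours
sparse = [0] * 8 + [10] * 10 + [0] * 6   # half the assumed occupancy
```

Here the time-shifted schedule leaves the daily gain unchanged while the half-occupancy one cuts it by 50%: exactly the kind of spread that a silently adopted predefined profile builds into the predicted consumption.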
Procedia PDF Downloads 144
4724 Energy System Analysis Using Data-Driven Modelling and Bayesian Methods
Authors: Paul Rowley, Adam Thirkill, Nick Doylend, Philip Leicester, Becky Gough
Abstract:
The dynamic performance of all energy generation technologies is impacted to varying degrees by the stochastic properties of the wider system within which the generation technology is located. This stochasticity can include the varying nature of ambient renewable energy resources such as wind or solar radiation, or unpredicted changes in energy demand which impact upon the operational behaviour of thermal generation technologies. An understanding of these stochastic impacts is especially important in contexts such as highly distributed (or embedded) generation, where the issues affecting the individual or aggregated performance of large numbers of relatively small generators must be understood, such as in ESCO projects. Probabilistic evaluation of monitored or simulated performance data is one technique which can provide an insight into the dynamic performance characteristics of generating systems, both in a prognostic sense (such as the prediction of future performance at the project's design stage) and in a diagnostic sense (such as the real-time analysis of underperforming systems). In this work, we describe the development, application and outcomes of a new approach to the acquisition of datasets suitable for use in subsequent performance and impact analysis (including the use of Bayesian approaches) for a number of distributed generation technologies. The application of the approach is illustrated using a number of case studies involving domestic and small commercial scale photovoltaic, solar thermal and natural gas boiler installations, and the results presented show that the methodology offers significant advantages in terms of plant efficiency prediction or diagnosis, along with allied environmental and social impacts such as greenhouse gas emission reduction or fuel affordability.
Keywords: renewable energy, dynamic performance simulation, Bayesian analysis, distributed generation
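As a toy illustration of the Bayesian element of such an analysis (not the authors' model), a conjugate Beta-Binomial update can track the probability that a monitored installation meets its daily yield target, sharpening the estimate as monitoring data accumulates:

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: a Beta(alpha, beta) prior on the
    probability of 'success' plus observed counts gives the posterior."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1); 30 monitored days, 25 of which met an
# (assumed) daily yield target for the installation.
post_a, post_b = beta_update(1, 1, successes=25, failures=5)
```

The same posterior serves both roles named in the abstract: prognostically, it predicts the chance the next day meets the target; diagnostically, a posterior mean drifting well below the design expectation flags an underperforming system.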
Procedia PDF Downloads 499
4723 pH and Temperature Triggered Release of Doxorubicin from Hydrogen Bonded Multilayer Films of Polyoxazolines
Authors: Meltem Haktaniyan, Eda Cagli, Irem Erel Goktepe
Abstract:
Polymers that change their properties in response to different stimuli (e.g., light, temperature, pH, ionic strength, or magnetic field) are called 'smart' or 'stimuli-responsive' polymers. These polymers have been widely used in biomedical applications such as sensors, gene delivery, drug delivery, and tissue engineering. Temperature-responsive polymers in particular have been studied extensively for controlled drug delivery applications. Among pseudo-peptides, poly(2-alkyl-2-oxazoline)s are considered good candidates for delivery systems due to their stealth behavior and non-toxicity. For building responsive multilayer films for controlled drug release from surfaces, the layer-by-layer (LbL) technique is powerful, with the advantage of nanometer-scale control over spatial architecture and morphology. Multilayers can be constructed on surfaces through non-covalent interactions, including electrostatic interactions, hydrogen bonding, charge transfer, or hydrophobic-hydrophobic interactions. In the present study, hydrogen-bonded multilayer films of poly(2-alkyl-2-oxazoline)s with tannic acid were prepared as a platform for releasing doxorubicin (DOX) from a surface with pH and thermal triggers. For this purpose, poly(2-isopropyl-2-oxazoline) (PIPOX) and poly(2-ethyl-2-oxazoline) (PETOX) were synthesized via cationic ring-opening polymerization (CROP) with hydroxyl end groups. Two polymeric multilayer systems, (PETOX)/(DOX)-(TA) complexes and (PIPOX)/(DOX)-(TA) complexes, were designed to investigate the controlled release of DOX from the surface with pH and thermal triggers. The drug release profiles from the multilayer thin films under alterations of pH and temperature will be examined with UV-Vis spectroscopy and fluorescence spectroscopy.
Keywords: temperature responsive polymers, h-bonded multilayer films, drug release, polyoxazoline
Procedia PDF Downloads 310
4722 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, where words frequently mistranscribed during transcription were recorded and then replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview, and every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude: in the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model is too general, the user risks leaving out important data; if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data is never altered beyond grammatical and spelling checks; instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Additionally, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords, allowing generalization over the whole document. This way, no data is harmed, and the qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
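The splicing-and-filtering step described above (paragraphs separated by change in speaker, short paragraphs dropped as banter) can be sketched as follows; the "SPEAKER: text" line format is an assumption about the transcripts:

```python
def splice_transcript(lines, min_chars=300):
    """Split a transcript into per-speaker paragraphs and drop short ones.

    `lines` are "SPEAKER: text" strings; consecutive lines from the same
    speaker are merged into one paragraph, and paragraphs shorter than
    `min_chars` characters are discarded as banter or side comments.
    """
    paragraphs, current_speaker, buffer = [], None, []
    for line in lines:
        speaker, _, text = line.partition(":")
        if speaker != current_speaker and buffer:
            paragraphs.append(" ".join(buffer))   # speaker changed
            buffer = []
        current_speaker = speaker
        buffer.append(text.strip())
    if buffer:
        paragraphs.append(" ".join(buffer))
    return [p for p in paragraphs if len(p) >= min_chars]
```

Because the filter only discards whole short paragraphs and never rewrites surviving text, it preserves the "purity" property the methodology depends on: coders still read raw transcript text, just less of it.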
Procedia PDF Downloads 34
4721 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script
Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim
Abstract:
A butterfly valve is a quarter-turn valve used to control the flow of a fluid through a section of pipe. Butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants; in particular, large-diameter butterfly valves find immense application in hydro power plants to control fluid flow. Given the constraints in cost and size of a laboratory setup, large-diameter valves are mostly studied by computational methods, which are an effective and inexpensive solution. For fluid and structural analysis, CFD and FEM software are used to perform large-scale valve analyses, respectively. To perform such analyses on a butterfly valve, the CAD model has to be recreated and meshed in conventional software for each set of valve dimensions, a time-consuming limitation. To overcome this issue, a Python script was created that produces the complete pre-processing setup automatically in the Salome software. Supplying the model dimensions directly in the Python code makes the run time comparatively lower and provides an easier way to perform the valve analysis. Hence, in this paper, an attempt was made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using Python code in the pre-processing software, and results are produced.
Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis
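The benefit of scripting the pre-processing is that one parameter sweep regenerates every case. A minimal sketch of such a parametric driver is shown below; the dimension names, the disc-thickness scaling rule, and the case-naming scheme are illustrative assumptions, not taken from the authors' Salome script:

```python
import itertools

def generate_cases(diameters_mm, opening_angles_deg):
    """Enumerate the (diameter, angle) design points that an automated
    pre-processing sweep would loop over when rebuilding the valve
    geometry and mesh for each case."""
    cases = []
    for d, a in itertools.product(diameters_mm, opening_angles_deg):
        cases.append({
            "name": f"valve_D{d}_A{a}",
            "disc_diameter_mm": d,
            "disc_thickness_mm": round(0.1 * d, 1),  # assumed 10% rule
            "opening_angle_deg": a,
        })
    return cases

# Six cases: two diameters crossed with three disc opening angles.
cases = generate_cases([300, 600], [30, 60, 90])
```

Each case dictionary would then be handed to the geometry and meshing stage, so changing a dimension means editing one list rather than rebuilding CAD models by hand.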
Procedia PDF Downloads 243
4720 Music Reading Expertise Facilitates Implicit Statistical Learning of Sentence Structures in a Novel Language: Evidence from Eye Movement Behavior
Authors: Sara T. K. Li, Belinda H. J. Chung, Jeffery C. N. Yip, Janet H. Hsiao
Abstract:
Music notation and text reading both involve statistical learning of musical or linguistic structures. However, it remains unclear how music reading expertise influences text reading behavior. The present study examined this issue through an eye-tracking study. Chinese-English bilingual musicians and non-musicians read English sentences, Chinese sentences, musical phrases, and sentences in Tibetan, a language novel to the participants, with their eye movements recorded. Each set of stimuli consisted of two conditions in terms of structural regularity: syntactically correct and syntactically incorrect musical phrases/sentences. Participants then completed a sentence comprehension task (for syntactically correct sentences) or a musical segment/word recognition task to test their comprehension/recognition abilities. The results showed that in reading musical phrases, as compared with non-musicians, musicians had a higher accuracy in the recognition task, and had shorter reading times, fewer fixations, and shorter fixation durations when reading syntactically correct (i.e., in a diatonic key) than incorrect (i.e., non-diatonic or atonal) musical phrases. This result reflects their expertise in music reading. Interestingly, in reading Tibetan sentences, which were novel to both participant groups, while non-musicians did not show any behavioral differences between reading syntactically correct and incorrect Tibetan sentences, musicians showed shorter reading times and marginally fewer fixations when reading syntactically correct sentences than syntactically incorrect ones. However, none of the musicians reported discovering any structural regularities in the Tibetan stimuli when asked explicitly after the experiment, suggesting that they may have implicitly acquired the structural regularities in Tibetan sentences. This group difference was not observed when they read English or Chinese sentences.
This result suggests that music reading expertise facilitates reading texts in a novel language (i.e., Tibetan), but not in languages that the readers are already familiar with (i.e., English and Chinese). This phenomenon may be due to the similarities between reading music notation and reading texts in a novel language: in both cases, the stimuli follow particular statistical structures but do not involve semantic or lexical processing. Thus, musicians may transfer the statistical learning skills stemming from their music notation reading experience to implicitly discover the structures of sentences in a novel language. This speculation is consistent with a recent finding showing that music reading expertise modulates the processing of English nonwords (i.e., letter strings that do not follow morphological or orthographic rules) but not pseudo- or real words. These results suggest that the modulation of music reading expertise on language processing depends on the similarities in the cognitive processes involved. It also has important implications for the benefits of music education on language and cognitive development.
Keywords: eye movement behavior, eye-tracking, music reading expertise, sentence reading, structural regularity, visual processing
Procedia PDF Downloads 385
4719 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction
Authors: Qais M. Yousef, Yasmeen A. Alshaer
Abstract:
Over the last few years, the amount of data available around the globe has increased rapidly. This has accompanied the emergence of recent concepts, such as big data and the Internet of Things, which have furnished suitable solutions for making data available all over the world. However, managing this massive amount of data remains a challenge due to its large variety of types and wide distribution. Consequently, locating a required file, particularly on the first attempt, is not an easy task, due to the large similarity of names among different files distributed on the web, and the accuracy and speed of search have been negatively affected as a result. This work presents a method using electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work analyzes the mind-wave signals of different people, extracting their most appropriate features using a multi-objective metaheuristic algorithm and then classifying them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts only. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as a first choice for the user.
Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization
Procedia PDF Downloads 182
4718 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to its low sensitivity to noisy images and low-contrast mammograms.
Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis
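The circle Hough transform at the core of this abstract can be sketched in plain NumPy: each edge pixel votes for every centre that could have produced it at a known radius, and the accumulator maximum locates the circle. The synthetic "microcalcification" ring below is an illustrative stand-in, not mammographic data, and a single fixed radius is assumed for simplicity.

```python
import numpy as np

def circle_hough(edges, radius):
    """Accumulate votes for circle centres of a known radius.

    `edges` is a binary edge map; every edge pixel votes for all centres
    lying `radius` away from it. Returns the (row, col) of the best centre.
    """
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic high-contrast ring of radius 10 centred at (40, 60),
# standing in for an edge-extracted microcalcification.
img = np.zeros((100, 100), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
img[np.round(40 + 10 * np.sin(t)).astype(int),
    np.round(60 + 10 * np.cos(t)).astype(int)] = True

center = circle_hough(img, radius=10)
```

A production pipeline would sweep over a range of radii (or use an implementation such as OpenCV's `HoughCircles`); the single-radius accumulator above is the conceptual core.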
Procedia PDF Downloads 88
4717 Contribution of Remote Sensing and GIS to the Study of the Impact of the Salinity of Sebkhas on the Quality of Groundwater: Case of Sebkhet Halk El Menjel (Sousse)
Authors: Gannouni Sonia, Hammami Asma, Saidi Salwa, Rebai Noamen
Abstract:
Water resources in Tunisia have experienced quantitative and qualitative degradation, especially in wetlands and sebkhas. The objective of this work is to study the spatio-temporal evolution of salinity over 29 years (from 1987 to 2016). A study of the connection between surface water and groundwater is necessary to know the degree of influence of the sebkha brines on the water table. The evolution of surface salinity is determined by remote sensing based on Landsat TM and OLI/TIRS satellite images from the years 1987, 2007, 2010, and 2016. The processing of these images allowed us to determine the NDVI (Normalized Difference Vegetation Index), the salinity index, and the surface temperature around the sebkha. In addition, through a geographic information system (GIS), we could establish a map of the distribution of salinity in the subsurface of the water table of Chott Mariem and Hergla/Sidi Bou Ali/Kondar. The results of image processing and the calculation of the index and surface temperature show an increase in salinity downstream of the sebkha and the development of vegetation cover upstream and in the western part of the sebkha. This richness may be due both to contamination by seawater infiltrating from the barrier beach of Hergla and to the passage of groundwater to the sebkha.
Keywords: spatio-temporal monitoring, salinity, satellite images, NDVI, sebkha
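The NDVI computed from the Landsat bands in this study is a simple per-pixel band ratio, (NIR − Red) / (NIR + Red); a salinity index is derived analogously from other band combinations. The sketch below uses toy reflectance patches, not the study's actual Landsat scenes.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR/Red reflectance.

    Values near +1 indicate dense vegetation; values near 0 indicate
    bare soil or saline crust. The small epsilon avoids division by zero.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# Toy reflectance patches: vegetated pixels reflect strongly in NIR,
# while a bare saline crust reflects similarly in both bands.
nir = np.array([[0.45, 0.45],
                [0.20, 0.20]])
red = np.array([[0.10, 0.10],
                [0.18, 0.18]])
v = ndvi(nir, red)
```

The top row (vegetation) yields NDVI around 0.64, the bottom row (bare/saline ground) near 0.05, which is the contrast the study maps upstream versus downstream of the sebkha.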
Procedia PDF Downloads 136
4716 Common Used Non-Medical Practice and Perceived Benefits in Couples with Fertility Problems in Turkey
Authors: S. Fata, M. A. Tokat, N. Bagardi, B. Yilmaz
Abstract:
Nowadays, various traditional practices are used throughout the world with the aim of improving fertility. Various traditional remedies, acupuncture, and religious practices such as sacrifice are frequently used. Studies often evaluate the traditional practices used by women, but the use of these non-medical practices by couples, and their specific reasons for using them, have been less investigated. The aim of this study was to evaluate the commonly used non-medical practices and determine the benefits perceived by couples with fertility problems in Turkey. This is a descriptive study. Research data were collected between May and July 2016, in the Izmir Ege Birth Education and Research Hospital Assisted Reproduction Clinic, from 151 couples with fertility problems. A Personal Information Form and a Non-Medical Practices Used for Fertility Evaluation Form were used. Permission letter number 'GOA 2649' from the Dokuz Eylul University Non-Invasive Research Ethics Board, a permission letter from the institution, and written consent from the participants were received to carry out the study. In the evaluation of the data, frequency and proportion analyses were used. The average age of the women participating in the study was 32.87; 35.8% were high school graduates, 60.3% were housewives, and 58.9% lived in a city. Of the husbands, 30.5% were high school graduates, 96.7% were employed, and 60.9% lived in a city. Of the couples, 78.1% lived as a nuclear family; the average marriage duration was 7.58 years; in 33.8% the fertility problem stemmed from the woman; 42.4% had received a diagnosis for 1-2 years; and 35.1% had been treated for 1-2 years. Of the women, 35.8% reported use of non-medical applications.
Among the women, 24.4% used figs, onion cure, hacamat, locust, or bee-pollen milk; 18.2% used herbs; 13.1% vowed; 12.1% went to a tomb; 10.1% did not bathe for a few days after the embryo transfer; 9.1% used thermal water baths; 5.0% had the womb manually corrected; 5.0% had amulets printed by a Hodja; and 3.0% went to a Hodja/pilgrims. Among the perceived benefits of using non-medical practices, facilitating pregnancy and implantation and improving oocyte quality were the most frequently expressed. Women said that they often used herbs to develop follicles, did not bathe after embryo transfer with the aim of supporting implantation, and used thermal waters to get rid of infection. Compared to the women, only 25.8% of the men used non-medical practices. Of these, 52.1% reported that they used peanuts, hacamat, locust, or bee-pollen milk; 14.9% used herbs; 12.8% vowed; 10.1% went to a tomb; and 10.1% used thermal water baths. Improving sperm number, motility, and quality were the most expected benefits. Men said that they often used herbs to improve sperm number, and used peanuts, hacamat, locust, and bee-pollen milk to improve sperm motility and quality. Couples in Turkey often use non-medical practices to deal with fertility problems. Some of the practices considered useful can adversely affect health. Healthcare providers should evaluate the use of non-medical practices and should inform patients of any known adverse effects on health.
Keywords: fertility, couples, non-medical practice, perceived benefit
Procedia PDF Downloads 344
4715 Revealing the Urban Heat Island: Investigating Its Spatial and Temporal Changes and Relationship with Air Quality
Authors: Aneesh Mathew, Arunab K. S., Atul Kumar Sharma
Abstract:
The uncontrolled rise in population has led to unplanned, swift, and unsustainable urban expansion, causing detrimental environmental impacts on both local and global ecosystems. This research delves into a comprehensive examination of the Urban Heat Island (UHI) phenomenon in Bengaluru and Hyderabad, India. It centers on the spatial and temporal distribution of UHI and its correlation with air pollutants. Conducted across summer and winter seasons from 2001 to 2021 in Bengaluru and Hyderabad, this study found that UHI intensity varies seasonally, peaking in summer and decreasing in winter. The annual maximum UHI intensities range between 4.65 °C and 6.69 °C in Bengaluru and between 5.74 °C and 6.82 °C in Hyderabad. Bengaluru in particular experiences notable fluctuations in average UHI intensity. Introducing the Urban Thermal Field Variance Index (UTFVI), the study indicates a consistently strong UHI effect in both cities, significantly impacting living conditions. Moreover, hotspot analysis demonstrates a rising trend in UHI-affected areas over the years in Bengaluru and Hyderabad. This research underscores the connection between air pollutant concentrations and land surface temperature (LST), highlighting the necessity of comprehending UHI dynamics for urban environmental management and public health. It contributes to a deeper understanding of UHI patterns in swiftly urbanizing areas, providing insights into the intricate relationship between urbanization, climate, and air quality. These findings serve as crucial guidance for policymakers, urban planners, and researchers, facilitating the development of innovative, sustainable strategies to mitigate the adverse impacts of uncontrolled expansion while promoting the well-being of local communities and the global environment.
Keywords: urban heat island effect, land surface temperature, air pollution, urban thermal field variance index
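The UTFVI mentioned in this abstract is commonly formulated per pixel as (Ts − Tmean) / Ts, where Ts is the land surface temperature and Tmean the scene average; positive values flag heat-island hotspots. The sketch below assumes that common formulation (the abstract does not spell it out) and uses a toy 3×3 LST scene rather than the study's satellite-derived data.

```python
import numpy as np

def utfvi(lst):
    """Urban Thermal Field Variance Index: (Ts - T_mean) / Ts per pixel,
    with T_mean the scene-average land surface temperature (kelvin)."""
    t_mean = lst.mean()
    return (lst - t_mean) / lst

# Toy LST scene (kelvin): a hot urban core surrounded by cooler pixels.
lst = np.array([[300.0, 301.0, 300.0],
                [301.0, 312.0, 301.0],
                [300.0, 301.0, 300.0]])
index = utfvi(lst)
```

The hot central pixel gets a positive index (heat-island stress) and the cooler surroundings go slightly negative; in practice the index values are binned into ecological-evaluation classes.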
Procedia PDF Downloads 86
4714 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition Using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is employed in banks and other high-security buildings. One popular mobile application reads a visiting card and directly stores it to the contacts. OCRs are also known to be used in radar systems for reading the license plates of speeding vehicles, among other things. The implementation of our project has been done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module aims at database generation. The database was generated using two methods: (a) Run-time generation: the database is generated at compilation time using the built-in fonts of the OpenCV library. Human intervention is not necessary for generating this database. (b) Contour detection: a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, and dilating, and is made ready for segmentation. Segmentation includes extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: The extracted letters are classified and predicted using the neural networks algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
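The adaptive-thresholding step in the preprocessing module can be sketched in NumPy as a mean-filter threshold (the idea behind OpenCV's `ADAPTIVE_THRESH_MEAN_C` mode). This is an illustrative stand-in for the project's actual OpenCV code; the 9×9 "page" with a single dark stroke is synthetic.

```python
import numpy as np

def adaptive_threshold(img, block=5, c=2):
    """Binarize by comparing each pixel to the mean of its block x block
    neighbourhood minus a small constant c, using edge padding at the
    borders. Returns 1 for background, 0 for dark strokes."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Local means via a sliding window over the padded image.
    windows = np.lib.stride_tricks.sliding_window_view(padded, (block, block))
    local_mean = windows.mean(axis=(2, 3))
    return (img > local_mean - c).astype(np.uint8)

# Dark letter stroke (0) on a bright page (200): stroke pixels fall below
# their local mean and map to 0, while the background maps to 1, even if
# the page illumination were uneven (the point of *adaptive* thresholding).
page = np.full((9, 9), 200, dtype=np.uint8)
page[2:7, 4] = 0  # a vertical stroke
binary = adaptive_threshold(page, block=5, c=2)
```

Segmentation would then scan `binary` for connected runs of zeros to extract lines, words, and letters.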
Procedia PDF Downloads 392
4713 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Based on Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran
Authors: M. Ahmadi, M. Kafil, H. Ebrahimi
Abstract:
Nowadays, induction motors play a significant role in industry. Condition monitoring (CM) of this equipment has gained remarkable importance during recent years due to huge production losses, substantial imposed costs, and increases in vulnerability, risk, and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM. This method can be used for detecting broken rotor bars. Signal processing methods such as the Fast Fourier Transform (FFT), the wavelet transform, and Empirical Mode Decomposition (EMD) are used for analyzing MCSA output data. In this study, these signal processing methods are used to detect the broken bar problem in induction motors of the Mobarakeh Steel Company. Based on the wavelet transform method, an index for fault detection, CF, is introduced, defined as the variation of the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the value of the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMFs) is calculated, and we find that when motor bars become broken, the energy of the IMFs increases.
Keywords: broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform
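The FFT side of MCSA rests on a classic signature: broken rotor bars introduce sidebands at (1 ± 2s)·f around the supply frequency f, where s is the slip. The sketch below simulates a stator current with and without those sidebands and reads their amplitude off the spectrum; the supply frequency, slip, and sideband amplitude are illustrative values, not measurements from the Mobarakeh motors.

```python
import numpy as np

fs, f, s = 1000.0, 50.0, 0.04          # sample rate (Hz), supply freq, slip
t = np.arange(0, 10, 1 / fs)           # 10 s of simulated stator current

# Healthy current, plus the (1 +/- 2s)f sidebands typical of broken bars.
healthy = np.sin(2 * np.pi * f * t)
faulty = (healthy
          + 0.05 * np.sin(2 * np.pi * (1 - 2 * s) * f * t)
          + 0.05 * np.sin(2 * np.pi * (1 + 2 * s) * f * t))

def band_amplitude(x, freq, fs):
    """Amplitude of the FFT bin nearest `freq` (single-sided spectrum)."""
    spec = np.abs(np.fft.rfft(x)) * 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

lower = band_amplitude(faulty, (1 - 2 * s) * f, fs)    # 46 Hz sideband
ref = band_amplitude(healthy, (1 - 2 * s) * f, fs)     # ~0 when healthy
```

The faulty spectrum shows the 46 Hz sideband at its injected amplitude while the healthy spectrum is essentially flat there, which is the contrast a threshold-based MCSA detector exploits.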
Procedia PDF Downloads 154
4712 Modernization of Garri-Frying Technologies with Respect to Women's Anthropometric Qualities in Nigeria
Authors: Adegbite Bashiru Adeniyi, Olaniyi Akeem Olawale, Ayobamidele Sinatu Juliet
Abstract:
The study was carried out in the six South Western states of Nigeria to analyze the socio-economic characteristics of garri processors and their anthropometric qualities with respect to the modern technologies used in garri processing. About 20 respondents were randomly selected from each of the six workstations purposively chosen for the study because their daily processing activities had already attracted high patronage from customers. These include Oguntolu village (Ogun State), Igoba-Akure (Ondo State), Imo-Ilesa (Osun State), Odo Oba-Ileri (Oyo State), Irasa village (Ekiti State), and Epe in Lagos State. An interview schedule was conducted for 120 respondents to elicit information. Data were analyzed using descriptive statistical tools. It was observed from the findings that respondents were in their most productive age range (36-45 years), except in Ogun State, where the majority (45%) were older than 45 years. Few processors were younger than 26 years old. The findings further revealed that not less than 55% had a body weight greater than 50.0 kilograms, and not less than 70% were taller than 1.5 meters. Likewise, the hand length and hand thickness of the majority were long and bulky, which are considered suitable for operating some modern and improved technologies in the garri-frying process. This information could be used by technology developers to enhance the production of modern equipment and tools for greater efficiency.
Keywords: agro-business, anthropometric, modernization, proficiency
Procedia PDF Downloads 518
4711 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical, efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of a monitored object's coordinates requires taking some time to collect observation data by means of the C-OTDR system, and only when the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems, we need to obtain the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). There are several SPs which carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are non-stable. On the other hand, when an SP is very stable, it becomes insensitive as a rule. This report describes a method for co-processing of SPs which is designed to obtain the most effective estimates of dynamic object localization in the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
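One standard way to co-process several statistically independent, differently reliable estimates is inverse-variance weighting: stable parameters get large weights, sensitive-but-noisy ones small weights, and all contribute. The abstract does not specify the authors' fusion rule, so the sketch below is a generic illustration under that assumption, with made-up positions and variances.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent position estimates.

    Each signaling parameter yields its own noisy coordinate estimate;
    weighting by 1/variance favours the stable parameters while still
    letting the sensitive ones contribute.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

# Three SP-derived estimates of an object at ~120 m along the cable:
# two stable channels and one sensitive but noisy one.
fused = fuse_estimates([119.8, 120.3, 124.0], [0.5, 0.5, 10.0])
```

The noisy outlier at 124.0 m barely moves the fused estimate away from the two stable readings, which is exactly the behaviour one wants for real-time decisions.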
Procedia PDF Downloads 475
4710 Drivers of Farmers' Contract Compliance Behaviour: Evidence from a Case Study of Dangote Tomato Processing Plant in Northern Nigeria
Authors: Umar Shehu Umar
Abstract:
Contract farming is a viable strategy agribusinesses rely on to strengthen vertical coordination. However, low contract compliance remains a significant setback to agribusinesses' contract performance. The present study aims to understand what drives smallholder farmers' contract compliance behaviour. Qualitative information was collected through focus group discussions to enrich the design of the survey questionnaire administered to a sample of 300 randomly selected farmers contracted by the Dangote Tomato Processing Plant (DTPP) in four regions of northern Nigeria. Novel transaction-level data on tomato sales covering one season were collected in addition to socio-economic information on the sampled farmers. Binary logistic model results revealed that open fresh market tomato prices and payment delays negatively affect farmers' compliance behaviour, while quantity harvested, education level, and input provision correlated positively with compliance. The study suggests that contract compliance will increase if contracting firms devise a reliable and timely payment plan (e.g., digital payment) and continue input and service provisions (e.g., improved seeds, extension services) and incentives (e.g., loyalty rewards, bonuses) in the contract.
Keywords: contract farming, compliance, farmers and processors, smallholder
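The binary logistic model behind these results can be sketched from scratch: compliance is a 0/1 outcome, and the coefficient signs on the drivers are what the study reports. The data below are synthetic stand-ins for two of the drivers (payment delay, input provision) with effect directions chosen to mirror the findings; it is not the DTPP survey data, and the fitted magnitudes are meaningless beyond their signs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic drivers: payment delay in days (negative effect on compliance)
# and whether inputs were provided, 0/1 (positive effect).
n = 200
delay = rng.uniform(0, 30, n)
inputs = rng.integers(0, 2, n).astype(float)
true_logit = 1.5 - 0.15 * delay + 2.0 * inputs
comply = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit a binary logistic model by batch gradient descent on the
# negative log-likelihood; delay is standardized for stable steps.
z = (delay - delay.mean()) / delay.std()
X = np.column_stack([np.ones(n), z, inputs])   # intercept, delay, inputs
beta = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ beta))            # predicted P(comply)
    beta -= 0.1 * X.T @ (p - comply) / n       # gradient step
```

After fitting, `beta[1]` (payment delay) comes out negative and `beta[2]` (input provision) positive, reproducing the qualitative pattern the abstract describes.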
Procedia PDF Downloads 61
4709 Tracking and Classifying Client Interactions with Personal Coaches
Authors: Kartik Thakore, Anna-Roza Tamas, Adam Cole
Abstract:
The World Health Organization (WHO) reports that by 2030 more than 23.7 million deaths annually will be caused by cardiovascular diseases (CVDs), with a 2008 economic impact of $3.76 T. Metabolic syndrome is a disorder of multiple metabolic risk factors strongly implicated in the development of cardiovascular diseases. Guided lifestyle intervention driven by live coaching has been shown to have a positive impact on metabolic risk factors. Individuals' paths to improved (decreased) metabolic risk factors are driven by personal motivation and personalized messages delivered by coaches and augmented by technology. Using interactions captured between 400 individuals and 3 coaches over a program period of 500 days, a preliminary model was designed. A novel real-time event tracking system was created to track and classify clients based on their genetic profile, baseline questionnaires, and usage of a mobile application with live coaching sessions. Classification of clients and coaches was done using a support vector machines application built on Apache Spark, the Stanford Natural Language Processing Library (SNLPL), and decision modeling.
Keywords: guided lifestyle intervention, metabolic risk factors, personal coaching, support vector machines application, Apache Spark, natural language processing
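The support-vector-machine classification step can be sketched with the Pegasos sub-gradient algorithm for a linear SVM. This is a from-scratch illustration, not the authors' Spark pipeline: the two-dimensional "client features" (think app usage and session count), the class separation, and the hyperparameters are all invented for the demo, and the bias term is dropped since the synthetic classes are symmetric about the origin.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy client groups in a 2-D feature space (e.g. app usage, sessions).
X = np.vstack([rng.normal(-2, 0.6, (40, 2)),
               rng.normal(+2, 0.6, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

# Pegasos-style stochastic sub-gradient descent on the hinge loss.
lam = 0.01                      # regularization strength
w = np.zeros(2)
for t in range(1, 2001):
    i = rng.integers(len(y))    # pick one sample at random
    eta = 1 / (lam * t)         # decaying step size
    if y[i] * (X[i] @ w) < 1:   # margin violated: hinge term is active
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:                       # only the regularizer shrinks w
        w = (1 - eta * lam) * w

acc = (np.sign(X @ w) == y).mean()
```

On this cleanly separated toy data the learned hyperplane classifies all clients correctly; a kernel or a distributed solver (as in the Spark-based system) is needed once the classes are not linearly separable.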
Procedia PDF Downloads 434
4708 Evaluation of Different Cowpea Genotypes Using Grain Yield and Canning Quality Traits
Authors: Magdeline Pakeng Mohlala, R. L. Molatudi, M. A. Mofokeng
Abstract:
Cowpea (Vigna unguiculata (L.) Walp) is an important annual leguminous crop in semi-arid and tropical regions. Most cowpea grain production in South Africa is mainly used for domestic consumption or as seed for planting, and little or none is used in industrial processing; thus, there is a need to expand the utilization of cowpea through industrial processing. Agronomic traits contribute to the understanding of the association between yield and its component traits, facilitating effective selection for yield improvement. The aim of this study was to evaluate cowpea genotypes using grain yield and canning quality traits. The field experiment was conducted at two locations in Limpopo Province, namely the Syferkuil Agricultural Experimental farm and Ga-Molepo village, during the 2017/2018 growing season, and canning took place at ARC-Grain Crops, Potchefstroom. The experiment comprised 100 cowpea genotypes laid out in a Randomized Complete Block Design (RCBD). The grain yield, yield components, and canning quality traits were analysed using Genstat software. About 62 genotypes were suitable for canning; 38 were not, due to their seed coat texture and water uptake of less than 80%, which resulted in too-soft (mushy) seeds. Grain yield for RV115, 99k-494-6, ITOOK1263, RV111, RV353, and 53 other genotypes recorded a high positive association with number of branches, pods per plant, number of seeds per pod, unshelled weight, and shelled weight, stronger at Syferkuil than at Ga-Molepo; these genotypes are therefore recommended for canning quality.
Keywords: agronomic traits, canning quality, genotypes, yield
Procedia PDF Downloads 157
4707 Effect of Baffles on the Cooling of Electronic Components
Authors: O. Bendermel, C. Seladji, M. Khaouani
Abstract:
In this work, we made a numerical study of the thermal and dynamic behaviour of air in a horizontal channel with electronic components. The influence of using baffles on the velocity and temperature profiles is discussed. The finite volume method and the SIMPLE algorithm are used for solving the equations of conservation of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components. The velocity increases up to 3 times relative to the initial velocity.
Keywords: electronic components, baffles, cooling, fluids engineering
Procedia PDF Downloads 300
4706 Restoration of Digital Design Using Row and Column Major Parsing Technique from the Old/Used Jacquard Punched Cards
Authors: R. Kumaravelu, S. Poornima, Sunil Kumar Kashyap
Abstract:
The optimized and digitalized restoration of information from old and used manual jacquard punched cards in the textile industry is referred to as a Jacquard Punched Card (JPC) reader. In this paper, we present a novel design and development of a photoelectronics-based system for reading old and used punched cards and storing their binary information for transforming them into an effective image file format. In our textile industry, the jacquard punched card holes have diameters of 3 mm, 5 mm, and 5.5 mm pitch. Before the adoption of computing systems in the field of the textile industry, those punched cards were prepared manually without a digital design source, but they hold rich woven designs. Now, the idea is to retrieve the binary information from the jacquard punched cards and store it in a digital (non-graphics) format before processing it. After processing, the digital (non-graphics) format is converted into an effective image file format through either a row-major or a column-major parsing technique. To accomplish these activities, an embedded-system-based device and software integration were developed. As part of the test and trial activity, the device was tested and installed for industrial service at the Weavers Service Centre, Kanchipuram, Tamil Nadu, India.
Keywords: file system, SPI, UART, ARM controller, jacquard, punched card, photo LED, photo diode
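The row-major versus column-major parsing at the heart of this abstract amounts to rebuilding the hole matrix from a linear stream of photo-diode readings in one of two scan orders. The sketch below illustrates the idea on a tiny 2×3 "card"; the function and the card contents are invented for the demo and say nothing about the paper's embedded firmware.

```python
import numpy as np

def parse_card(bits, rows, cols, order="row"):
    """Rebuild the hole matrix from a linear stream of readings.

    order="row" fills the matrix row by row (row-major parsing);
    order="col" fills it column by column (column-major parsing).
    """
    flat = np.asarray(bits, dtype=np.uint8)
    if order == "row":
        return flat.reshape(rows, cols)   # row-major (C order)
    return flat.reshape(cols, rows).T     # column-by-column scan

# A 2x3 card: 1 = punched hole. Read it out in both scan orders,
# then parse each stream back into the matrix.
card = np.array([[1, 0, 1],
                 [0, 1, 0]], dtype=np.uint8)
row_stream = card.flatten()       # row-by-row readout
col_stream = card.T.flatten()     # column-by-column readout

back_row = parse_card(row_stream, 2, 3, "row")
back_col = parse_card(col_stream, 2, 3, "col")
```

Both parses recover the same hole matrix; which one applies depends on how the reading head physically sweeps the card.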
Procedia PDF Downloads 169
4705 Identification of Lipo-Alkaloids and Fatty Acids in Aconitum carmichaelii Using Liquid Chromatography–Mass Spectrometry and Gas Chromatography–Mass Spectrometry
Authors: Ying Liang, Na Li
Abstract:
Lipo-alkaloids are a kind of C19-norditerpenoid alkaloid found in Aconitum species, usually containing an aconitane skeleton and one or two fatty acid residues. The structures are very similar to those of the diester-type alkaloids, which are considered the main bioactive components in Aconitum carmichaelii. They have anti-inflammatory, anti-nociceptive, and anti-proliferative activities. So far, more than 200 lipo-alkaloids have been reported from plants, semisynthesis, and biotransformations. In our research, by the combination of ultra-high performance liquid chromatography-quadrupole-time of flight mass spectrometry (UHPLC-Q-TOF-MS) and an in-house database, 148 lipo-alkaloids were identified from A. carmichaelii, including 93 potential new compounds and 38 compounds with oxygenated fatty acid moieties. To our knowledge, this is the first report of oxygenated fatty acids as the side chains of naturally occurring lipo-alkaloids. Considering that the fatty acid residues in lipo-alkaloids should come from the free acids in the plant, the fatty acids and their relationship with the lipo-alkaloids were further investigated by GC-MS and LC-MS. Among the 17 fatty acids identified by GC-MS, 12 were detected as side chains of lipo-alkaloids, accounting for about 1/3 of the total lipo-alkaloids, while these fatty acid residues made up less than 1/4 of the total fatty acid residues. In total, 37 fatty acids were determined by UHPLC-Q-TOF-MS, including 18 oxidized fatty acids identified for the first time from A. carmichaelii. These fatty acids were observed as side chains of lipo-alkaloids.
In addition, although over 140 lipo-alkaloids were identified, six lipo-alkaloids, 8-O-linoleoyl-14-benzoylmesaconine (1), 8-O-linoleoyl-14-benzoylaconine (2), 8-O-palmitoyl-14-benzoylmesaconine (3), 8-O-oleoyl-14-benzoylmesaconine (4), 8-O-pal-benzoylaconine (5), and 8-O-ole-benzoylaconine (6), were found to be the main components, accounting for over 90% of the content of the total lipo-alkaloids. Therefore, using these six components as standards, a UHPLC-Triple Quadrupole-MS (UHPLC-QQQ-MS) approach was established to investigate the influence of processing on the contents of lipo-alkaloids. Although it is commonly supposed that the contents of lipo-alkaloids increase after processing, our research showed no significant change before and after processing. Using the same methods, the lipo-alkaloids in the lateral roots of A. carmichaelii and the roots of A. kusnezoffii were determined and quantified. The contents of lipo-alkaloids in A. kusnezoffii were close to those in the parent roots of A. carmichaelii, while the lateral roots had fewer lipo-alkaloids than the parent roots. This work was supported by the Macao Science and Technology Development Fund (086/2013/A3 and 003/2016/A1).
Keywords: Aconitum carmichaelii, fatty acids, GC-MS, LC-MS, lipo-alkaloids
Procedia PDF Downloads 304
4704 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm and by using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
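The sliding-window feature extraction that feeds the classifier can be sketched as follows. The two features shown (mean absolute amplitude and line length) are common EEG seizure features chosen here for illustration — the abstract does not list which features Training Builder extracts — and the trace is a synthetic low-amplitude background followed by a high-amplitude burst, not real patient data.

```python
import numpy as np

def window_features(signal, win, step):
    """Slice the EEG trace into overlapping windows and compute simple
    per-window features: mean absolute amplitude and line length
    (the summed absolute sample-to-sample differences)."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([np.abs(w).mean(),            # amplitude feature
                      np.abs(np.diff(w)).sum()])   # line length
    return np.array(feats)

# Toy trace: quiet background, then a high-amplitude "seizure" burst.
rng = np.random.default_rng(3)
trace = np.concatenate([0.1 * rng.standard_normal(500),
                        2.0 * rng.standard_normal(500)])
F = window_features(trace, win=100, step=50)
```

The rows of `F` covering the burst show features an order of magnitude larger than the background rows; these per-window vectors are what a multilayer perceptron would be trained on, one label per window.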
Procedia PDF Downloads 191
4703 Study and Fine Characterization of the SS 316L Microstructures Obtained by Laser Beam Melting Process
Authors: Sebastien Relave, Christophe Desrayaud, Aurelien Vilani, Alexey Sova
Abstract:
Laser beam melting (LBM) is an additive manufacturing process that enables complex 3D parts to be designed. This process is now commonly employed for various applications, such as chemistry or energy, requiring the use of stainless steel grades. LBM can offer mechanical properties comparable and sometimes superior to those of wrought materials. However, we observed an anisotropic microstructure resulting from the process, caused by the very high thermal gradients along the building axis. This microstructure can be harmful depending on the application. For this reason, control and prediction of the microstructure are important to ensure the improvement and reproducibility of the mechanical properties. This study is focused on the 316L SS grade and aims at understanding the solidification and transformation mechanisms during the process. Experiments were conducted to analyse the nucleation and growth of the microstructure obtained by the LBM process under several conditions. The samples were built on different types of supports: bulk and lattice. Samples were produced on a ProX DMP 200 LBM device. For both conditions, the analysis of the microstructures by SEM and EBSD revealed a single austenite phase with preferential crystallite growth along the (100) plane. The microstructure presented a hierarchical structure consisting of columnar grains with sizes in the range of 20-100 µm and a sub-grain structure of size 0.5 µm. These sub-grains were found in different shapes (columnar and cellular). This difference can be explained by a variation of the thermal gradient and cooling rate or by element segregation, although no sign of element segregation was found at the sub-grain boundaries. A high dislocation concentration was observed at the sub-grain boundaries. These sub-grains are separated by very-low-misorientation walls (< 2°), which causes a lattice curvature inside the large grains.
A discussion is proposed on the occurrence of these microstructures' formation with regard to the LBM process conditions.
Keywords: selective laser melting, stainless steel, microstructure
Procedia PDF Downloads 162