Search results for: machine modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4562

1022 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach

Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde

Abstract:

Thermo-mechanical treatment (TMT) of rebars is a critical process for imparting sufficient strength and ductility to rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot rolled bar (at around 1080°C) is passed through water boxes where it is quenched under high-pressure water jets (at around 25°C). The quenching rate dictates the composite microstructure of the rebar, consisting of four non-homogeneously distributed phases: pearlite-ferrite, bainite, and tempered martensite (from core to rim). The ferrite and pearlite phases at the core give the rebar its ductility, while the martensitic rim provides the required strength. The TMT process is difficult to model because it brings together a multitude of complex physics, such as heat transfer and highly turbulent, multicomponent, multiphase flow within the control volume. Additionally, the presence of a film boiling regime (above the Leidenfrost point) due to steam formation adds further complexity. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and percentage martensite rim thickness of the rebar during quenching. The model has been validated against 16 mm rolling at the New Bar Mill (NBM) plant of Tata Steel Limited, India. Furthermore, based on scenario analyses, an optimal nozzle configuration was found, which enabled a subsequent increase in rolling speed.
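The quench timescale described above can be bracketed with a back-of-the-envelope lumped-capacitance estimate. This is only an order-of-magnitude sketch with hypothetical parameter values (heat-transfer coefficient, bar size, steel properties); the Biot number of a water-jet quench is far too high for lumped analysis to replace the radially resolved CFD model the abstract describes.

```python
import math

def quench_temperature(t, T0=1080.0, Tw=25.0, h=20000.0,
                       rho=7850.0, cp=650.0, d=0.016):
    """Lumped-capacitance estimate of bar temperature (deg C) after
    t seconds of water-jet quenching. All parameter values are
    illustrative assumptions, not taken from the abstract.

    T0: initial rolling temperature, Tw: water temperature,
    h: assumed boiling heat-transfer coefficient (W/m^2.K),
    d: bar diameter (m); A/V for a long cylinder is 4/d.
    """
    tau = rho * cp * d / (4.0 * h)          # thermal time constant (s)
    return Tw + (T0 - Tw) * math.exp(-t / tau)
```

The estimate decays exponentially toward the water temperature; a real model resolves radial gradients and the boiling-regime dependence of h, which is what produces the martensite rim.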

Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment

Procedia PDF Downloads 215
1021 Upsetting of Tri-Metallic St-Cu-Al and St-Cu60Zn-Al Cylindrical Billets

Authors: Isik Cetintav, Cenk Misirli, Yilmaz Can

Abstract:

This work investigates upsetting of tri-metallic cylindrical billets both experimentally and analytically at a reduction ratio of 30%. Steel, brass, and copper are used for the outer and outermost rings and aluminum for the inner core. Two models were designed to show the material flow and the cavity that forms over the two interfaces during forming at this reduction ratio. Each model has steel as its outermost ring. In Model 1 the outer ring between the outermost ring and the solid core is copper, while in Model 2 it is brass; the solid core is aluminum in both models. Billets were upset in a press machine using parallel flat dies, and the upsetting load was recorded and compared across the models and single-material billets. To extend the tests to a wider range of inner core and outer ring geometries, finite element simulations were performed using ABAQUS software. The aim is to show how contact among the outermost ring, the outer ring, and the inner core evolves throughout the upsetting process. The results show that, as the height changes, Model 1 and Model 2 exhibit very good interaction among the outermost ring, outer ring, and inner core, with varied interface behaviour at the contact surfaces. It is also observed that tri-metallic billets have lower weight but better mechanical properties than single-material ones, which suggests an avenue for producing such materials for different purposes.
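For a rough feel of the loads involved, an ideal (friction-free, single-material) upsetting estimate follows from volume constancy; the numbers are hypothetical, and a tri-metallic billet with interface cavities, as studied here, requires the full ABAQUS treatment.

```python
import math

def upsetting_load(d0_mm, h0_mm, reduction, flow_stress_mpa):
    """Ideal (friction-free) upsetting load for a cylindrical billet,
    in kN. reduction is the height reduction ratio (0.30 here).

    Volume constancy A0*h0 = A1*h1 gives the enlarged contact area;
    the load is then flow stress times that area.
    """
    a0 = math.pi * d0_mm ** 2 / 4.0       # initial cross-section (mm^2)
    a1 = a0 / (1.0 - reduction)           # area after height reduction
    return flow_stress_mpa * a1 / 1000.0  # MPa * mm^2 = N -> kN
```

The estimate grows with reduction, matching the rising load curves recorded during the tests; friction, barreling, and the dissimilar flow stresses of the three metals are deliberately ignored.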

Keywords: tri-metallic, upsetting, copper, brass, steel, aluminum

Procedia PDF Downloads 342
1020 Optimization of Alkali Silicate Glass Heat Treatment for the Improvement of Thermal Expansion and Flexural Strength

Authors: Stephanie Guerra-Arias, Stephani Nevarez, Calvin Stewart, Rachel Grodsky, Denis Eichorst

Abstract:

The objective of this study is to describe a framework for optimizing the heat treatment of alkali silicate glasses to enhance the performance of hermetic seals in extreme environments. When connectors are exposed to elevated temperatures, residual stresses develop due to the mismatch of thermal expansions between the glass, metal pin, and metal shell. Excessive thermal expansion mismatch compromises the reliability of hermetic seals. In this study, a series of heat treatment schedules will be performed on two commercial sealing glasses (one conventional sealing glass and one crystallizable sealing glass) using a design of experiments (DOE) approach. The coefficient of thermal expansion (CTE) will be measured pre- and post-heat treatment using thermomechanical analysis (TMA). Afterwards, the flexural strength of the specimens will be measured using a four-point bend fixture mounted in a static universal testing machine. The measured material properties will be statistically analyzed using Minitab software to determine which factors of the heat treatment process correlate strongly with the coefficient of thermal expansion and/or flexural strength. Finally, a heat treatment will be designed and tested to ensure the optimal performance of the hermetic seals in connectors.
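The DOE run matrix itself is easy to sketch. The factor names and levels below are purely illustrative (the abstract does not list the actual heat-treatment factors); the point is the mechanics of generating a full-factorial design table.

```python
from itertools import product

def full_factorial(factors):
    """Generate a full-factorial design table.

    factors: dict mapping factor name -> list of levels.
    Returns one dict per experimental run.
    """
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in product(*(factors[n] for n in names))]

# Hypothetical heat-treatment factors, two levels each -> 2^3 = 8 runs.
runs = full_factorial({
    "hold_temp_C": [450, 500],
    "soak_time_min": [30, 60],
    "cooling_rate_C_per_min": [2, 5],
})
```

Each run would then be executed, the CTE and flexural strength recorded, and the response table fed to the statistical analysis.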

Keywords: glass-ceramics, design of experiment, hermetic connectors, material characterization

Procedia PDF Downloads 150
1019 Diversity Indices as a Tool for Evaluating Quality of Water Ways

Authors: Khadra Ahmed, Khaled Kheireldin

Abstract:

In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on combining local phase information with texture features. Since the phase of a signal conveys more structural information than its magnitude, the phase congruency concept is used to capture structural features, while the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless nature of phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and CSLBP values of each pixel of the image with respect to its neighbourhood. The histogram of the oriented phase and the histogram of the CSLBP values for local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor outperforms a set of state-of-the-art feature extraction methodologies.
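A minimal sketch of the CSLBP operator mentioned above, for a single 3x3 neighbourhood (the threshold value is an illustrative assumption, and the authors' full multi-region extraction is not reproduced):

```python
def cs_lbp(patch, threshold=0.01):
    """Center-Symmetric LBP code (0..15) for one 3x3 patch.

    Unlike plain LBP, each of the four center-symmetric neighbour
    pairs is compared against each other rather than against the
    centre pixel, which is what keeps the operator stable on flat
    regions: a uniform patch always yields code 0.
    """
    # 8 neighbours of the centre pixel, in circular order
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > threshold:
            code |= 1 << i
    return code
```

Histogramming these 4-bit codes over local regions, and concatenating with the oriented-phase histograms, would produce an FST-style vector.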

Keywords: planktons, diversity indices, water quality index, water ways

Procedia PDF Downloads 518
1018 Selective Laser Melting (SLM) Process and Its Influence on the Machinability of TA6V Alloy

Authors: Rafał Kamiński, Joel Rech, Philippe Bertrand, Christophe Desrayaud

Abstract:

Titanium alloys are among the most important materials in the aircraft industry due to their low density, high strength, and corrosion resistance. However, these alloys are considered difficult to machine because they have poor thermal properties and high reactivity with cutting tools. The Selective Laser Melting (SLM) process has become increasingly popular throughout industry since it enables the design of new complex components that cannot be manufactured by standard processes. However, the high temperatures reached during the melting phase, as well as the many rapid heating and cooling cycles caused by the movement of the laser, induce complex microstructures. These microstructures differ from the conventional equiaxed ones obtained by casting and forging. Parts obtained by SLM have to be machined in order to calibrate the dimensions and the surface roughness of functional surfaces. The ball milling technique is widely applied to finish complex shapes; however, the machinability of titanium is strongly influenced by its microstructure. The objective of this work is therefore to investigate the influence of the SLM process, i.e. its microstructure, on the machinability of titanium, compared to conventional forming processes. Machinability is analyzed by measuring surface roughness, cutting forces, and cutting tool wear over a range of cutting conditions (depth of cut ap, feed per tooth fz, spindle speed N) in accordance with industrial practices.
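The cutting conditions named at the end of the abstract relate through two standard milling formulas, sketched below (the example values in the test are arbitrary, not the study's):

```python
import math

def cutting_speed_m_min(diameter_mm, spindle_rpm):
    """Peripheral cutting speed vc = pi * D * N, in m/min."""
    return math.pi * diameter_mm * spindle_rpm / 1000.0

def table_feed_mm_min(feed_per_tooth_mm, teeth, spindle_rpm):
    """Table feed vf = fz * z * N, in mm/min."""
    return feed_per_tooth_mm * teeth * spindle_rpm
```

Together with the depth of cut ap, these two derived quantities define the material removal conditions under which roughness, forces, and tool wear are compared.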

Keywords: ball milling, microstructure, surface roughness, titanium

Procedia PDF Downloads 297
1017 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining, and pattern recognition for overcoming the well-known Curse of Dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we employ the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As the search strategy, a modified binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers were employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and classification accuracy.
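A skeleton of a binary shuffled frog leaping search is sketched below. The leap rule (the worst frog inherits bits from the memeplex best with probability 0.5, otherwise randomizes them) is one plausible binary variant, not necessarily the paper's B-SFLA, and the fitness callable is a stand-in for the fuzzy-rough dependency degree:

```python
import random

def b_sfla(n_features, fitness, frogs=20, memeplexes=4,
           memetic_iters=5, generations=10, seed=0):
    """Minimal binary shuffled frog leaping search (sketch).

    fitness maps a 0/1 feature mask (tuple) to a score to maximise.
    Each generation, ranked frogs are dealt into memeplexes; within
    each memeplex the worst frog repeatedly attempts a leap toward
    the local best, then all memeplexes are shuffled back together.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features))
           for _ in range(frogs)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)              # rank frogs
        plexes = [pop[i::memeplexes] for i in range(memeplexes)]
        for plex in plexes:
            for _ in range(memetic_iters):
                best = plex[0]
                # leap: copy each bit from the local best with
                # probability 0.5, otherwise pick it at random
                new = tuple(b if rng.random() < 0.5 else rng.randint(0, 1)
                            for b in best)
                if fitness(new) > fitness(plex[-1]):
                    plex[-1] = new                       # replace worst
                plex.sort(key=fitness, reverse=True)
        pop = [f for plex in plexes for f in plex]       # shuffle
    return max(pop, key=fitness)
```

Hybridizing with the FRDD would simply mean passing the dependency-degree evaluation of a candidate feature mask as the fitness callable.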

Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct

Procedia PDF Downloads 225
1016 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home

Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We assume an IoT-based smart-home environment in which the on-off status of each electrical appliance, including the room lights, can be recognized in real time by monitoring and analyzing smart meter data. At any moment in such an environment, we can recognize what the household or user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models, we train classifiers such as naïve Bayes, decision tree, and logistic regression that map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviour, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performance of the aforementioned classifiers, we obtain their learning curves for different types of users through simulation. Our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
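As an illustration of the classification task, here is a tiny from-scratch Bernoulli naive Bayes over on/off appliance vectors; the toy data in the test is hypothetical, not output of the study's behaviour simulator.

```python
import math

class BernoulliNB:
    """Tiny Bernoulli naive Bayes for 0/1 appliance-status vectors.

    X rows are appliance on/off states; y labels are e.g. 'Yes'/'No'
    (vacuuming allowed or not). Laplace smoothing keeps a state never
    seen for a class from zeroing out that class entirely.
    """
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.p_on = {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            # smoothed probability that each appliance is ON given class c
            self.p_on[c] = [(sum(col) + 1) / (len(rows) + 2)
                            for col in zip(*rows)]
        return self

    def predict(self, x):
        def log_post(c):
            lp = math.log(self.prior[c])
            for xi, p in zip(x, self.p_on[c]):
                lp += math.log(p if xi else 1 - p)
            return lp
        return max(self.classes, key=log_post)
```

Training this on simulated (appliance-status, Yes/No) pairs and plotting accuracy against training-set size would reproduce the learning-curve comparison described above.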

Keywords: situation-awareness, smart home, IoT, machine learning, classifier

Procedia PDF Downloads 422
1015 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be reduced to eliminate defects and to improve the process capability indices Cp and Cpk of a CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied in this study to improve the process, reduce defects, and ultimately reduce costs. A Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that achieve the target surface roughness specified by the customer. An L9 orthogonal array was used in the Taguchi experimental design, with controllable factors and one non-controllable/noise factor. The controllable factors include feed rate, depth of cut, and spindle speed, with surface roughness as the measured response; the noise factor is the difference between the old cutting tool and the new cutting tool. A confirmation run with the optimal parameters verified the new parameter settings, which also improved the process capability index. The study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of a CNC milling process.
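Taguchi analysis of a smaller-the-better response such as surface roughness typically ranks runs by the signal-to-noise ratio; a minimal sketch (the run values in the test are hypothetical):

```python
import math

def sn_smaller_the_better(replicates):
    """Taguchi S/N ratio (dB) for a smaller-the-better response such
    as surface roughness: SN = -10 * log10(mean(y^2)).
    A larger (less negative) SN means a better, more robust run."""
    return -10.0 * math.log10(sum(y * y for y in replicates)
                              / len(replicates))
```

Computing this per L9 run and averaging by factor level identifies the level of each controllable factor that maximizes SN, which is how the optimal parameter combination for the confirmation run is chosen.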

Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology

Procedia PDF Downloads 242
1014 Supported Gold Nanocatalysts for CO Oxidation in Mainstream Cigarette Smoke

Authors: Krasimir Ivanov, Dimitar Dimitrov, Tatyana Tabakova, Stefka Kirkova, Anna Stoilova, Violina Angelova

Abstract:

It has been suggested that nicotine, CO, and tar in mainstream smoke are the most important substances, judged to be the compounds chiefly responsible for the health hazards of smoking. As nicotine is extremely important for the smoking qualities of cigarettes, and the tar yield in tobacco smoke has already been significantly reduced through filters of various contents and designs, the main efforts of cigarette researchers and manufacturers are directed at opportunities for reducing the CO content. A highly active ceria-supported gold catalyst was prepared by the deposition-precipitation method, and the possibilities for CO oxidation in a synthetic gaseous mixture were evaluated using continuous-flow equipment with a fixed-bed glass reactor at atmospheric pressure. The efficiency of the catalyst for CO oxidation in real cigarette smoke was examined with a single-port, puff-by-puff smoking machine, and a quality assessment of smoking through a cigarette holder containing the catalyst was carried out. It was established that the catalytic activity toward CO oxidation in cigarette smoke rapidly decreases, from 70% for the first cigarette to nearly zero for the twentieth. The present study shows that two critical factors prevent the successful use of catalysts to reduce the CO content in mainstream cigarette smoke: (i) the significant influence of the adsorption and oxidation processes on the main characteristics of the tobacco products, and (ii) rapid deactivation of the catalyst due to the covering of the catalyst's grains with condensate.

Keywords: cigarette smoke, CO oxidation, gold catalyst, mainstream

Procedia PDF Downloads 219
1013 A Comparison between Shear Bond Strength of VMK Master Porcelain with Three Base-Metal Alloys (Ni-Cr-T3, Verabond, Super Cast) and One Noble Alloy (X-33) in Metal-Ceramic Restorations

Authors: Ammar Neshati, Elham Hamidi Shishavan

Abstract:

Statement of Problem: The increasing use of metal-ceramic restorations and the high prevalence of porcelain chipping call for an alloy that is more compatible with porcelain and produces a stronger bond between the two. This study compares the shear bond strength of three base-metal alloys and one noble alloy with the common VMK Master Porcelain. Materials and Method: Three groups of base-metal alloys (Ni-Cr-T3, Super Cast, Verabond) and one group of noble alloy (X-33) were selected, with 15 specimens in each group. All groups went through the casting process, converting the wax patterns into metal disks, and VMK Master Porcelain was then fired onto each group. All specimens were placed in a universal testing machine (UTM), and a shear force was applied until fracture occurred; the fracture force was recorded by the machine. The data were analyzed in SPSS Version 16: one-way ANOVA was run to compare shear strength between the groups, and pairwise comparisons were made using the Tukey test. Results: The findings of this study revealed that the shear bond strength of the Ni-Cr-T3 alloy was higher than that of the three other alloys (94 MPa, or 330 N). The Super Cast alloy had the second-greatest shear bond strength (80.87 MPa, or 283.87 N). Verabond (69.66 MPa, or 245 N) and X-33 (66.53 MPa, or 234 N) shared third place. Conclusion: Ni-Cr-T3 with VMK Master Porcelain has the greatest shear bond strength; therefore, the use of this low-cost alloy is recommended in metal-ceramic restorations.
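The reported strength/force pairs follow the simple conversion stress = load / bonded area, and are mutually consistent with a shear area of roughly 3.5 mm² (330 N / 3.51 mm² ≈ 94 MPa); the area value is inferred from the numbers, not stated in the abstract.

```python
def shear_bond_strength_mpa(load_n, bond_area_mm2):
    """Shear bond strength = fracture load / bonded area.
    N divided by mm^2 gives MPa directly."""
    return load_n / bond_area_mm2
```

The same assumed area reproduces the Super Cast pair as well, which suggests all four groups were tested over an identical bonding geometry.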

Keywords: shear bond, base-metal alloy, noble alloy, porcelain

Procedia PDF Downloads 487
1012 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor

Authors: D. Ramajo, S. Corzo, M. Nigro

Abstract:

A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) at a commercial nuclear power plant (NPP). Because this PHWR is of the reactor pressure vessel (RPV) type, detailed three-dimensional (3D) models of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) were coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations account for the distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) in-channel pressure losses, and a local power distribution in each coolant channel is also taken into account. The heat transfer between the coolant and the surrounding moderator is calculated accurately using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions agree with design expectations, providing a first assessment of the coupled 1D/3D model. Results for nominal conditions were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and revealing low void fractions inside the upper plenum. It should be noted that the current results were obtained by imposing prescribed fission power functions from the literature; the results are therefore shown with the aim of pointing out the potential of the developed model.
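The distributed-plus-concentrated loss split used in the 1D channel code can be sketched with the standard Darcy-Weisbach form; all parameter values below are illustrative, not the plant's.

```python
def channel_pressure_drop(rho, v, f, length, d_h, k_sum):
    """Total channel pressure loss (Pa): distributed Darcy friction
    plus concentrated losses (spacer grids, inlet/outlet throttles),

        dp = (f * L / Dh + sum(K)) * rho * v^2 / 2

    rho: coolant density (kg/m^3), v: channel velocity (m/s),
    f: friction factor, length/d_h: channel length and hydraulic
    diameter (m), k_sum: sum of concentrated loss coefficients."""
    return (f * length / d_h + k_sum) * 0.5 * rho * v * v
```

Balancing this loss against the plenum-to-plenum pressure difference, channel by channel, is what fixes the in-core mass flow distribution that the coupled model predicts.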

Keywords: PHWR, CFD, thermo-hydraulic, two-phase flow

Procedia PDF Downloads 468
1011 Investigation of Projected Organic Waste Impact on a Tropical Wetland in Singapore

Authors: Swee Yang Low, Dong Eon Kim, Canh Tien Trinh Nguyen, Yixiong Cai, Shie-Yui Liong

Abstract:

Nee Soon swamp forest is one of the last vestiges of tropical wetland in Singapore. Understanding the hydrological regime of the swamp forest and implications for water quality is critical to guide stakeholders in implementing effective measures to preserve the wetland against anthropogenic impacts. In particular, although current field measurement data do not indicate a concern with organic pollution, reviewing the ways in which the wetland responds to elevated organic waste influx (and the corresponding impact on dissolved oxygen, DO) can help identify potential hotspots, and the impact on the outflow from the catchment which drains into downstream controlled watercourses. An integrated water quality model is therefore developed in this study to investigate spatial and temporal concentrations of DO levels and organic pollution (as quantified by biochemical oxygen demand, BOD) within the catchment’s river network under hypothetical, projected scenarios of spiked upstream inflow. The model was developed using MIKE HYDRO for modelling the study domain, as well as the MIKE ECO Lab numerical laboratory for characterising water quality processes. Model parameters are calibrated against time series of observed discharges at three measurement stations along the river network. Over a simulation period of April 2014 to December 2015, the calibrated model predicted that a continuous spiked inflow of 400 mg/l BOD will elevate downstream concentrations at the catchment outlet to an average of 12 mg/l, from an assumed nominal baseline BOD of 1 mg/l. Levels of DO were decreased from an initial 5 mg/l to 0.4 mg/l. Though a scenario of spiked organic influx at the swamp forest’s undeveloped upstream sub-catchments is currently unlikely to occur, the outcomes nevertheless will be beneficial for future planning studies in understanding how the water quality of the catchment will be impacted should urban redevelopment works be considered around the swamp forest.
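A hand-sized analogue of the BOD/DO coupling that the MIKE ECO Lab model resolves numerically is the classical Streeter-Phelps oxygen-sag solution; the rate constants below are illustrative, not calibrated values from the study.

```python
import math

def do_sag(t_days, l0, d0, kd, ka, do_sat):
    """Classical Streeter-Phelps dissolved-oxygen profile (mg/l)
    t_days downstream travel time after a BOD load.

    l0: initial BOD (mg/l), d0: initial DO deficit (mg/l),
    kd: deoxygenation rate, ka: reaeration rate (1/day, kd != ka),
    do_sat: saturation DO (mg/l). DO is floored at zero."""
    deficit = (kd * l0 / (ka - kd)) * (math.exp(-kd * t_days)
                                       - math.exp(-ka * t_days)) \
              + d0 * math.exp(-ka * t_days)
    return max(0.0, do_sat - deficit)
```

The characteristic dip-and-recovery of this curve is the simple analogue of the DO depression (5 mg/l down to 0.4 mg/l) the calibrated catchment model predicted under the spiked-BOD scenario.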

Keywords: hydrology, modeling, water quality, wetland

Procedia PDF Downloads 140
1010 Human-factor and Ergonomics in Bottling Lines

Authors: Parameshwaran Nair

Abstract:

Filling and packaging lines for the bottling of beverages into glass, PET, or aluminum containers require specialized expertise and a dedicated configuration of equipment: filler, warmer, labeller, crater/recrater, shrink packer, carton erector, carton sealer, date coder, palletizer, etc. Over time, the packaging industry has evolved from manually operated single-station machines to highly automated high-speed lines, and human factors and ergonomics have gained significant consideration in the course of this transformation. A prerequisite for such bottling lines, irrespective of container type and size, is suitability for multi-format applications. They should also handle format changeovers with minimal adjustment and offer variable capacities and speeds, providing great flexibility in managing accumulation times as a function of production characteristics. In terms of layout, they should likewise allow flexibility for operator movement and access to machine areas for maintenance. Packaging technology during the past few decades has risen to these challenges through a series of major breakthroughs interspersed with periods of refinement and improvement. The milestones are many and varied and are described briefly in this paper. To provide a brief understanding of human factors and ergonomics in modern packaging lines, this paper highlights the various technologies, design considerations, and statutory requirements in packaging equipment for the different types of containers used in India.

Keywords: human-factor, ergonomics, bottling lines, automated high-speed lines

Procedia PDF Downloads 437
1009 Major Depressive Disorder: Diagnosis based on Electroencephalogram Analysis

Authors: Wajid Mumtaz, Aamir Saeed Malik, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin

Abstract:

In this paper, a technique based on electroencephalogram (EEG) analysis is presented, aiming to diagnose major depressive disorder (MDD) among a potential population of MDD patients and healthy controls. EEG is a recognized clinical modality in applications such as seizure diagnosis, anesthesia monitoring, and detection of brain death or stroke; however, its usability for psychiatric illnesses such as MDD is less studied. Therefore, in this study, two groups of participants were recruited for diagnostic purposes: 1) MDD patients and 2) healthy controls. EEG data acquired from both groups were analyzed in terms of inter-hemispheric asymmetry and the composite permutation entropy index (CPEI). To automate the process, the derived quantities were used as inputs to classifiers such as logistic regression (LR) and support vector machine (SVM). The learned classification models were evaluated on a test dataset, and performance is reported as the accuracy of classifying MDD patients versus controls, along with the corresponding sensitivities and specificities (LR = 81.7% and SVM = 81.5%). Based on the results, it is concluded that the derived measures are indicators for diagnosing MDD against a population of normal controls. In addition, the results motivate further exploration of other measures for the same purpose.
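The reported sensitivities and specificities derive from the binary confusion matrix in the standard way; a minimal sketch (the counts in the test are hypothetical, not the study's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on the MDD class), and
    specificity (recall on the control class) from a binary
    confusion matrix: tp/fn count MDD patients, tn/fp controls."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

Reporting all three together matters in a diagnostic setting because accuracy alone can hide a classifier that over-predicts the majority class.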

Keywords: major depressive disorder, diagnosis based on EEG, EEG derived features, CPEI, inter-hemispheric asymmetry

Procedia PDF Downloads 546
1008 Long-Term Modal Changes in International Traffic - Modelling Exercise

Authors: Tomasz Komornicki

Abstract:

The primary aim of the presentation is to model border traffic and, at the same time, to explain which economic variables the intensity of border traffic depended on in the long term. For this purpose, long time series of traffic data on the Polish borders were used. Models were estimated for three variants of explained variables: a) total arrivals and departures (total movement of Poles and foreigners), b) arrivals and departures of Poles, and c) arrivals and departures of foreigners. Each explained variable entered the models as the natural logarithm of the number of persons. Data from 1994-2017 were used for modelling (for internal Schengen borders, the years 1994-2007), and information on the number of people arriving in and leaving Poland was collected for a total of 303 border crossings. On the basis of the analyses carried out, it was found that the main factors determining border traffic are differences in the level of economic development (GDP), the condition of the economy (level of unemployment), and the degree of border permeability. Also statistically significant for border traffic are differences in the prices of goods (fuels, tobacco, and alcohol products) and services (mainly basic ones, e.g., hairdressing services). Such a relationship exists mainly on the eastern border (border traffic determined largely by differences in the prices of goods) and on the border with Germany (in the first analysed period, border traffic was determined mainly by the prices of goods; later, after Poland's accession to the EU and the Schengen area, also by the prices of services). The models also confirmed differences in the set of factors shaping the volume and structure of border traffic on the Polish borders resulting from general geopolitical conditions, with the year 2007 being an important caesura, after which the classical population mobility factors became visible.
The results obtained were additionally related to changes in traffic that occurred as a result of the COVID-19 pandemic and of the Russian aggression against Ukraine.

Keywords: border, modal structure, transport, Ukraine

Procedia PDF Downloads 115
1007 A Psychophysiological Evaluation of an Affective Recognition Technique Using Interactive Dynamic Virtual Environments

Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein

Abstract:

Recording psychological and physiological correlates of human performance within virtual environments, and interpreting their impact on human engagement, 'immersion' and related emotional or 'affective' states, is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR, and heart rate of 30 male and female gamers, each exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of analysis windows with 28 different lengths (e.g. 2, 3, 5 seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for classification. The classifiers categorised the psychophysiological database into four affective clusters (defined in a 3-dimensional valence-arousal-dominance space) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found between classification based on affective clusters and classification based on emotion labels.
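Of the two classifiers compared, KNN is simple enough to sketch in full; the Euclidean metric and toy training points below are illustrative, not the study's 30-dimensional feature space.

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Plain K-Nearest Neighbour: majority vote among the k training
    points closest to the query under Euclidean distance."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], query))[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Wrapping this in a cross-validation loop over the psychophysiological feature vectors would reproduce the accuracy comparison against the SVM.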

Keywords: virtual reality, affective computing, affective VR, emotion-based affective physiological database

Procedia PDF Downloads 233
1006 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency, and OpenStreetMap, as well as a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
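The "persistent water bodies" combination step can be sketched as a per-pixel vote across the multi-temporal classification outputs; the 0.8 persistence threshold is an assumption for illustration, not a value taken from the paper.

```python
def persistent_water(masks, min_fraction=0.8):
    """Combine per-date binary water masks (nested lists of 0/1)
    into one persistent-water product: a pixel survives only if it
    was classified as water in at least min_fraction of the scenes,
    which suppresses ephemeral flooding and misclassified speckle."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) >= min_fraction * n else 0
             for c in range(cols)] for r in range(rows)]
```

The surviving raster would then be passed to the thinning step to extract centre lines for the vector network.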

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 139
1005 On Exploring Search Heuristics for Improving the Efficiency in Web Information Extraction

Authors: Patricia Jiménez, Rafael Corchuelo

Abstract:

Nowadays the World Wide Web is the most popular source of information, comprising billions of on-line documents. Web mining is used to crawl through these documents, collect the information of interest, and process it by applying data mining tools so that the gathered information can be used in the best interest of a business, enabling companies to promote theirs. Unfortunately, it is not easy to extract the information a web site provides automatically when it lacks an API for transforming the user-friendly data in web documents into a structured, machine-readable format. Rule-based information extractors are tools intended to extract the information of interest automatically and offer it in a structured format that mining tools can process. However, the performance of an information extractor strongly depends on the search heuristic employed, since bad choices regarding how to learn a rule may easily result in loss of effectiveness and/or efficiency. Improving the efficiency of search heuristics is of utmost importance in the field of Web Information Extraction, since typical datasets are very large. In this paper, we employ an information extractor based on a classical top-down algorithm that uses the so-called Information Gain heuristic introduced by Quinlan and Cameron-Jones. Since the Information Gain suffers from some well-known problems, we analyse an intuitive alternative, Termini, that is clearly more efficient; we also analyse other proposals in the literature and conclude that none of them outperforms Termini.
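The Quinlan and Cameron-Jones heuristic referred to above is the FOIL-style information gain used in top-down rule learning; below is a minimal sketch of the formula as it is commonly stated (whether the extractor uses exactly this form is an assumption).

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL-style information gain for refining a rule.

    The rule before refinement covers p0 positive and n0 negative
    examples; after adding a condition it covers p1 and n1. The gain
    weights the per-example improvement in positive purity by the
    p1 positives still covered, so it trades precision against
    coverage. Requires p0 > 0 and p1 > 0."""
    before = math.log2(p0 / (p0 + n0))
    after = math.log2(p1 / (p1 + n1))
    return p1 * (after - before)
```

Its known weaknesses (e.g. favouring refinements that keep many positives even when purity barely improves) are the kind of problems an alternative heuristic such as Termini is meant to sidestep.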

Keywords: information extraction, search heuristics, semi-structured documents, web mining

Procedia PDF Downloads 335
1004 Study on Capability of the Octocopter Configurations in Finite Element Analysis Simulation Environment

Authors: Jeet Shende, Leonid Shpanin, Misko Abramiuk, Mattew Goodwin, Nicholas Pickett

Abstract:

Energy harvesting on board the Unmanned Aerial Vehicle (UAV) is one of the most rapidly growing emerging technologies and consists of the collection of small amounts of energy, for different applications, from unconventional sources that are incidental to the operation of the parent system or device. Different energy harvesting techniques have already been investigated in multirotor drones, where the energy collected comes from the system's surrounding ambient environment and typically involves the conversion of solar, kinetic, or thermal energy into electrical energy. Energy harvesting from the vibrating propeller using piezoelectric components inside the propeller has also been proven feasible. However, the impact of this technology on UAV flight performance has not been investigated. In this contribution, the impact on multirotor drone operation has been investigated for different flight control configurations that support the efficient performance of propeller vibration energy harvesting. The industrially made MANTIS X8-PRO octocopter frame kit was used to explore the octocopter operation, which was modelled using the SolidWorks 3D CAD package for simulation studies. The octocopter flight control strategy is developed through integration of the SolidWorks 3D CAD software and the MATLAB/Simulink simulation environment for evaluation of the octocopter behaviour under different simulated flight modes and octocopter geometries. Analysis of the two modelled octocopter geometries and their flight performance is presented via graphical representation of simulated parameters. The possibility of not using the landing gear in the octocopter geometry is demonstrated. The conducted study evaluates the octocopter's flight control technique and its impact on the energy harvesting mechanism developed on board the octocopter.
Finite Element Analysis (FEA) simulation results of the modelled octocopter in operation are presented exploring the performance of the octocopter flight control and structural configurations. Applications of both octocopter structures and their flight control strategy are discussed.

Keywords: energy harvesting, flight control modelling, object modeling, unmanned aerial vehicle

Procedia PDF Downloads 76
1003 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to the low sensitivity to noisy images and low-contrast mammograms.
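The circle Hough transform at the heart of the approach can be sketched with a minimal voting accumulator for a single known radius (an illustration of the technique, not the paper's implementation): every edge pixel votes for all centres that would place it on a circle of radius r.

```python
import numpy as np

def hough_circle_centre(edges, r, n_angles=180):
    """Return the (row, col) accumulator peak for circles of radius r."""
    acc = np.zeros(edges.shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Candidate centres lie r away from the edge pixel, in all directions.
        cy = np.round(y - r * np.sin(thetas)).astype(int)
        cx = np.round(x - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic "edge map": a circle of radius 10 centred at (20, 20).
edges = np.zeros((41, 41), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360)
edges[np.round(20 + 10 * np.sin(t)).astype(int),
      np.round(20 + 10 * np.cos(t)).astype(int)] = True
centre = hough_circle_centre(edges, r=10)
```

In practice, the radius is also unknown, so the accumulator gains a third dimension (one slice per candidate radius), which is how rounded microcalcifications of varying size are located.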

Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis

Procedia PDF Downloads 83
1002 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation

Authors: Abdal-Hafeez Alhussein

Abstract:

Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.

Keywords: artificial intelligence, information technology, automation, scalability

Procedia PDF Downloads 17
1001 Crop Classification Using Unmanned Aerial Vehicle Images

Authors: Iqra Yaseen

Abstract:

Image processing, in the context of computer vision, is a well-known area of computer science and engineering that has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection, and for measuring other crop parameters. Vegetation localization is the basis for performing these tasks, as it helps to identify the areas where crops are present. The productivity of the agriculture industry can be increased via image processing based upon Unmanned Aerial Vehicle and satellite photography. In this paper, we apply machine learning techniques such as Convolutional Neural Networks (CNN), deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide the crop into distinct groups and choose the best way to use it.
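The classification step itself, assigning each pixel's spectral features to a crop class, can be illustrated far more simply than with a CNN or YOLO. The sketch below uses a nearest-centroid classifier on made-up two-band features; the crop names and values are purely hypothetical.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector (centroid) per class."""
    classes = sorted(set(labels))
    cents = np.array([features[np.array(labels) == c].mean(axis=0)
                      for c in classes])
    return classes, cents

def predict(features, classes, centroids):
    """Label each feature vector with its nearest class centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

# Toy 2-band "pixels": one class bright in band 1, the other in band 2.
X = np.array([[0.8, 0.2], [0.9, 0.1], [0.2, 0.7], [0.1, 0.9]])
y = ["wheat", "wheat", "maize", "maize"]
classes, cents = fit_centroids(X, y)
pred = predict(np.array([[0.85, 0.15], [0.15, 0.8]]), classes, cents)
```

Deep models replace the hand-picked band features with learned ones, but the final decision (nearest class in some feature space) is conceptually the same.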

Keywords: image processing, UAV, YOLO, CNN, deep learning, classification

Procedia PDF Downloads 107
1000 Continuous Improvement as an Organizational Capability in the Industry 4.0 Era

Authors: Lodgaard Eirin, Myklebust Odd, Eleftheriadis Ragnhild

Abstract:

Continuous improvement is increasingly becoming a prerequisite for manufacturing companies to remain competitive in a global market. In addition, future survival and success will depend on the ability to manage the forthcoming digitalization transformation in the Industry 4.0 era. Industry 4.0 promises substantially increased operational effectiveness, where all equipment is equipped with integrated processing and communication capabilities. Subsequently, the interplay of human and technology will evolve and influence the range of worker tasks and demands. Taking these changes into account, the concept of continuous improvement must evolve accordingly. Based on a case study from the manufacturing industry, the purpose of this paper is to point out what the concept of continuous improvement will meet, and what it has to take into consideration, when entering the 4th industrial revolution. In the past, continuous improvement focused on a culture of sustained improvement targeting the elimination of waste in all systems and processes of an organization by involving everyone. Today, it has to evolve to embrace the forthcoming digital transformation and the increased interplay of human and digital communication systems to reach its full potential. One main finding of this study is how digital communication systems will act as an enabler to strengthen the continuous improvement process, by moving from collaboration within individual teams to interconnection of teams along the product value chain. This will help academics and practitioners identify and prioritize their steps towards an Industry 4.0 implementation integrated with a focus on continuous improvement.

Keywords: continuous improvement, digital communication system, human-machine-interaction, industry 4.0, team performance

Procedia PDF Downloads 204
999 Geophysical Mapping of Anomalies Associated with Sediments of Gwandu Formation Around Argungu and Its Environs, NW Nigeria

Authors: Adamu Abubakar, Abdulganiyu Yunusa, Likkason Othniel Kamfani, Abdulrahman Idris Augie

Abstract:

This research study is being carried out in accordance with the Gwandu Formation's potential exploratory activities in the inland basin of northwest Nigeria. The present research aims to identify and characterize subsurface anomalies within the Gwandu Formation using electrical resistivity tomography (ERT) and magnetic surveys, providing valuable insights for mineral exploration. The study utilizes various data enhancement techniques, such as derivatives, upward continuation, and spectral analysis, alongside 2D modelling of electrical imaging profiles, to analyze subsurface structures and anomalies. Data were collected through ERT and magnetic surveys, with subsequent processing including derivatives, spectral analysis, and 2D modelling. The results indicate significant subsurface structures such as faults, folds, and sedimentary layers. The study area's geoelectric and magnetic sections illustrate the depth and distribution of sedimentary formations, enhancing understanding of the geological framework, and show that the Eocene sediments of the Gwandu Formation are overprinted by the study area's Tertiary strata. The NE to SW and E to W cross-profiles for the pseudo geoelectric sections beneath the study area were generated using two-dimensional (2D) electrical resistivity imaging. 2D magnetic modelling, upward continuation, and derivative analysis were used to delineate the signatures of subsurface magnetic anomalies. The results also revealed that sediment thickness ranges from ∼4.06 km to ∼23.31 km. The Moho interface, the boundary between the lower crust and upper mantle, and the magnetic crust are all located at depths of around ∼10.23 km. The vertical distance between the local models of the foundation rocks to the north and south of the Sokoto Group was approximately ∼6 to ∼8 km and ∼4.5 km, respectively.
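One of the enhancement techniques named above, upward continuation, has a compact frequency-domain form: the field observed at height h above the survey plane is the original spectrum damped by exp(-|k|h). A minimal sketch on a synthetic grid (the grid spacing and continuation height are illustrative):

```python
import numpy as np

def upward_continue(field, dx, h):
    """Upward-continue a gridded potential field by height h (same units as dx)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
    spec = np.fft.fft2(field) * np.exp(-k * h)          # damp short wavelengths
    return np.real(np.fft.ifft2(spec))

# A sharp synthetic anomaly is smoothed and attenuated when continued upward,
# which is why the operator emphasizes deep, regional sources.
g = np.zeros((64, 64))
g[32, 32] = 100.0
up = upward_continue(g, dx=1.0, h=5.0)
```

Short-wavelength (shallow-source) content is suppressed fastest, while the zero-wavenumber component is untouched, so the grid total is preserved.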

Keywords: high-resolution aeromagnetic data, electrical resistivity imaging, subsurface anomalies, 2D forward modeling

Procedia PDF Downloads 13
998 Influence of Exfoliated Graphene Nanoplatelets on Thermal Stability of Polypropylene Reinforced Hybrid Graphene-Rice Husk Nanocomposites

Authors: Obinna Emmanuel Ezenkwa, Sani Amril Samsudin, Azman Hassan, Ede Anthony

Abstract:

A major challenge for polypropylene (PP) in high-heat application areas is its poor thermal stability. At high temperatures, PP burns readily and can self-ignite. In this study, PP is reinforced with a hybrid filler of graphene (xGNP) and rice husk (RH), with RH at 15 wt% and xGNP varied at 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 parts per hundred (phr) of the composite. The compatibilizer MAPP was also added to each sample at 4 phr of the composite. Sample formulations were melt-blended using a twin screw extruder and an injection moulding machine. At the xGNP optimum content of 1.5 phr, the hybrid PP/RH/G1.5/MAPP nanocomposite increased in thermal stability by 24 °C and 30 °C compared to pure PP and the unhybridized PP/RH composite, respectively; char residue increased by 513% compared to pure PP, and the degree of crystallization (Xc) increased from 35.4% to 36.4%. The observed enhancement of thermal properties in the hybrid nanocomposites can be related to the high surface area, gap-filling effect and exfoliation characteristics of the graphene nanofiller, which worked in synergy with the rice husk filler in reinforcing PP. This study therefore shows that the inclusion of graphene nanofiller in polymer composite fabrication can enhance the thermal stability of polyolefins for high-heat applications.
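The degree of crystallization (Xc) quoted above is usually estimated from DSC data by normalizing the measured melting enthalpy against that of fully crystalline PP, scaled by the PP weight fraction of the composite. The sketch below uses the commonly cited 207 J/g reference for PP; the measured enthalpy and weight fraction are illustrative numbers, not taken from the paper.

```python
DELTA_H_100_PP = 207.0  # J/g, melting enthalpy of 100% crystalline PP (literature value)

def crystallinity(delta_h_m, pp_weight_fraction):
    """Xc (%) = measured melting enthalpy / (reference enthalpy x PP fraction)."""
    return 100.0 * delta_h_m / (DELTA_H_100_PP * pp_weight_fraction)

# Hypothetical DSC reading for a composite that is 83 wt% PP.
xc = crystallinity(delta_h_m=62.0, pp_weight_fraction=0.83)
```

Dividing by the PP fraction matters for filled systems like PP/RH/xGNP, since only the PP phase contributes to the melting endotherm.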

Keywords: polymer nanocomposites, thermal stability, exfoliation, hybrid fillers, polymer reinforcement

Procedia PDF Downloads 39
997 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to be able to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
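The offline/online split behind the reduced-basis idea can be sketched with a snapshot SVD (proper orthogonal decomposition): full-order solutions are collected offline, compressed into a few basis vectors, and new solutions are approximated online by projection onto that basis. This is a generic illustration of the technique, not the authors' truss model; the dimensions and random data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
modes = rng.normal(size=(200, 3))             # 3 underlying full-order "modes"
snapshots = modes @ rng.normal(size=(3, 40))  # 40 offline full-order solutions

# Offline stage: SVD of the snapshot matrix yields the low-dimensional space.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                              # reduced basis (3 vectors of 200 DOFs)

# Online stage: a new full-order solution is approximated by projection,
# so only 3 coefficients need to be computed instead of 200 DOFs.
new_solution = modes @ rng.normal(size=3)
approx = basis @ (basis.T @ new_solution)
error = np.linalg.norm(new_solution - approx) / np.linalg.norm(new_solution)
```

Because the toy solutions lie exactly in a 3-dimensional subspace, the projection error here is at machine precision; for a real FE model the singular-value decay dictates how many basis vectors are needed.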

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 99
996 Disentangling the Relationship between Sustainable Consumption and Psychological Well-Being

Authors: Isabel Carrero, Raquel Redondo, Carmen Valor

Abstract:

An unresolved issue in the sustainable consumption (SC) literature is the relationship between SC and well-being. This paper seeks to address three limitations of past research. First, well-being has been measured as a single-faceted construct. However, other authors have defended the need to broaden the well-being construct, since it goes beyond emotional experiences and life satisfaction. By examining the relationship between SC and the multifaceted construct of psychological well-being, past contradictory results may be reconciled. To illustrate, past studies have shown that sustainable consumers experience negative emotions when they become aware of the harm that human beings inflict on the planet but realize they have limited power to solve the problem, or when they find limited alternatives or useful information for making sustainable decisions. Thus, these experiences may negatively affect the well-being dimension of 'environmental mastery'. However, as past studies have demonstrated that sustainable consumers feel their lives are meaningful, their assessment of the dimension 'purpose in life' would be positive. Thus, we need to understand how SC impinges on the different facets of psychological well-being in order to better understand the relationship between SC and well-being. Another limitation of past research is that most studies failed to distinguish among the different pro-environmental actions under SC (e.g., boycotting, buycotting). For instance, activists have been found to experience higher levels of well-being and sense of meaning than less committed sustainable consumers, but also burnout and social rejection, which should negatively affect the dimension of 'positive relations'. Finally, the influence of gender has been overlooked in the literature on SC and well-being, even though it has been consistently identified as a moderator variable in SC.
Therefore, this study aims to (1) investigate the effect of SC on the six facets of psychological well-being, (2) distinguish between conventional SC behaviors and activism to examine whether these behaviors influence psychological well-being differently, and (3) test gender as a moderator variable. It does so by surveying 861 individuals. This paper contributes to the existing literature by showing that the relationship between well-being and SC is more intricate than presented in previous literature, as it depends on the facet, the type of behavior carried out, and gender.

Keywords: activism, gender, psychological well-being, structural equation modelling, sustainable consumption

Procedia PDF Downloads 165
995 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as potential energy-savings. However, a building stock comprises thousands of buildings with different characteristics making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage the complexity of the building stock, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with loss of detail. Thermal characteristics are aggregated while other characteristics, which could affect the energy efficiency of a building, are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to accurately emulate the average energy demands of the corresponding buildings they were meant to represent.
This is done for the buildings' energy demands as a whole as well as for relevant sub-demands. Both are evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as the accuracy lost in specific parts of the calculation due to use of the archetype method.
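The accuracy check described above reduces to comparing each archetype's single predicted demand against the average demand of the individual buildings in its segment. A minimal sketch (segment names, demands, and archetype values are illustrative, not the study's data):

```python
import numpy as np

# Individual-building annual heat demands per segment (kWh/m2/yr, made-up).
buildings = {
    "house_pre1960": np.array([152.0, 168.0, 140.0, 171.0]),
    "house_post1990": np.array([71.0, 64.0, 80.0]),
}
# The single value each archetype predicts for its whole segment.
archetype_demand = {"house_pre1960": 150.0, "house_post1990": 78.0}

# Signed relative error of the archetype against the segment average.
errors = {seg: 100.0 * (archetype_demand[seg] - d.mean()) / d.mean()
          for seg, d in buildings.items()}
```

Repeating the comparison per sub-demand (space heating, hot water, etc.) shows where in the calculation the archetype simplification loses the most accuracy.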

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 154
994 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary testing techniques for the mechanical characterization of skin at large scales. In order to predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous and anisotropic material such as skin tissue, analytical approaches are not able to provide solutions, and numerical simulation is thus necessary. In this work, the uniaxial tensile test and an FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800 tensile machine®. The uniaxial tensile test was simulated with FEM, and then the inverse optimization approach (or inverse calibration) was used for the identification of the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples were well correlated with the experimental results.
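The inverse-calibration loop amounts to searching for the material parameter whose simulated stress-strain response best matches the measured curve. The sketch below replaces the FE simulation with a toy exponential law and uses a coarse grid search; the model form, parameter range, and "experimental" data are all illustrative assumptions.

```python
import numpy as np

def simulate(strain, stiffness):
    """Toy stand-in for the FE simulation of the tensile test."""
    return stiffness * (np.exp(4.0 * strain) - 1.0)

strain = np.linspace(0.0, 0.3, 20)
measured = simulate(strain, stiffness=2.5)   # synthetic "experiment"

# Inverse optimization: pick the candidate minimizing the squared residual
# between simulated and measured responses.
candidates = np.linspace(0.5, 5.0, 91)
residuals = [np.sum((simulate(strain, k) - measured) ** 2) for k in candidates]
best = candidates[int(np.argmin(residuals))]
```

In the real procedure, each residual evaluation is a full FE run, so gradient-based or surrogate-assisted optimizers are typically preferred over an exhaustive grid.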

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 408
993 Theoretical Performance of a Sustainable Clean Energy On-Site Generation Device to Convert Consumers into Producers and Its Possible Impact on Electrical National Grids

Authors: Eudes Vera

Abstract:

In this paper, a theoretical evaluation is carried out of the performance of a forthcoming fuel-less clean energy generation device, the Air Motor. The underlying physical principles that support this technology are succinctly described. Examples of the machine and theoretical values of input and output powers are also given. In addition, its main features, such as portability, on-site energy generation and delivery, miniaturization of generation plants, efficiency, and scaling down of the whole electric infrastructure, are discussed. The main component of the Air Motor, the Thermal Air Turbine, generates useful power by converting into mechanical energy part of the thermal energy contained in a fan-produced airflow while leaving its kinetic energy intact. Due to this fact, an Air Motor can contain a long succession of identical air turbines, and the total power generated from a single airflow can be very large, as can its mechanical efficiency. It is found, using the corresponding formulae, that the mechanical efficiency of this device can be much greater than 100%, while its thermal efficiency is always less than 100%. On account of its multiple advantages, the Air Motor seems to be the perfect device to convert energy consumers into energy producers worldwide. If so, it would appear that current national electrical grids would no longer be necessary, because it does not seem practical or economical to bring energy from far away while it can be generated and consumed locally at the consumer's premises using just the thermal energy contained in the ambient air.

Keywords: electrical grid, clean energy, renewable energy, in situ generation and delivery, generation efficiency

Procedia PDF Downloads 175