Search results for: Critical Thinking and Problem Solving Skills and Teamwork Skills
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5564

344 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as the multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in field and industrial environments. Research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and Near Infrared (NIR) technologies have been investigated for applications related to horticultural produce and show a potential to satisfy this need while simultaneously providing spatial information as time-of-flight sensors. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose on smaller unstructured orchards. Concerning sensor-based on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches. Hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

343 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and each one is then tested separately by using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when faster neural processors and image decomposition are used together. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased, as the normalization of weights is done offline.
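
As a rough illustration of the frequency-domain cross correlation combined with image decomposition described above, a minimal NumPy sketch follows; the block size, threshold and normalization are illustrative assumptions rather than the paper's implementation.

import numpy as np

def cross_correlate_fft(sub_image, kernel):
    # Cross correlation in the frequency domain: multiply the FFT of the
    # sub-image by the conjugate FFT of the (zero-padded) kernel.
    H, W = sub_image.shape
    F_img = np.fft.fft2(sub_image)
    F_ker = np.fft.fft2(kernel, s=(H, W))
    return np.real(np.fft.ifft2(F_img * np.conj(F_ker)))

def detect_in_subimages(image, kernel, block=64, threshold=0.9):
    # Divide and conquer: each sub-image is tested independently, so the
    # blocks could be dispatched to parallel workers.
    kern = (kernel - kernel.mean()) / (kernel.std() + 1e-9)  # weights normalized offline
    hits = []
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            sub = image[r:r + block, c:c + block]
            if sub.shape[0] < kernel.shape[0] or sub.shape[1] < kernel.shape[1]:
                continue
            score = cross_correlate_fft(sub, kern)
            if score.max() >= threshold * kern.size:
                hits.append((r, c, float(score.max())))
    return hits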

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

342 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal to noise ratio (SNR), contrast to noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to compare performance.
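
For context, the damped Gauss-Newton update at the heart of Levenberg-Marquardt regularization can be sketched as follows; this is a generic NumPy sketch, with variable names and damping value chosen for illustration rather than taken from the paper.

import numpy as np

def levenberg_marquardt_step(J, residual, mu, damping=10.0):
    # One damped Gauss-Newton (Levenberg-Marquardt) update of the optical
    # parameters mu (e.g. per-voxel absorption/scattering coefficients).
    # J is the Jacobian of the forward model, residual = measured - modelled.
    JTJ = J.T @ J
    delta = np.linalg.solve(JTJ + damping * np.eye(JTJ.shape[0]), J.T @ residual)
    return mu + delta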

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction (GPSR).

341 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive (for sensors like LIDAR) or limited in operational range (for sensors like ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) have been proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach. The projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
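
A minimal sketch of one EKF predict/update cycle of the second kind described above, assuming a hypothetical one-dimensional model: the state is [height above ground, vertical velocity] and the measurement is the projected pixel size s = f·L/z of a feature of assumed known size L. The focal length, noise covariances and time step below are illustrative, not taken from the paper.

import numpy as np

f_px, L = 800.0, 0.5                         # assumed focal length [px] and feature size [m]
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity UAV kinematics
Q = np.diag([1e-4, 1e-3])                    # process noise
R = np.array([[4.0]])                        # pixel measurement noise

def ekf_step(x, P, s_meas):
    # Predict with the kinematic model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the projected-feature measurement s = f*L/z.
    z = x[0]
    h = np.array([f_px * L / z])
    H = np.array([[-f_px * L / z**2, 0.0]])  # Jacobian of h w.r.t. [z, vz]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([s_meas]) - h)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([10.0, -1.0]), np.eye(2)     # initial guess: 10 m up, descending at 1 m/s
x, P = ekf_step(x, P, s_meas=42.0)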

Keywords: Automatic landing, multirotor, nonlinear control, parameter estimation, optical flow.

340 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Besides, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is supported by standard tests, which highlight a good fit of the simulated values.
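
To make the deterministic component concrete, a minimal NumPy sketch of a trend-plus-Fourier fit by least squares is shown below; the periods, number of harmonics and toy data are illustrative assumptions, and the residuals would then be handled by an econometric process such as ARMA-GARCH.

import numpy as np

def fourier_design_matrix(t_hours, periods=(24, 168, 8766), harmonics=2):
    # Deterministic component: constant, linear trend and Fourier terms for the
    # daily, weekly and yearly cycles (period lengths in hours are illustrative).
    cols = [np.ones_like(t_hours), t_hours]
    for P in periods:
        for k in range(1, harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t_hours / P))
            cols.append(np.cos(2 * np.pi * k * t_hours / P))
    return np.column_stack(cols)

t = np.arange(24 * 365.0)                                                        # one year of hourly steps
load = 1000 + 50 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 5, t.size)  # toy data
X = fourier_design_matrix(t)
beta, *_ = np.linalg.lstsq(X, load, rcond=None)                                  # calibrate deterministic part
residuals = load - X @ beta                                                      # stochastic component to model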

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.

339 Production of Biocomposites Using Chars Obtained by Co-Pyrolysis of Olive Pomace with Plastic Wastes

Authors: Esra Yel, Tabriz Aslanov, Merve Sogancioglu, Suheyla Kocaman, Gulnare Ahmetli

Abstract:

The disposal of waste plastics has become a major worldwide environmental problem. Pyrolysis of waste plastics is one of the routes to waste minimization and recycling that has been gaining interest. In pyrolysis, the pyrolysed material is separated into gas, liquid (both are fuels) and solid (char) products. All fractions have uses and economic value depending upon their characteristics. The first objective of this study is to determine the co-pyrolysis product fractions of waste HDPE (high density polyethylene)-olive pomace (OP) and LDPE (low density polyethylene)-olive pomace, and to determine the quality of the solid char product. Chars obtained from pyrolysis at 700 °C were used as an additive in biocomposite preparation. As the second objective, the effects of char on biocomposite quality were investigated. Pyrolysis runs were performed at a temperature of 700 °C with a heating rate of 5 °C/min. Biocomposites were prepared by mixing the chars with bisphenol-F type epoxy resin at various wt%. Biocomposite properties were determined by measuring the electrical conductivity, surface hardness, Young's modulus and tensile strength of the composites. The best electrical conductivity results were obtained with the HDPE-OP char. For the HDPE-OP and LDPE-OP chars, compared to neat epoxy, the tensile strength values of the composites increased by 102% and 78%, respectively, at 10% char dose. The hardness measurements showed similar results to the tensile tests, since there is a correlation between hardness and tensile strength.

Keywords: Pyrolysis, olive pomace, char, biocomposite, PE plastics.

338 Sustainable Building Technologies for Post-Disaster Temporary Housing: Integrated Sustainability Assessment and Life Cycle Assessment

Authors: S. M. Amin Hosseini, Oriol Pons, Albert de la Fuente

Abstract:

After natural disasters, displaced people (DP) require large numbers of housing units, which have to be erected quickly due to emergency pressures. These tight timeframes can multiply the negative environmental impacts of construction. Such impacts worsen the already high energy consumption and pollution caused by the building sector. Indeed, post-disaster housing, which is often carried out without pre-planning, usually causes high negative environmental impacts, besides other economic and social impacts. Therefore, it is necessary to establish a suitable strategy to deal with this problem, one which also takes into account the instability of its causes, such as the changing ratio between rural and urban population. To this end, this study aims to present a model that assists decision-makers in choosing the most suitable building technology for post-disaster housing units. This model focuses on the alternatives' sustainability and on stakeholder satisfaction. Four building technologies have been analyzed to determine the most sustainable technology and to validate the presented model. The DP of the 2003 Bam earthquake had their temporary housing units (THUs) built using these four technologies: autoclaved aerated concrete blocks (AAC), concrete masonry units (CMU), pressed reed panels (PR), and 3D sandwich panels (3D). The results of this analysis confirm that PR and CMU obtain the highest sustainability indexes. However, the second-life scenario of the THUs could have considerable impacts on the results.

Keywords: Sustainability, post-disaster temporary housing, integrated value model for sustainability assessment (MIVES), life cycle assessment (LCA).

337 Integration Methods and Processes of Product Design and Flexible Production for Direct Production within the iCIM 3000 System

Authors: Roman Ružarovský, Radovan Holubek, Daynier Rolando Delgado Sobrino

Abstract:

Production engineering is currently characterized by the integration of industrial automation and robotics and by very short times to manufacture products. The production range is continuously changing and expanding, and producers have to be flexible in this regard; they need to offer production possibilities that can respond to quick changes. Engineering product development is supported by CAD software, and such systems are mainly used for product design. To remain competitive, manufacturers should keep their machines capable of responding flexibly to changes in output. A response to this problem is the development of flexible manufacturing systems, consisting of various automated subsystems. The integration of flexible manufacturing systems and their subunits together with product design and engineering is a possible solution to this issue, and it can be achieved through the implementation of CIM systems. Finding such a link between CAD and the production system iCIM 3000 from Festo Co. is the subject of the research project and of this contribution. Products can then be designed in CAD systems and the manufacturing process monitored from order to shipping through the developed methods and processes of integration, which improve support for product design parameters by monitoring the production process and by creating production programs from CAD data, and therefore accelerate the overall process from design to implementation.

Keywords: CAD (Computer Aided Design), CAM (Computer Aided Manufacturing), CIM (Computer Integrated Manufacturing), iCIM 3000, integration, direct production from CAD.

336 Urban Air Pollution – Trend and Forecasting of Major Pollutants by Timeseries Analysis

Authors: A.L. Seetharam, B.L. Udaya Simha

Abstract:

Bangalore City is facing an acute air pollution problem due to the heavy increase in traffic and developmental activities in recent years. The present study is an attempt to assess the trend of the ambient air quality status of three stations, viz., AMCO Batteries Factory, Mysore Road; Graphite India Factory, KHB Industrial Area, Whitefield; and Ananda Rao Circle, Gandhinagar, with respect to some of the major criteria pollutants, such as suspended particulate matter (SPM), oxides of nitrogen (NOx), and oxides of sulphur (SO2). The sites are representative of the various kinds of growth, viz., commercial, residential and industrial, prevailing in Bangalore, which are contributing to air pollution. The concentration of sulphur dioxide (SO2) at all locations showed a falling trend due to the use of refined petrol and diesel in recent years. The concentration of oxides of nitrogen (NOx) showed an increasing trend but was within the permissible limits. The concentration of suspended particulate matter (SPM) showed a mixed trend. The correlation between modelled and observed values is found to vary from 0.4 to 0.7 for SO2, 0.45 to 0.65 for NOx and 0.4 to 0.6 for SPM. About 80% of the data is observed to fall within an error band of ±50%. Forecast tests for the best-fit models showed the same trend as the actual values in most of the cases. However, the deviation observed in a few cases could be attributed to changes in the quality of petroleum products, increase in the volume of traffic, introduction of LPG as a fuel in many types of automobiles, poor condition of roads, prevailing meteorological conditions, etc.

Keywords: Bangalore, urban air pollution, time series analysis.

335 Design and Performance Comparison of Metamaterial Based Antenna for 4G/5G Mobile Devices

Authors: Jalal Khan, Daniyal Ali Sehrai, Shakeel Ahmad

Abstract:

This paper presents the design and performance evaluation of a multiband metamaterial-based antenna operating in the 3.6 GHz (4G), 14.33 GHz, and 28.86 GHz (5G) frequency bands for future mobile and handheld devices. The radiating element of the proposed design is made up of a conductive material supported by a 1.524 mm thick Rogers-4003 substrate, having a relative dielectric constant and loss tangent of 3.55 and 0.0027, respectively. The substrate is backed by a truncated ground plane. The future mobile communication system is based on higher frequencies, which are highly affected by atmospheric conditions. Therefore, to overcome the path loss problem, essential enhancements and improvements must be made in the overall performance of the antenna. A traditional ground plane does not provide in-phase reflection and surface wave suppression, due to which side and back lobes are produced. This affects the antenna performance in terms of gain and efficiency. To enhance the overall performance of the antenna, a metamaterial acting as a high impedance surface (HIS) is used as a reflector in the proposed design. The simulated gain of the metamaterial-based antenna is enhanced from {2.76-6.47, 4.83-6.71 and 7.52-7.73} dB at 3.6, 14.33 and 28.89 GHz, respectively, relative to the gain of the antenna backed by a traditional ground plane. The proposed antenna radiates efficiently, with a radiation efficiency above 85% in all three frequency bands, with and without the metamaterial surface. The total volume of the antenna is (L x W x h = 45 x 40 x 1.524) mm3. The antenna can potentially be used for wireless handheld devices and mobile terminals. All the simulations have been performed using the Computer Simulation Technology (CST) software.

Keywords: Multiband, fourth generation (4G), fifth generation (5G), metamaterial, CST MWS.

334 Hacking the Spatial Limitations in Bridging Virtual and Traditional Teaching Methodologies in Sri Lanka

Authors: Manuela Nayantara Jeyaraj

Abstract:

Having moved into the 21st century, it is well past arguable that innovative technology needs to be incorporated into conventional classroom teaching. Though the Western world has found presumable success in achieving this, it is still a contested concept in developing countries such as Sri Lanka. Reaching the point of implementing interactive virtual learning within classrooms remains a distant ideal within the island. In order to overcome this problem, this study sets out to reveal the facts that limit the implementation of virtual, interactive learning within school classrooms and to provide hacks that could support the augmented use of the virtual world to enhance teaching and learning experiences. As each classroom adopts technology to fulfill its functions, a few of the hacks provided will shift the administrative burden onto a virtual system. These hacks may reveal barriers based on social conventions, financial boundaries, digital literacy, and the intellectual capacity of the staff, highlight the impediments in introducing students to an interactive virtual learning environment, and thereby indicate the actions or changes needed to succeed in creating an intellectual society built on virtual learning and lifestyle. This digital learning environment would be composed of multimedia presentations, trivia and pop quizzes conducted on a GUI, assessments conducted via a virtual system, records maintained in a database, etc. The ultimate objective of this study is to enhance every child's basic learning environment and, hence, to diminish the digital divide that exists in certain communities.

Keywords: Digital divide, digital learning, digitization, Sri Lanka, teaching methodologies.

333 Modeling Non-Darcy Natural Convection Flow of a Micropolar Dusty Fluid with Convective Boundary Condition

Authors: F. M. Hady, A. Mahdy, R. A. Mohamed, Omima A. Abo Zaid

Abstract:

A numerical study of the effects of numerous parameters on the magnetohydrodynamic (MHD) natural convection heat and mass transfer problem of a dusty micropolar fluid in a non-Darcy porous regime is presented in the current paper. In addition, a convective boundary condition is incorporated into the micropolar dusty fluid model. The governing boundary layer equations are converted, utilizing similarity transformations, to a system of dimensionless equations convenient for numerical treatment. The resulting equations for the fluid phase and dust phases of momentum, angular momentum, energy, and concentration, with the appropriate boundary conditions, are solved numerically by applying the fourth-order Runge-Kutta method. According to the numerical study, the magnitude of the velocity of both the fluid phase and the particle phase reduces with an increasing magnetic parameter, mass concentration of the dust particles, and Forchheimer number, while it rises with an increment in the convective parameter and the Darcy number. The results also indicate that high values of the magnetic parameter, convective parameter, and Forchheimer number enhance the temperature distributions, whereas deterioration occurs as the mass concentration of the dust particles and the Darcy number increase. The angular velocity shows an increasing behavior with both the magnetic parameter and the microrotation parameter.

Keywords: Micropolar dusty fluid, convective heating, natural convection, MHD, porous media.

332 Igbo Art: A Reflection of the Igbo’s Visual Culture

Authors: David Osa-Egonwa

Abstract:

Visual culture is the expression of the norms and social behavior of a society in visual images. A reflection simply shows you how you look when you stand before a mirror, clear water or a stream. The mirror does not alter, improve or distort your original appearance, nor does it show you a caricature of what stands before it; this is the case with visual images created by a tribe or society. The ‘uli’ is a hand-drawn body design done on Igbo women and speaks of a culture of body adornment, a practice that is appreciated by that tribe. The use of the pattern of the gliding python snake, ‘ije eke’ or ‘ijeagwo’, for wall painting speaks of the Igbo culture as one that appreciates wall paintings based on these patterns. Modern life brought a lot of change to the Igbo-speaking people of Nigeria. Change, cloaked in the garment of Westernization, has influenced the culture of the Igbos. This has resulted in a break in cultural practice, which has also affected the art produced by the Igbos. Before the colonial masters arrived and changed the established culture practiced by the Igbos, visual images were created that retained the culture of this people. To bring this point to the limelight, this paper has adopted a historical method. A large number of works produced during the pre- and post-colonial eras, ranging from sculptural pieces to paintings and other artifacts, to mention just a few, were studied carefully, and it was discovered that these visual images hold the culture, or aspects of the culture, of the Igbos in their renditions and can rightly serve as a mirror of Igbo visual culture.

Keywords: Artistic renditions, historical method, Igbo visual culture, changes.

331 Simulation of Ammonia-Water Two Phase Flow in Bubble Pump

Authors: Jemai Rabeb, Benhmidene Ali, Hidouri Khaoula, Chaouachi Bechir

Abstract:

The diffusion-absorption refrigeration cycle consists of a generator bubble pump, an absorber, an evaporator and a condenser, and usually operates with ammonia/water/hydrogen or helium as the working fluid. The aim of this paper is to study the stability problem of a bubble pump; in fact, instability can cause a reduction in bubble pump efficiency. To achieve this goal, we have simulated the behaviour of two-phase flow in a bubble pump by using a drift flow model. The equations of the drift flow model are formulated for the transitional regime, non-adiabatic conditions and thermodynamic equilibrium between the liquid and vapour phases. Solving these equations allows the void fraction, the liquid and vapour velocities, and the pressure and mixing enthalpy to be determined. An ammonia-water mixture is used as the working fluid, where the ammonia mass fraction at the inlet is 0.6. The present simulation is carried out for heating fluxes of 2 kW/m² to 5 kW/m², a bubble pump tube length of 1 m and an inner diameter of 2.5 mm. The simulation results reveal oscillations of the vapour and liquid velocities over time. The oscillations decrease with time and with heat flux. After sufficient time, a steady state is established, characterised by constant liquid velocity and void fraction values. The vapour velocity, however, does not show the same behaviour; it continues to increase even towards the steady state. Pressure drop oscillations are also studied.
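
For background, the closure relation at the core of such a drift flow (drift-flux) model is usually written in the standard Zuber-Findlay form, quoted here as a textbook relation rather than from the paper:

\alpha = \frac{j_g}{C_0 (j_g + j_l) + V_{gj}}

where \alpha is the void fraction, j_g and j_l are the gas and liquid superficial velocities, C_0 is the distribution parameter and V_{gj} is the drift velocity.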

Keywords: Bubble pump, drift flow model, instability, simulation.

330 Catalytic Gasification of Olive Mill Wastewater as a Biomass Source under Supercritical Conditions

Authors: Ekin Kıpçak, Mesut Akgün

Abstract:

Recently, a growing interest has emerged in the development of new and efficient energy sources, due to the inevitable depletion of nonrenewable energy reserves. One of the alternative sources with great potential and sustainability to meet the energy demand is biomass energy. This significant energy source can be utilized with various energy conversion technologies, one of which is biomass gasification in supercritical water.

Water, the most important solvent in nature, has very important characteristics as a reaction solvent under supercritical circumstances. At temperatures above its critical point (374.8 °C and 22.1 MPa), water becomes more acidic and its diffusivity increases. Working with water at high temperatures increases the thermal reaction rate, which in consequence leads to better dissolution of organic matter and a fast reaction with oxygen. Hence, supercritical water offers a control mechanism depending on solubility, excellent transport properties based on its high diffusion ability, and new reaction possibilities for hydrolysis or oxidation.

In this study, the gasification of a real biomass, namely olive mill wastewater (OMW), under supercritical water conditions is investigated with the use of a Ru/Al2O3 catalyst. OMW is a by-product obtained during olive oil production, which has a complex nature characterized by a high content of organic compounds and polyphenols. These properties give OMW a significant pollution potential, but at the same time, the high content of organics makes OMW a desirable biomass candidate for energy production.

The catalytic gasification experiments were carried out at five different reaction temperatures (400, 450, 500, 550 and 600 °C) and five reaction times (30, 60, 90, 120 and 150 s), under a constant pressure of 25 MPa. Through these experiments, the effects of reaction temperature and time on the gasification yield, gaseous product composition and OMW treatment efficiency were investigated.

Keywords: Catalyst, Gasification, Olive mill wastewater, Ru/Al2O3, Supercritical water.

329 Improving Cleanability by Changing Fish Processing Equipment Design

Authors: Lars A. L. Giske, Ola J. Mork, Emil Bjoerlykhaug

Abstract:

The design of fish processing equipment greatly impacts how easy the equipment is to clean. This is a critical issue in fish processing, as cleaning of fish processing equipment is both costly and time consuming, in addition to being very important with regard to product quality. Even more, poorly cleaned equipment could in the worst case lead to contaminated products from which consumers could get ill. This paper elucidates how equipment design changes could improve the work for the cleaners and save money for fish processing facilities, by looking at a case of product design improvements. “Design for cleaning” is the new trend in the industry, and equipment in which ease of cleaning is prioritized gains a competitive advantage over equipment in which it has not been prioritized. Design for cleaning is therefore an important research area for equipment manufacturers. SeaSide AS continuously improves the design of its products in order to gain such a competitive advantage. The focus in this paper is conveyors for internal logistics, and a product called the “electro stunner” is studied with regard to design for cleaning. Often together with SeaSide's customers, ideas for new products or product improvements are sketched out, 3D-modelled, discussed, revised, built and delivered. Feedback from the customers is taken into consideration, and the product design is revised once again. This loop was repeated multiple times and led to new product designs. The new designs sometimes also cause the manufacturing processes to change (as in going from bolted to welded connections). Customers report that the concrete changes applied to products by SeaSide have resulted in equipment that is, overall, more easily cleaned. These changes include, but are not limited to, welded connections (as opposed to bolted connections), gaps between contact faces, opening up structures to allow cleaning “inside” equipment, and generally avoiding areas in which humidity and water may gather and build up. This is important, as there will always be bacteria in the water, which will grow if the area never dries up. The work of creating more cleanable designs is still ongoing and will “never” be finished, as new designs and new equipment will bring their own challenges.

Keywords: Cleaning, design, equipment, fish processing, innovation.

328 Automated Transformation of 3D Point Cloud to Building Information Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Petar Penchev

Abstract:

The digital era has revolutionized architectural practices, with Building Information Modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research presents a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data — a collection of data points in space, typically produced by 3D scanners — into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historical preservation.

Keywords: Algorithmic modeling, Building Information Modeling, point cloud, reconstruction.

327 An Evaluation Method for Two-Dimensional Position Errors and Assembly Errors of a Rotational Table on a 4 Axis Machine Tool

Authors: Jooho Hwang, Chang-Kyu Song, Chun-Hong Park

Abstract:

This paper describes a method to measure and compensate a 4-axis ultra-precision machine tool that generates micro patterns on large surfaces. Such a grooving machine is usually used for making micro molds for many electrical parts, such as light guide plates for LCDs and fuel cells. The ultra-precision machine tool has three linear axes and one rotational table. Shaping is usually used to generate the micro patterns. In the case of machining a pyramid pattern with 50 μm pitch and 25 μm height using a 90° wedge-angle bite, one linear axis is used for the long-stroke motion needed for high cutting speed, and the other linear axes are used for feeding. Triangular patterns can be generated with many long strokes of one axis. A 90° rotation of the workpiece is then needed to make pyramid patterns by superposition of the two machined triangular patterns. For two-dimensional positioning accuracy, the straightness of the two axes out of plane and the squareness between the axes are important. Positioning errors, straightness and squareness were measured by a laser interferometer system, then compensated and confirmed according to ISO 230-6. One of the difficult error motions to measure is the squareness or parallelism between the rotational table and a linear axis. It was investigated by simultaneous motion of the rotary table and the XY axes. This compensation method is introduced in this paper.

Keywords: Ultra-precision machine tool, multi-axis errors, squareness, positioning errors.

326 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline, comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard-negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
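
A minimal PyTorch sketch of the triplet-loss training with online hard-negative mining described above is given below; the embedding size, margin and mining rule are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(num_classes=128)        # ResNet-18 with a 128-d embedding head
triplet = nn.TripletMarginLoss(margin=0.2)

def hard_negative_triplets(embeddings, labels):
    # For every anchor with at least one positive in the batch, pick any
    # positive and the hardest (closest) negative.
    dists = torch.cdist(embeddings, embeddings)
    anchors, positives, negatives = [], [], []
    for i in range(len(labels)):
        pos_mask = (labels == labels[i]) & (torch.arange(len(labels)) != i)
        neg_mask = labels != labels[i]
        if pos_mask.any() and neg_mask.any():
            j = torch.nonzero(pos_mask)[0, 0].item()
            k = torch.nonzero(neg_mask)[dists[i, neg_mask].argmin(), 0].item()
            anchors.append(i); positives.append(j); negatives.append(k)
    return anchors, positives, negatives

def triplet_step(images, labels):
    # images: batch of cropped product regions; labels: 1-D tensor of class ids.
    emb = nn.functional.normalize(encoder(images), dim=1)
    a, p, n = hard_negative_triplets(emb, labels)
    return triplet(emb[a], emb[p], emb[n])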

Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.

Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily “business objects” in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. We can use wireless internet technology to transfer spatial data from server to client or vice versa. However, the problem with the wireless Internet is system bottlenecks that can make the data transfer process inefficient, the reason being the large amount of spatial data. Optimization of the process of transferring and retrieving data is therefore an essential issue that must be considered. An appropriate decision between the R-tree and Quadtree spatial data indexing methods can optimize this process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and Quadtree spatial indexing using an Oracle spatial database in a mobile GIS application. Under our test conditions, choosing the appropriate method between Quadtree and R-tree spatial data indexing in a single spatial database can save up to 42.5% of the time.

Keywords: Indexing, Mobile GIS, MapViewer, Oracle Spatial Database.

324 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment

Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati

Abstract:

This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution under the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for a multiprocessor environment. The BR mechanism attempts to take the faulty tasks back to their initial safe state and then re-executes the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance system that can nullify the erroneous effect is necessary. Current TUF/UA scheduling algorithms use the abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA scheduling domain for the multiprocessor environment has considered transient faults and implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives the set of parameters, events and performance metrics according to a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utilities, making it reliable and efficient for real-time applications in the multiprocessor scheduling environment.
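
The event-driven DES skeleton underlying such a simulator can be sketched as follows; this is an illustrative Python sketch in which the event kinds, timing values and time/utility function are hypothetical placeholders, not the BR_GPUAS implementation.

import heapq

class Simulator:
    # Events are processed in time order from a priority queue; a "fault" event
    # triggers backward recovery: roll the task back to its last safe state and
    # re-execute the affected section instead of aborting it.
    def __init__(self):
        self.clock, self.events, self.seq = 0.0, [], 0

    def schedule(self, delay, kind, task):
        self.seq += 1
        heapq.heappush(self.events, (self.clock + delay, self.seq, kind, task))

    def run(self, horizon):
        while self.events and self.clock <= horizon:
            self.clock, _, kind, task = heapq.heappop(self.events)
            if kind == "release":
                self.schedule(task["exec_time"], "finish", task)
            elif kind == "fault":
                task["state"] = task.get("checkpoint", "initial")   # backward recovery
                self.schedule(task["redo_time"], "finish", task)
            elif kind == "finish":
                task["accrued_utility"] = task["tuf"](self.clock)   # time/utility function

sim = Simulator()
task = {"exec_time": 2.0, "redo_time": 1.0, "checkpoint": "safe",
        "tuf": lambda t: max(0.0, 10.0 - t)}
sim.schedule(0.0, "release", task)
sim.schedule(1.0, "fault", task)
sim.run(horizon=20.0)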

Keywords: Time Utility Function/ Utility Accrual (TUF/UA) scheduling, Real-time system (RTS), Backward Recovery, Multiprocessor, Discrete Event Simulation (DES).

323 Study on Mitigation Measures of Gumti Hydro Power Plant Using Analytic Hierarchy Process and Concordance Analysis Techniques

Authors: K. Majumdar, S. Datta

Abstract:

Electricity is recognized as fundamental to industrialization and to improving the quality of life of the people. Harnessing the immense untapped hydropower potential in the Tripura region opens avenues for growth and provides an opportunity to improve the well-being of the people of the region, while making a substantial contribution to the national economy. The Gumti hydro power plant generates power to mitigate the power crisis in Tripura, India. The first unit of the hydro power plant (5 MW) was commissioned in June 1976, and another two units of 5 MW were commissioned simultaneously. However, out of the 15 MW capacity, only 8-9 MW is produced at present from the Gumti hydro power plant during the rainy season, and during the lean season the production reduces to 0.5 MW due to shortage of water. It is therefore essential to implement mitigation measures so that further deterioration can be prevented and the original capacity can be restored. The decision-making abilities of the Analytic Hierarchy Process (AHP) and Concordance Analysis Techniques (CAT) are utilized to identify the better decision or solution to the present problem. Related attributes are identified by surveying experts and reviewing the available reports and literature. Similar criteria are removed, and ultimately seven relevant ones are identified. All the attributes are compared with each other and rated according to their importance over the others with the help of a pairwise comparison matrix. In the present investigation, different mitigation measures are identified and compared to find the best suitable alternative that can resolve the present uncertainties involving the existence of the Gumti hydro power plant.
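
As a side note on the AHP step, the priority weights are conventionally obtained from the principal eigenvector of the pairwise comparison matrix; a small NumPy sketch with toy judgments (not the paper's actual matrix) is given below.

import numpy as np

def ahp_weights(pairwise):
    # Saaty's eigenvector method: the priority weights are the normalized
    # principal eigenvector of the pairwise comparison matrix.
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]                 # illustrative 3x3 judgments
weights, lam_max = ahp_weights(A)
n = len(A)
ci = (lam_max - n) / (n - 1)        # consistency index
cr = ci / 0.58                      # consistency ratio, using Saaty's random index for n = 3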

Keywords: Concordance Analysis Techniques, Analytic Hierarchy Process, Hydro Power.

322 Community Perceptions and Attitudes Regarding Wildlife Crime in South Africa

Authors: Louiza C. Duncker, Duarte Gonçalves

Abstract:

Wildlife crime is a complex problem with many interconnected facets, which are generally responded to in parts or fragments in an effort to “break down” the complexity into manageable components. However, fragmentation increases complexity as coherence and cooperation become diluted. A whole-of-society approach has been developed towards finding a common goal and an integrated approach to preventing wildlife crime. As part of this development, research was conducted in rural communities adjacent to conservation areas in South Africa to define and comprehend the challenges they face and to understand their perceptions of wildlife crime. The results of the research showed that the perceptions of community members varied: most were in favor of conservation and of protecting rhinos, but only if they derive adequate benefit from it. Regardless of gender, income level, education level, or access to services, conservation was perceived to be both good and bad by the same people. Even though people in the communities are poor, a willingness to stop rhino poaching does exist amongst them, but their perception that parks do not care about people triggered an unwillingness to stop, prevent or report poaching. Understanding the nuances, the history, and the interests and values of community members, as well as the drivers behind poaching mind-sets (intrinsic or driven by transnational organized crime), is imperative to creating sustainable and resilient communities on multiple levels that make a substantial positive impact on people's lives, while also conserving wildlife for posterity.

Keywords: Conservation, community perceptions, wildlife crime, rhino poaching, interest and value creation, whole-of-society approach.

321 Parameters Affecting the Elasto-Plastic Behavior of Outrigger Braced Walls to Earthquakes

Authors: T. A. Sakr, Hanaa E. Abd-El- Mottaleb

Abstract:

Outrigger-braced wall systems are commonly used to provide high-rise buildings with the lateral stiffness required for wind and earthquake resistance. The existence of outriggers adds to the stiffness and strength of walls, as reported by several studies. The effects of different parameters on the elasto-plastic dynamic behavior of outrigger-braced wall systems subjected to earthquakes are investigated in this study. The parameters investigated include outrigger stiffness, concrete strength, and reinforcement arrangement, as the main parameters in wall design. In addition to significantly affecting the wall behavior, such parameters may change the failure mode and delay crack propagation, and consequently failure, as the wall is excited by earthquakes. A bi-linear stress-strain relation for concrete with limited tensile strength, and truss members with a bi-linear stress-strain relation for the reinforcement, were used in the finite element analysis of the problem. The famous earthquake record, El-Centro 1940, is used in the study. Emphasis was given to the lateral drift, normal stresses and crack pattern as behavior-controlling determinants. The results indicate a significant effect of the studied parameters, such that a stiffer outrigger, higher-grade concrete and concentrating the reinforcement at the wall edges enhance the behavior of the system. Concrete stresses and cracking behavior are greatly improved, while smaller improvements in drift are observed.

Keywords: Structures, High rise, Outrigger, Shear Wall, Earthquake, Nonlinear.

320 Budget Optimization for Maintenance of Bridges in Egypt

Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham

Abstract:

Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network, given the limited available budget, using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire was conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible and within an acceptable domain.
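
To illustrate how such a GA formulation can encode the budget limit, a toy fitness sketch is shown below; the maintenance actions, costs, condition gains and penalty weight are hypothetical placeholders, not values from the framework.

import random

ACTIONS = {"none": (0.0, 0.0), "repair": (50.0, 1.0), "rehab": (120.0, 2.5)}  # (cost, condition gain)

def fitness(chromosome, conditions, budget):
    # Maximize the predicted average network condition; solutions whose total
    # cost exceeds the budget are penalized.
    cost = sum(ACTIONS[a][0] for a in chromosome)
    avg = sum(min(9.0, c + ACTIONS[a][1])
              for c, a in zip(conditions, chromosome)) / len(conditions)
    return avg - 0.01 * max(0.0, cost - budget)

def random_chromosome(n):
    # One maintenance action per bridge in the network.
    return [random.choice(list(ACTIONS)) for _ in range(n)]

conditions = [5.0, 3.5, 7.0, 4.2, 6.1]                    # current condition ratings (toy data)
population = [random_chromosome(len(conditions)) for _ in range(50)]
best = max(population, key=lambda ch: fitness(ch, conditions, budget=200.0))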

Keywords: Bridge Management Systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain.

319 Political Economy of Integrated Soil Fertility Management in the Okavango Delta, Botswana

Authors: Oluwatoyin D. Kolawole, Oarabile Mogobe, Lapologang Magole

Abstract:

Although many factors play a significant role in agricultural production and productivity, the importance of soil fertility cannot be underestimated. The extent to which small farmers are able to manage the fertility of their farmlands is crucial in agricultural development particularly in sub-Saharan Africa (SSA).  This paper assesses the nutrient status of selected farmers’ fields in relation to how government policy addresses the allocation of and access to agricultural inputs (e.g. chemical fertilizers) in a unique social-ecological environment of the Okavango Delta in northern Botswana. It also analyses small farmers and soil scientists’ perceptions about the political economy of integrated soil fertility management (ISFM) in the area. A multi-stage sampling procedure was used to elicit quantitative and qualitative information from 228 farmers and 9 soil researchers through the use of interview schedules and questionnaires, respectively. Knowledge validation workshops and focus group discussions (FGDs) were also used to collect qualitative data from farmers. Thirty-three composite soil samples were collected from 30 farmers’ plots in three farming communities of Makalamabedi, Nokaneng and Mohembo for laboratory analysis. While meeting points exist, farmers and scientists have divergent perspectives on soil fertility management. Laboratory analysis carried out shows that most soils in the wetland and the adjoining dry-land/upland surroundings are low in essential nutrients as well as in cation exchange capacity (CEC). Although results suggest the identification and use of appropriate inorganic fertilizers, the low CEC is an indication that holistic cultural practices, which are beyond mere chemical fertilizations, are critical and more desirable for improved soil health and sustainable livelihoods in the area. Farmers’ age (t= -0.728; p≤0.10); their perceptions about the political economy (t = -0.485; p≤0.01) of ISFM; and their preference for the use of local knowledge in soil fertility management (t = -10.254; p≤0.01) had a significant relationship with how they perceived their involvement in the implementation of ISFM.

Keywords: Access, Botswana, ecology, inputs, Okavango Delta, policy, scientists, small farmers, soil fertility.

318 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Antonio Vitale, Nicola Genito, Giovanni Cuciniello, Ferdinando Montemari

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to the tilt-rotor's capability to combine the characteristics of a helicopter and a fixed-wing aircraft in one vehicle. The availability of reliable tilt-rotor simulation models supports the development of such vehicles. Indeed, simulation models are required to design automatic control systems that increase safety, reduce the pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases, such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of the tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid-body state variables. The model also computes information about the rotor flapping dynamics, which is useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to the helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and to balance the differences existing between the helicopter and the tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA tilt-rotor, generated by using a high-fidelity simulation model implemented in the FlightLab environment. The validation of the obtained model was very satisfactory, confirming the validity of the proposed approach.

Keywords: Flapping Dynamics, Flight Dynamics, System Identification, Tilt-Rotor Modeling and Simulation.

317 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given better and more detailed results compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Pipeline erosion is a complex phenomenon to evaluate by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving this type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict the solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the highly dominating parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particles, which leads to high erosion at the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

316 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model

Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa

Abstract:

In this paper, the mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. Return clauses that permit the accumulated wealth of deceased members to be claimed are considered, and the remaining wealth is not equally distributed among the remaining members as in the literature. We assume that, before investment, the surplus, which includes the funds of members who died after retirement, adds to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members, and we obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtain the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we study the effects of the various parameters on the optimal investment strategies and the effect of the risk-aversion level on the efficient frontier. We observe that the optimal investment strategy is the same as in the literature; secondly, we observe that the surplus decreases the proportion of the wealth invested in the risky asset.
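
For background, the mean-variance criterion and the constant elasticity of variance dynamics for the risky asset are typically written as follows; these are standard textbook forms, not equations quoted from the paper:

\max_{\pi} \; \mathbb{E}[X_T^{\pi}] - \frac{\gamma}{2}\,\mathrm{Var}[X_T^{\pi}], \qquad dS_t = S_t\left(\mu\,dt + k\,S_t^{\beta}\,dW_t\right),

where X_T^{\pi} is the terminal fund wealth under strategy \pi, \gamma is the risk-aversion level and \beta is the elasticity parameter; the time inconsistency introduced by the variance term is what motivates the extended Hamilton-Jacobi-Bellman formulation.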

Keywords: DC pension fund, Hamilton Jacobi Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.

315 MHD Stagnation Point Flow towards a Shrinking Sheet with Suction in an Upper-Convected Maxwell (UCM) Fluid

Authors: K. Jafar, R. Nazar, A. Ishak, I. Pop

Abstract:

The present analysis considers the steady stagnation point flow and heat transfer towards a permeable shrinking sheet in an upper-convected Maxwell (UCM) electrically conducting fluid, with a constant magnetic field applied in the transverse direction to the flow and a local heat generation within the boundary layer, with a heat generation rate proportional to (T − T∞)^p. Using a similarity transformation, the governing system of partial differential equations is first transformed into a system of ordinary differential equations, which is then solved numerically using a finite-difference scheme known as the Keller-box method. Numerical results are obtained for the flow and thermal fields for various values of the stretching/shrinking parameter λ, the magnetic parameter M, the elastic parameter K, the Prandtl number Pr, the suction parameter s, the heat generation parameter Q, and the exponent p. The results indicate the existence of dual solutions for the shrinking sheet up to a critical value λc, whose value depends on the values of M, K, and s. In the presence of internal heat absorption (Q < 0), the surface heat transfer rate decreases with increasing p but increases with the parameters Q and s when the sheet is either stretched or shrunk.

Keywords: Magnetohydrodynamic (MHD), boundary layer flow, UCM fluid, stagnation point, shrinking sheet.
