Search results for: Clustering Applications
183 Ingenious Use of Hypo Sludge in M25 Concrete
Authors: Abhinandan Singh Gill
Abstract:
Paper mill sludge is one of the major economic and environmental problems for the paper and board industry; millions of tonnes of sludge are produced worldwide. It is essential to dispose of these wastes safely, without affecting human health, the environment, fertile land, water bodies, or the economy, since such wastes adversely affect the strength, durability and other properties of building materials based on them. Moreover, in developing countries such as India, where non-renewable resources are scarce and the demand for building materials like cement is large, it is essential to develop eco-efficient ways of utilizing paper sludge. In functional terms, paper sludge consists primarily of cellulose fibers, calcium carbonate, china clay, a small amount of silica, and residual chemicals bound with water. The material is sticky and has a high moisture content, which makes it hard to dry. Paper manufacturing usually produces large amounts of solid waste. Paper fibers can be recycled in paper mills only a limited number of times before they become too weak to produce high-quality paper; the small, weak fibers left over, known as low-quality paper fibers, are separated out and become paper sludge. The material is a by-product of the de-inking and re-pulping of paper, and this hypo sludge therefore contains all the inks, dyes, coatings, etc. applied to the paper. This paper presents an overview of published work on the use of hypo sludge in M25 concrete formulations as a supplementary cementitious material, covering properties such as compressive strength and splitting tensile strength, parameters such as modulus of elasticity and density, applications, and, most importantly, the investigation of low-cost concrete made with hypo sludge.
Keywords: Concrete, sludge waste, hypo sludge, supplementary cementitious material.
182 Development of a System for Measuring the Three-Axis Pedal Force in Cycling and Its Applications
Authors: Joo-Hack Lee, Jin-Seung Choi, Dong-Won Kang, Jeong-Woo Seo, Ju-Young Kim, Dae-Hyeok Kim, Seung-Tae Yang, Gye-Rae Tack
Abstract:
For cycling, the analysis of the pedal force is one of the important factors in the study of exercise ability assessment and overuse injuries. Past studies used a two-axis sensor in the sagittal plane to measure the force only in the anterior-posterior and vertical directions, which made it impossible to analyze the loss of force and the injuries in the frontal plane caused by forces in the left and right directions. In this study, which is a basic study towards diverse analyses of the pedal force that consider the forces on both the sagittal and frontal planes, a three-axis pedal force measurement sensor was developed to measure the anterior-posterior (Fx), medio-lateral (Fz), and vertical (Fy) forces. The sensor was fabricated with a size and shape similar to those of a general flat pedal and weighed 550 g, which allowed smooth pedaling. Its measurement range was ±1000 N for Fx and Fz and ±2000 N for Fy, and its non-linearity, hysteresis, and repeatability were approximately 0.5%. The data were sampled at 1000 Hz using a signal collector. Using the developed sensor, the pedaling efficiency (index of efficiency, IE) and the range of left-right (medio-lateral, ML) forces were measured at two seat heights (low and high). The measurements showed that the IE was higher and the ML force range was lower with the high seat position than with the low position. The developed measurement sensor and its application results will be useful in understanding and explaining the complicated pedaling technique, and will enable diverse kinematic analyses of the pedal force on the sagittal and frontal planes.
Keywords: Cycling, Index of effectiveness, Pedal force.
181 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure is a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization, and the different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, to the authors' best knowledge, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. A slip flow model implemented through ANSYS Fluent User Defined Functions (UDFs) was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied through a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and its hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced-pressure conditions.
Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
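As a reference for the flow-regime characterization mentioned in the abstract, the Knudsen number and the commonly quoted regime boundaries are recalled below; the mean-free-path expression assumes an ideal hard-sphere gas, and the regime limits are the conventional textbook values rather than figures taken from this study.

```latex
\[
  \mathrm{Kn} = \frac{\lambda}{L}, \qquad
  \lambda = \frac{k_{B} T}{\sqrt{2}\,\pi d^{2} p},
\]
where $\lambda$ is the molecular mean free path, $L$ a characteristic length (e.g., the particle diameter),
$d$ the molecular diameter and $p$ the operating pressure. Conventionally,
$\mathrm{Kn} < 0.01$ corresponds to continuum flow, $0.01 \le \mathrm{Kn} < 0.1$ to slip flow,
$0.1 \le \mathrm{Kn} < 10$ to transitional flow and $\mathrm{Kn} \ge 10$ to free-molecular flow;
lowering the bed pressure increases $\lambda$ and pushes the gas-particle flow toward the slip regime.
```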
180 Uncertainty Multiple Criteria Decision Making Analysis for Stealth Combat Aircraft Selection
Authors: C. Ardil
Abstract:
Fuzzy set theory and its extensions (intuitionistic fuzzy sets, picture fuzzy sets, and neutrosophic sets) have been widely used to address imprecision and uncertainty in complex decision-making. However, they may struggle with the inherent indeterminacy and inconsistency of real-world situations. This study introduces uncertainty sets as a promising alternative, offering a structured framework for incorporating both types of uncertainty into decision-making processes. This work explores the theoretical foundations and applications of uncertainty sets. A novel decision-making algorithm based on uncertainty set-based proximity measures is developed and demonstrated through a practical application: selecting the most suitable stealth combat aircraft.
The results highlight the effectiveness of uncertainty sets in ranking alternatives under uncertainty. Uncertainty sets offer several advantages, including structured uncertainty representation, robust ranking mechanisms, and enhanced decision-making capabilities due to their ability to account for ambiguity. Future research directions are also outlined, including comparative analysis with existing MCDM methods under uncertainty, sensitivity analysis to assess the robustness of rankings, and broader application to various MCDM problems with diverse complexities. By exploring these avenues, uncertainty sets can be further established as a valuable tool for navigating uncertainty in complex decision-making scenarios.
Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty proximity analysis.
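The abstract does not give the exact form of the uncertainty-set proximity measure, so the sketch below only illustrates the general idea of ranking alternatives by proximity to an ideal solution; the decision matrix, criteria weights and the weighted Euclidean proximity measure used here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def proximity_ranking(scores, weights):
    """Rank alternatives by proximity to an ideal alternative.

    scores  : (n_alternatives, n_criteria) matrix of benefit-type scores.
    weights : criteria weights summing to 1.
    Returns alternative indices, best first.
    """
    # Normalize each criterion to [0, 1] so criteria are comparable.
    crange = scores.max(axis=0) - scores.min(axis=0)
    norm = (scores - scores.min(axis=0)) / (crange + 1e-12)
    ideal = norm.max(axis=0)                       # best value per criterion
    # Weighted Euclidean distance to the ideal alternative (smaller = better).
    dist = np.sqrt(((norm - ideal) ** 2 * weights).sum(axis=1))
    return np.argsort(dist)

# Hypothetical example: 4 candidate aircraft scored on 3 criteria.
scores = np.array([[7.0, 8.5, 6.0],
                   [8.0, 7.0, 7.5],
                   [6.5, 9.0, 8.0],
                   [9.0, 6.0, 7.0]])
weights = np.array([0.5, 0.3, 0.2])
print(proximity_ranking(scores, weights))          # indices of alternatives, best first
```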
179 Iris Recognition Based on the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biological feature of the human body and has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly arise from lighting differences and camera changes. First, the iris region is located; it is then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on a statistical analysis of the image to mark eyelash and eyelid pixels as noise points. To account for variation in feature localization, the rectangular iris image is partitioned into N overlapping sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as the texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean-distance-based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.
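A minimal sketch of the block-wise gradient feature idea described above. The block grid, the use of first-order differences for the four directions, and the mean absolute value (an L1-type low-order norm) are assumptions for illustration; the blocks here are non-overlapping for brevity, whereas the paper uses overlapping blocks.

```python
import numpy as np

def block_gradient_features(iris_strip, n_blocks=(6, 36)):
    """Average directional gradient densities per block of a normalized iris strip.

    iris_strip : 2D grayscale array, e.g. the 60x360 remapped iris region.
    Returns a 1D feature vector (4 directional densities per block).
    """
    h, w = iris_strip.shape
    bh, bw = h // n_blocks[0], w // n_blocks[1]
    feats = []
    for i in range(n_blocks[0]):
        for j in range(n_blocks[1]):
            blk = iris_strip[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(float)
            gx = np.abs(np.diff(blk, axis=1)).mean()          # horizontal gradient density
            gy = np.abs(np.diff(blk, axis=0)).mean()          # vertical gradient density
            gd1 = np.abs(blk[1:, 1:] - blk[:-1, :-1]).mean()  # main-diagonal direction
            gd2 = np.abs(blk[1:, :-1] - blk[:-1, 1:]).mean()  # anti-diagonal direction
            feats.extend([gx, gy, gd1, gd2])                  # low-order (L1) norms
    return np.array(feats)

def match(template, probe):
    """Euclidean distance between two feature vectors (smaller = more similar)."""
    return np.linalg.norm(template - probe)
```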
178 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks
Authors: Ahmad Aljaafreh
Abstract:
This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to capture the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested. Thus, this method efficiently reduces the computational load, which is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are determined experimentally, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Because of the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance the cooperative classification of moving vehicles within a region monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model
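A simplified sketch of SNR-weighted majority voting in the spirit of the fusion scheme described above; the particular weighting function (normalized product of acoustic and radio SNRs) stands in for the paper's fuzzy inference system and is an illustrative assumption.

```python
import numpy as np

def fused_decision(local_classes, acoustic_snr, radio_snr, n_classes):
    """Fuse local node decisions by SNR-weighted majority voting.

    local_classes : class index reported by each sensor node.
    acoustic_snr  : SNR of the acoustic target signal at each node.
    radio_snr     : SNR of the radio link from each node.
    """
    # Stand-in for the fuzzy inference system: weight grows with both SNRs.
    weights = np.asarray(acoustic_snr, float) * np.asarray(radio_snr, float)
    weights = weights / weights.sum()
    votes = np.zeros(n_classes)
    for cls, w in zip(local_classes, weights):
        votes[cls] += w                      # weighted vote for the reported class
    return int(np.argmax(votes))

# Hypothetical example: 5 nodes, 3 target classes.
print(fused_decision([2, 2, 1, 2, 0],
                     acoustic_snr=[12.0, 9.0, 3.0, 15.0, 2.0],
                     radio_snr=[20.0, 18.0, 10.0, 22.0, 5.0],
                     n_classes=3))           # high-SNR nodes dominate: class 2
```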
177 An Intelligent Cascaded Fuzzy Logic Based Controller for Controlling the Room Temperature in Hydronic Heating System
Authors: Vikram Jeganathan, A. V. Sai Balasubramanian, N. Ravi Shankar, S. Subbaraman, R. Rengaraj
Abstract:
Heating systems are a necessity for regions that face extreme cold weather throughout the year. To maintain a comfortable temperature inside a given place, heating systems based on hydronic boilers are used, with the single-pipe system serving as the basis for their operation. It is mandatory for these heating systems to control the room temperature and thus maintain a warm environment. In this paper, regulation of the room temperature over a wide range is established using an Adaptive Fuzzy Controller (AFC). This fuzzy controller automatically detects changes in the outside temperature and correspondingly maintains the inside temperature at a comfortable value. Two separate AFCs are used to carry out this function: one to determine the quantity of heat needed to reach the prospective temperature and to set the desired temperature; the other to control the position of the valve, which is directly proportional to the error between the present room temperature and the user-desired temperature. The fuzzy logic controls the position of the valve according to the heat requirement. The amount by which the valve opens or closes is controlled by five knob positions, varying from minimum to maximum, thereby regulating the amount of heat flowing through the valve. For the given test system data, different defuzzifier methods have been implemented and the results compared. In order to validate the effectiveness of the proposed approach, a fuzzy controller has been designed using test data obtained from a real-time system. The simulations are performed in MATLAB and verified with standard system data. The proposed approach can be implemented for real-time applications.
Keywords: Adaptive fuzzy controller, Hydronic heating system.
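For illustration, a stripped-down stand-in for the valve-position stage described above: the temperature error is mapped to one of the five knob positions through simple triangular memberships and a weighted average. The membership breakpoints and the error range are invented for the example; the paper's actual adaptive fuzzy controller and defuzzifier methods are not reproduced here.

```python
import numpy as np

KNOB_POSITIONS = [0, 1, 2, 3, 4]          # minimum ... maximum valve opening

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def valve_position(error_degc):
    """Map temperature error (desired minus actual, in deg C) to a knob position.

    The [-10, 10] deg C error range and the five membership centres are
    illustrative assumptions; a weighted average serves as the defuzzifier.
    """
    centres = np.linspace(-10, 10, len(KNOB_POSITIONS))
    width = centres[1] - centres[0]
    mu = np.array([tri(error_degc, c - width, c, c + width) for c in centres])
    if mu.sum() == 0:                      # outside the assumed range: saturate
        return KNOB_POSITIONS[-1] if error_degc > 0 else KNOB_POSITIONS[0]
    return float(np.dot(mu, KNOB_POSITIONS) / mu.sum())

print(valve_position(7.5))                 # room much colder than desired -> valve mostly open
```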
176 Survey of Communication Technologies for IoT Deployments in Developing Regions
Authors: Namugenyi Ephrance Eunice, Julianne Sansa Otim, Marco Zennaro, Stephen D. Wolthusen
Abstract:
The Internet of Things (IoT) is a network of connected data processing devices, mechanical and digital machinery, items, animals, or people that may send data across a network without requiring human-to-human or human-to-computer interaction. Each component has sensors that can pick up on specific phenomena, as well as processing software and other technologies that can link to and communicate with other systems and/or devices over the Internet or other communication networks and exchange data with them. IoT is increasingly being used in fields other than consumer electronics, such as public safety, emergency response, industrial automation, autonomous vehicles, the Internet of Medical Things (IoMT), and general environmental monitoring. Consumer-based IoT applications, like smart home gadgets and wearables, are also becoming more prevalent. This paper presents the main IoT deployment areas for environmental monitoring in developing regions and the backhaul options suitable for them, based on a selection of related works. The study includes an overview of existing IoT deployments and the underlying communication architectures, protocols, and technologies that support them. This overview shows that Low Power Wide Area Networks (LPWANs) are very well suited to environmental monitoring architectures designed for remote locations. LoRa technology, particularly the LoRaWAN protocol, has an advantage over other technologies due to its low power consumption, adaptability, and suitable communication range. The current challenges of the various architectures are discussed in detail, with the major issue identified as obstruction of communication paths by buildings, trees, hills, etc.
Keywords: Communication technologies, environmental monitoring, Internet of Things, IoT, IoT deployment challenges.
175 Stereo Motion Tracking
Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software packages that combine the two approaches to perform stereo motion tracking typically employ complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study explores a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera; for stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as the depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in settings ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.
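A compact sketch of the two ingredients combined above: depth recovery from the disparity of a rectified stereo pair, and a constant-velocity Kalman filter for the in-plane motion. The focal length, baseline, and noise covariances are placeholder values, and the estimator is the textbook constant-velocity form rather than the authors' exact implementation.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.12):
    """Depth (m) of a point from its horizontal pixel disparity in a rectified pair."""
    disparity = x_left - x_right
    return focal_px * baseline_m / max(disparity, 1e-6)

class ConstantVelocityKF:
    """Tracks (x, y, vx, vy) in the plane parallel to the camera plane at fixed dt."""
    def __init__(self, dt=1/30, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q, self.R = q * np.eye(4), r * np.eye(2)
        self.x, self.P = np.zeros(4), np.eye(4)

    def step(self, z):
        # Predict state and covariance one interval ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured (x, y) position.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

kf = ConstantVelocityKF()
print(kf.step([100.0, 50.0]), depth_from_disparity(312.0, 298.0))
```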
174 A Three-Dimensional TLM Simulation Method for Thermal Effect in PV-Solar Cells
Authors: R. Hocine, A. Boudjemai, A. Amrani, K. Belkacemi
Abstract:
Temperature rise is a negative factor in almost all systems. It can be caused by self-heating or by the ambient temperature. In solar photovoltaic cells, this temperature rise affects the behavior of the cells. The ability of a PV module to withstand the effects of periodic hot-spot heating, which occurs when cells are operated under reverse-biased conditions, is closely related to the properties of the cell semiconductor material.
In addition, the thermal effect also influences the estimation of the maximum power point (MPP) and the electrical parameters of the PV modules, such as maximum output power, maximum conversion efficiency, internal efficiency, reliability, and lifetime. The cell junction temperature is a critical parameter that significantly affects the electrical characteristics of PV modules. For practical applications of PV modules, it is very important to accurately estimate the junction temperature and analyze the thermal characteristics of the modules. Once the temperature variation is taken into account, a more accurate MPP can be obtained and the maximum utilization efficiency of the PV modules can be achieved.
In this paper, the three-dimensional Transmission Line Matrix (3D-TLM) method was used to map the surface temperature distribution of solar cells operating in the reverse-bias mode. It was observed that some cells exhibited an inhomogeneous surface temperature, resulting in localized heating (hot spots). This hot-spot heating causes irreversible destruction of the solar cell structure. Hot spots can have a deleterious impact on entire solar modules if individual solar cells are heated. The results therefore show clearly that solar cells are capable of self-generating considerable amounts of heat, which should be dissipated very quickly to increase the PV module's lifetime.
Keywords: Thermal effect, Conduction, Heat dissipation, Thermal conductivity, Solar cell, PV module, Nodes, 3D-TLM.
173 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro Grids
Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone
Abstract:
Load Forecasting plays a key role in making today's and tomorrow's Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables features such as higher sustainability, better quality of service, and affordable electricity tariffs. Load Forecasting is comparatively easy and effective at larger geographic scales; in Smart Micro Grids, however, the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from individual loads and households belonging to a Smart Micro Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated in terms of absolute forecast errors and training time. The influence of weather conditions on Load Forecasting is also evaluated. A new definition of Gain is introduced in this paper, which serves as an indicator of short-term prediction capability and time-span consistency. Two models, 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
Keywords: Short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, Gain.
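A minimal sketch of one of the three compared techniques, linear regression used 24 hours ahead: the load of hour t is predicted from the loads 24 and 168 hours earlier. The synthetic load series, the choice of lags, and the use of scikit-learn are illustrative assumptions and do not reproduce the GreenCom setup or the paper's Gain indicator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hourly household load (kW) with daily and weekly periodicity.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 8)
load = (1.5 + 0.8 * np.sin(2 * np.pi * hours / 24)
        + 0.3 * np.sin(2 * np.pi * hours / 168)
        + 0.1 * rng.standard_normal(hours.size))

# Features for a 24-hour-ahead forecast: load 24 h and 168 h before the target hour.
X = np.column_stack([load[168 - 24:-24], load[:-168]])
y = load[168:]
split = int(0.8 * len(y))

model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))       # absolute forecast error, as in the evaluation above
print(f"24-h-ahead MAE: {mae:.3f} kW")
```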
172 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: Autonomous vehicles, deformable part model, DPM, pedestrian recognition.
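A small sketch of the size-and-position pruning step described above: acceptable bounding-box heights are learned per image row from labelled boxes and then used to discard implausible hypotheses. The percentile thresholds and row binning are illustrative assumptions; the frequency-domain convolutions and feature lookup tables of the actual implementation are not shown.

```python
import numpy as np

def learn_height_bounds(labelled_boxes, img_height, n_bins=20, pct=(1, 99)):
    """Per-row-bin range of plausible pedestrian box heights from labelled data.

    labelled_boxes : list of (y_bottom, height) pairs in pixels.
    Returns arrays (low, high) of length n_bins; empty bins fall back to global bounds.
    """
    ys = np.array([b[0] for b in labelled_boxes], float)
    hs = np.array([b[1] for b in labelled_boxes], float)
    bins = np.clip((ys / img_height * n_bins).astype(int), 0, n_bins - 1)
    glo, ghi = np.percentile(hs, pct)
    low, high = np.full(n_bins, glo), np.full(n_bins, ghi)
    for b in range(n_bins):
        if np.any(bins == b):
            low[b], high[b] = np.percentile(hs[bins == b], pct)
    return low, high

def keep_hypothesis(y_bottom, height, low, high, img_height):
    """True if a detection hypothesis has a plausible height for its image row."""
    b = min(int(y_bottom / img_height * len(low)), len(low) - 1)
    return low[b] <= height <= high[b]
```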
171 Evaluation and Analysis of Lean-Based Manufacturing Equipment and Technology System for Jordanian Industries
Authors: Mohammad D. AL-Tahat, Shahnaz M. Alkhalil
Abstract:
The driving forces of international markets are changing continuously; therefore, companies need to gain a competitive edge in such markets. Improving the company's products, processes and practices is no longer optional. Lean production is a production management philosophy that consolidates work tasks with minimum waste, resulting in improved productivity. Lean production practices can be mapped onto many production areas, one of which is Manufacturing Equipment and Technology (MET). Many lean production practices can be implemented in MET, namely specific equipment configurations, total preventive maintenance, visual control, new equipment/technologies, production process reengineering, and a shared vision of perfection. The purpose of this paper is to investigate the implementation level of these six practices in Jordanian industries. To achieve this, a questionnaire survey was designed using a five-point Likert scale. The questionnaire was validated through a pilot study and expert review. A sample of 350 Jordanian companies was surveyed, with a response rate of 83%. The respondents were asked to rate the extent of implementation of each practice. A conceptual relationship model is developed, hypotheses are proposed, and the essential statistical analyses are then performed. An assessment tool that enables management to monitor the progress and effectiveness of lean practice implementation is designed and presented. The results show that the average implementation level of lean practices in MET is 77%, that Jordanian companies are implementing the considered lean production practices successfully, and that the presented model has a Cronbach's alpha value of 0.87, which is good evidence of model consistency and validity of the results.
Keywords: Lean Production, SME applications, Visual Control, New equipment/technologies, Specific equipment configurations, Jordan.
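Since the consistency check above relies on Cronbach's alpha, a short sketch of its computation from Likert-scale responses is given below; the response matrix is hypothetical, and only the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score) is used.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    responses = np.asarray(responses, float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)          # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical five-point Likert responses: 6 respondents x 4 items.
sample = [[4, 5, 4, 4],
          [3, 4, 3, 3],
          [5, 5, 4, 5],
          [2, 3, 2, 2],
          [4, 4, 5, 4],
          [3, 3, 3, 4]]
print(round(cronbach_alpha(sample), 2))   # values above ~0.7-0.8 indicate good internal consistency
```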
170 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related application areas. Most of the eye localization algorithms reported so far still need to be improved in terms of precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors that is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), which consists of n Gabor jets and the average eye coordinates obtained from n model face images, and then localizes the eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely to be very close to the position whose Gabor jet has the best similarity match with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, and extensive search ranges cause a much greater computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden, in which the eyes are first localized based on Gabor feature vectors in a coarse face image obtained by downsampling the original face image, and are then localized in the original-resolution face image using the eye coordinates found in the coarse-scale image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
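A condensed sketch of the similarity-driven search described above. The jet similarity is the usual normalized dot product of Gabor magnitude responses; the jet extraction itself (the Gabor filter bank) is omitted, and the local search radius and the coarse-to-fine strategy noted in the closing comment are illustrative assumptions.

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Normalized dot product of two Gabor jets (vectors of filter magnitudes)."""
    return float(np.dot(jet_a, jet_b) /
                 (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-12))

def localize_eye(extract_jet, model_jets, start_xy, radius=8):
    """Search a (2*radius+1)^2 neighbourhood of start_xy for the best jet match.

    extract_jet : function (x, y) -> Gabor jet at that image position.
    model_jets  : jets from the Eye Model Bunch.
    """
    best_xy, best_sim = start_xy, -1.0
    x0, y0 = start_xy
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            jet = extract_jet(x0 + dx, y0 + dy)
            sim = max(jet_similarity(jet, m) for m in model_jets)
            if sim > best_sim:
                best_sim, best_xy = sim, (x0 + dx, y0 + dy)
    return best_xy

# Coarse-to-fine use: run localize_eye on a downsampled image first, then scale the
# resulting coordinates up and rerun it on the full-resolution image with a small radius.
```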
169 The Challenges of Cloud Computing Adoption in Nigeria
Authors: Chapman Eze Nnadozie
Abstract:
Cloud computing, a technology made possible through virtualization within networks, represents a shift from the traditional ownership of infrastructure and other resources by individual organizations to a more scalable pattern in which computing resources are rented online to organizations, either on a pay-as-you-use basis or by subscription. In other words, cloud computing entails the renting of computing resources (such as storage space, memory, servers, applications, networks, etc.) by a third party to its clients on a pay-as-you-go basis. It is an innovative technology that is globally embraced because of its renowned benefits, foremost of which is its cost-effectiveness for the organizations that engage its services. In Nigeria, the services are provided either directly to companies, mostly by key IT players such as Microsoft, IBM, and Google, or in partnership with other players such as Infoware, Descasio, and Sunnet. This enables organizations to rent IT resources on a pay-as-you-go basis, thereby sparing them the waste involved in acquiring and maintaining IT resources, such as owning a separate data centre. This paper appraises the challenges of cloud computing adoption in Nigeria, bearing in mind the country's peculiarities in terms of infrastructural development. The methodologies used in this paper include research questionnaires, the formulation of hypotheses, and the testing of the formulated hypotheses. The major findings are that there are some addressable challenges to the adoption of cloud computing in Nigeria, and that the country will gain significantly if these challenges, especially in the area of infrastructural development, are well addressed. This is because the research established that there are significant gains derivable from the adoption of cloud computing by organizations in Nigeria. These challenges can, however, be overcome by concerted efforts on the part of government and other stakeholders.
Keywords: Cloud computing, data centre, infrastructure, IT resources, network, servers, virtualization.
168 Design of a Hand-Held, Clamp-on, Leakage Current Sensor for High Voltage Direct Current Insulators
Authors: Morné Roman, Robert van Zyl, Nishanth Parus, Nishal Mahatho
Abstract:
Leakage current monitoring for high voltage transmission line insulators is of interest as a performance indicator. Presently, to the best of our knowledge, there is no commercially available, clamp-on type, non-intrusive device for measuring leakage current on energised high voltage direct current (HVDC) transmission line insulators. The South African power utility, Eskom, is investigating the development of such a hand-held sensor for two important applications; first, for continuous real-time condition monitoring of HVDC line insulators and, second, for use by live line workers to determine if it is safe to work on energised insulators. In this paper, a DC leakage current sensor based on magnetic field sensing techniques is developed. The magnetic field sensor used in the prototype can also detect alternating current up to 5 MHz. The DC leakage current prototype detects the magnetic field associated with the current flowing on the surface of the insulator. Preliminary HVDC leakage current measurements are performed on glass insulators. The results show that the prototype can accurately measure leakage current in the specified current range of 1-200 mA. The influence of external fields from the HVDC line itself on the leakage current measurements is mitigated through a differential magnetometer sensing technique. Thus, the developed sensor can perform measurements on in-service HVDC insulators. The research contributes to the body of knowledge by providing a sensor to measure leakage current on energised HVDC insulators non-intrusively. This sensor can also be used by live line workers to inform them whether or not it is safe to perform maintenance on energized insulators.
Keywords: Direct current, insulator, leakage current, live line, magnetic field, sensor, transmission lines.
167 Theoretical and Analytical Approaches for Investigating the Relations between Sediment Transport and Channel Shape
Authors: Nidal Hadadin
Abstract:
This study investigated the effect of cross-sectional geometry on the sediment transport rate. The processes of sediment transport are generally associated with environmental management, such as pollution caused by the formation of suspended sediment in the channel network of a watershed and the preservation of physical habitats and native vegetation, and with engineering applications, such as the influence of sediment transport on hydraulic structures and flood control design. Many equations have been proposed for computing sediment transport, and the influence of many variables on sediment transport is understood; however, the effect of other variables still requires further research. For open channel flow, sediment transport capacity is recognized to be a function of friction slope, flow velocity, grain size, grain roughness and form roughness, the hydraulic radius of the bed section, and the type and quantity of vegetation cover. The effect of the cross-sectional geometry of the channel on sediment transport is one of the variables that needs additional investigation. The width-depth ratio (W/d) is a comparative indicator of channel shape. The width is the total distance across the channel, and the depth is the mean depth of the channel; the mean depth is best calculated as the total cross-sectional area divided by the top width. Channels with high W/d ratios tend to be shallow and wide, while channels with low W/d ratios tend to be narrow and deep. In this study, the effect of the width-depth ratio on sediment transport was demonstrated theoretically, by inserting the shape factor into the sediment continuity equation, and analytically, by utilizing field data sets for the Yalobusha River. Using both approaches, it was found that as the width-depth ratio increases, the sediment transport decreases.
Keywords: Sediment transport, shape factor, hydraulic geometry, flow discharge, width-depth ratio.
166 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition
Authors: Redouane Tlemsani, Abdelkader Benyettou
Abstract:
This work concerns online Arabic character recognition; the principal motivation is to study Arabic manuscripts with online technology.
The system is a Markovian system, which can be viewed as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in training the complete model (topology and parameters) from training data.
Our approach is based on the Dynamic Bayesian Network formalism. DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependencies) between the dynamic network variables.
In pattern recognition applications, the structure is usually fixed, which obliges us to make some strong assumptions (for example, independence between some variables). Our application concerns online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.
The DBN score and mixed DBN recognition rates are 70.24% and 62.50%, respectively, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.
Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.
165 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia
Authors: N. A. Samat, S. H. Mohd Imam Ma’arof
Abstract:
Dengue is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue disease; therefore, prevention and treatment depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both sets of results are displayed and compared using graphs, tables and maps. The analysis shows that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem of the SMR when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult, and there is no possibility of allowing for spatial correlation between risks in adjacent areas. The drawbacks of this model have motivated many researchers to propose other alternative methods for estimating the risk.
Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.
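For reference, the two estimators compared above can be written compactly as follows; the notation (observed counts O_i, expected counts E_i, gamma hyperparameters a and b) follows the standard disease-mapping formulation rather than symbols defined in the paper.

```latex
\[
  \widehat{\theta}_i^{\,\mathrm{SMR}} = \frac{O_i}{E_i}, \qquad
  O_i \mid \theta_i \sim \mathrm{Poisson}(E_i\,\theta_i), \quad
  \theta_i \sim \mathrm{Gamma}(a, b),
\]
\[
  \widehat{\theta}_i^{\,\mathrm{PG}} = \mathbb{E}\!\left[\theta_i \mid O_i\right]
  = \frac{a + O_i}{b + E_i},
\]
so that regions with $O_i = 0$ still receive a positive, smoothed relative-risk estimate,
which is the advantage over the raw SMR noted above.
```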
164 Non-Linear Load-Deflection Response of Shape Memory Alloys-Reinforced Composite Cylindrical Shells under Uniform Radial Load
Authors: Behrang Tavousi Tehrani, Mohammad-Zaman Kabir
Abstract:
Shape memory alloys (SMAs) are often implemented in smart structures as the active components. Their ability to recover large displacements has been used in many applications, including structural stability/response enhancement and active structural acoustic control. SMA wires or fibers can be embedded in composite cylinders to increase their critical buckling load, improve their load-deflection behavior, and reduce the radial deflections under various thermo-mechanical loadings. This paper presents a semi-analytical investigation of the non-linear load-deflection response of SMA-reinforced composite circular cylindrical shells. The cylindrical shells are under a uniform external pressure load. Based on first-order shear deformation shell theory (FSDT), the equilibrium equations of the structure are derived. The simplified one-dimensional Brinson model is used to determine the SMA recovery force, owing to its simplicity and accuracy. The Airy stress function and the Galerkin technique are used to obtain non-linear load-deflection curves. The results are verified by comparison with those in the literature. Several parametric studies are conducted to investigate the effect of SMA volume fraction, SMA pre-strain value, and SMA activation temperature on the response of the structure. It is shown that suitable use of SMA wires results in a considerable enhancement of the load-deflection response of the shell due to the generation of the SMA tensile recovery force.
Keywords: Airy stress function, cylindrical shell, Galerkin technique, load-deflection curve, recovery stress, shape memory alloy.
163 Greywater Treatment Using Activated Biochar Produced from Agricultural Waste
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
The increase in urbanisation in South Africa has led to an increase in water demand and a decline in freshwater supply. Despite this, poor water usage is still a major challenge in South Africa; for instance, freshwater is still used for non-drinking applications. The freshwater shortage can be alleviated by using other sources of water for non-potable purposes, such as greywater treated with activated biochar produced from agricultural waste. The successful use of activated biochar produced from agricultural waste to treat greywater can be both economically and environmentally beneficial, and greywater treated in this way is considered a cost-effective form of wastewater treatment. This work aimed to determine the ability of activated biochar to remove Total Suspended Solids (TSS), Ammonium (NH4-N), Nitrate (NO3-N), and Chemical Oxygen Demand (COD) from greywater. The experiments were carried out in 800 ml laboratory plastic cylinders used as filter columns. A 2.5 cm layer of gravel was used at the bottom and top of each column to sandwich the activated biochar material. Activated biochar (200 g and 400 g) was loaded in a column and used as a filter medium for greywater. Samples were collected after a week and sent for analysis. Four types of greywater were treated: kitchen, floor cleaning, shower and laundry water. The findings showed 95% removal of TSS, 76% of NO3-N and 63% of COD for kitchen greywater, and 85% removal of NH4-N for bathroom greywater, as the highest removal efficiencies of the studied pollutants. The results showed that activated biochar produced from agricultural waste reduces a certain amount of the pollutants in greywater, and also indicated the ability of activated biochar to treat greywater for on-site non-potable reuse purposes.
Keywords: Activated biochar produced from agriculture waste, ammonium (NH4-N), chemical oxygen demand (COD), greywater, nitrate (NO3-N), total suspended solids (TSS).
162 Study on Landscape Pattern Evolution of Ecological-Living-Industrial Land in Plateau Mountainous Area: A Case Study of Yuxi City, Yunnan Province
Authors: Ying Pan, Li Wu, Jing Zhou, Lan Li
Abstract:
The coordinated development of ecological, living, and industrial land uses is the foundation for the formulation and implementation of current land space planning, and more attention should be paid to plateau mountainous areas. This research is based on spatial analysis technology and the landscape pattern index method, taking Yuxi City, a typical plateau mountainous area, as the study area. Using software such as ArcGIS 10.5 and Fragstats 4.2, together with four remote sensing images of Yuxi City from 1980, 1995, 2005 and 2015, the spatio-temporal evolution and differentiation pattern of ecological-living-industrial land use are discussed. The results show that: (1) In terms of land use type change, ecological land has been the main source of land in Yuxi City from 1980 to 2015, occupying more than 78% of the total area. During this period, the spatial structure of the ecological-living-industrial land changed significantly, mainly in the living land: its area increased markedly from 0.83% of the total area in 1980 to 1.25% in 2015, whereas the changes in ecological land and industrial land were relatively small. (2) In terms of the land use landscape pattern transfer matrix, from 1980 to 2015 industrial land and ecological land in Yuxi City were gradually transferred to living land. (3) Regarding landscape pattern changes, the various landscape pattern indices of Yuxi City indicate that the fragmentation of the ecological-living-industrial land landscape in this region is increasing, the degree of aggregation is decreasing, and the landscape types have changed from relatively simple to relatively rich. The landscape is more diverse, but patch sizes are uneven and the integrity of the ecological space has been damaged.
Keywords: Ecological-living-industrial land, spatio-temporal evolution, landscape pattern, plateau mountainous area.
161 Improving the Exploitation of Fluid in Elastomeric Polymeric Isolator
Authors: Haithem Elderrat, Huw Davies, Emmanuel Brousseau
Abstract:
Elastomeric polymer foam has been used widely in the automotive industry, especially for isolating unwanted vibrations. Such material is able to absorb unwanted vibration due to its combination of elastic and viscous properties. However, the 'creep effect', poor stress distribution and susceptibility to high temperatures are the main disadvantages of such a system. In this study, improvements in the performance of elastomeric foam as a vibration isolator were investigated using the concept of Foam Filled Fluid (FFFluid). In FFFluid devices, the foam takes the form of capsule shapes and is mixed with a viscous fluid, while the mixture is contained in a closed vessel. When the FFFluid isolator is subjected to vibrations, energy is absorbed due to the elastic strain of the foam. As the foam is compressed, there is also movement of the fluid, which contributes to further energy absorption as the fluid shears. Depending on the design adopted, the packaging could also attenuate vibration through energy absorption via friction and/or elastic strain. The present study focuses on the advantages of the FFFluid concept over dry polymeric foam in the role of vibration isolation. This comparison between the performance of dry foam and the FFFluid was made through experimental procedures. The paper concludes by evaluating the performance of the FFFluid isolator in the suspension system of a light vehicle. One outcome of this research is that the FFFluid may be preferable to elastomer isolators in certain applications, as it enables a reduction in the effects of high temperatures and of 'creep effects', thereby increasing reliability and improving load distribution. The stiffness coefficient of the system increased by about 60% when an FFFluid sample was used. The technology represented by the FFFluid is therefore considered by this research to be suitable for application in the suspension system of a light vehicle.
Keywords: Anti-vibration devices, dry foam, FFFluid.
160 The Applications of Toyota Production System to Reduce Wastes in Agricultural Products Packing Process: A Study of Onion Packing Plant
Authors: Paisarn Larpsomboonchai
Abstract:
Agro-industry is one of the major industries that have strong impacts on national economic income, growth, stability, and sustainable development. Moreover, this industry also has strong influences on social, cultural and political issues. Furthermore, as a producer of primary and secondary products, it faces challenges from diverse factors such as demand inconsistency, intense international competition, technological advancements and new competitors. In order to maintain and improve the industry's competitiveness in both domestic and international markets, science and technology are key factors. Besides the hard sciences and technologies, modern industrial engineering concepts such as Just in Time (JIT), Total Quality Management (TQM), Quick Response (QR), Supply Chain Management (SCM) and Lean can be very effective in increasing the efficiency and effectiveness of these agricultural products on the world stage. Onions are one of Thailand's major export products and bring in national income, but the sector also faces challenges in many ways. This paper focuses on the onion packing process and its related activities, such as storage and shipment, at a major packing plant and storage facility in Mae Wang District, Chiang Mai, Thailand, applying Toyota Production System (TPS) or Lean concepts to improve process capability throughout the entire packing and distribution process, to the benefit of the whole onion supply chain. The approach will also be beneficial to other related agricultural products in Thailand and other ASEAN countries.
Keywords: Lean Concepts, Lean in Agro-industry Activities, Packing Process, Toyota Production System (TPS), Waste Reduction.
159 Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, reinforcement learning from human and AI feedback, and the difficulties of maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and optimal methods for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence, employing case studies and real-life examples. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also assuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
158 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model
Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu
Abstract:
The wide range of industrial applications involving boiling flows underlines the necessity of establishing fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force balance approach and a mechanistic frequency model are used for predicting the nucleation site density, bubble departure diameter, and bubble departure frequency. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, with the standard k-ω SST model selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and Interfacial Area Concentration (IAC) are in good agreement with the experimental data. However, the bubble velocity and Sauter Mean Diameter (SMD) are over-predicted. This over-prediction may be caused by the consideration of only dispersed and spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking while sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow prediction.
Keywords: CFD, mechanistic model, subcooled boiling flow, two-fluid model.
157 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: Human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics.
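A minimal sketch of the region-based OCR step described above, using OpenCV and Tesseract (via pytesseract). The crop coordinates for the streaming-data region, the thresholding choice, the digits-only whitelist and the frame filename are illustrative assumptions, not the paper's tuned pipeline, and the Google Vision alternative is not shown.

```python
import cv2
import pytesseract

def read_hmi_value(frame, region):
    """OCR one streaming-data region of an HMI captured by a shop-floor camera.

    frame  : BGR image of the display (e.g., one video frame).
    region : (x, y, w, h) crop containing a numeric readout -- assumed known
             from the segmentation into streaming-data and fixed meta-data regions.
    """
    x, y, w, h = region
    roi = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding copes with moderate lighting changes on the shop floor.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
    return text.strip()

frame = cv2.imread("hmi_frame.png")                 # hypothetical captured frame
print(read_hmi_value(frame, region=(120, 80, 200, 60)))
```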
156 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves, such as Lamb waves, as the information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure could be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as the threshold choice, the number of cycles per bit and the bit rate are optimized. The experimental results are compared on the basis of the average bit error percentage. The results show high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a decrease in bit rate. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.
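A small sketch of the coherent BPSK demodulation principle evaluated above: each received bit interval is correlated with a phase-aligned reference carrier and the sign of the correlation decides the bit. The carrier frequency, sampling rate and cycles-per-bit values are placeholders, and channel effects of the actual Lamb-wave plate (dispersion, reverberation) are ignored.

```python
import numpy as np

FS = 2_000_000          # sampling rate (Hz), placeholder
FC = 200_000            # carrier frequency (Hz), placeholder
CYCLES_PER_BIT = 20     # signal-processing parameter discussed above
SPB = int(FS / FC * CYCLES_PER_BIT)   # samples per bit

def bpsk_modulate(bits):
    t = np.arange(len(bits) * SPB) / FS
    symbols = np.repeat(2 * np.asarray(bits) - 1, SPB)   # 0 -> -1, 1 -> +1
    return symbols * np.sin(2 * np.pi * FC * t)

def bpsk_demodulate(signal):
    t = np.arange(len(signal)) / FS
    reference = np.sin(2 * np.pi * FC * t)               # coherent (phase-aligned) reference
    mixed = signal * reference
    # Integrate over each bit interval; the sign gives the decision.
    corr = mixed[:len(mixed) // SPB * SPB].reshape(-1, SPB).sum(axis=1)
    return (corr > 0).astype(int)

bits = np.random.default_rng(1).integers(0, 2, 64)
rx = bpsk_modulate(bits) + 0.5 * np.random.default_rng(2).standard_normal(64 * SPB)
errors = np.mean(bpsk_demodulate(rx) != bits) * 100
print(f"bit error percentage: {errors:.1f}%")
```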
155 Health Information Technology in Developing Countries: A Structured Literature Review with Reference to the Case of Libya
Authors: Haythem A. Nakkas, Philip J. Scott, Jim S. Briggs
Abstract:
This paper reports a structured literature review of the application of Health Information Technology in developing countries, defined as the World Bank categories of Low-income, Lower-middle-income, and Upper-middle-income countries. The aim was to identify and classify the various applications of health information technology, to assess its current state in developing countries, and to explore potential areas of research. We offer a specific analysis of the application of HIT in Libya as one of the developing countries. A structured literature review was conducted using the following online databases: IEEE, Science Direct, PubMed, and Google Scholar. Publication dates were restricted to 2000-2013. For the PubMed search, publications in English, French, and Arabic were specified. Of the 2681 retrieved articles, 159 met the inclusion criteria and were carefully analyzed and classified; using a content analysis approach, a total of 26 factors affecting the adoption of health information technology were identified. The implementation of health information technology across developing countries is varied. Whilst it was initially expected that financial constraints would have severely limited health information technology implementation, some developing countries like India have nevertheless dominated the literature and taken the lead in conducting scientific research. Comparing the number of studies to the number of countries in each category, we found that Low-income and Lower-middle-income countries had more studies carried out than Upper-middle-income countries. However, whilst IT has been used in various sectors of the economy, the healthcare sector in developing countries is still failing to benefit fully from the potential advantages that IT can offer.
Keywords: Developing Countries, Developed Countries, Factors, Failure, Implementation, Libya, Success.
154 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning
Authors: Mayowa A. Sofowora, Seraphim D. Eyono Obono
Abstract:
The use of information and communication technologies such as computers, mobile phones and the Internet is becoming prevalent in today's world, and it is facilitating access to a vast amount of data, services and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when these are damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the ILembe and UMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations, after checking the reliability and validity of the questionnaires. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on the one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by existing literature; attribution identities are considered in this study from three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.
Keywords: Attribution, Cellphones, E-learning, Reliability