Search results for: Single-ended input differential amplifier
236 A Microcontroller Implementation of Constrained Model Predictive Control
Authors: Amira Kheriji Abbes, Faouzi Bouani, Mekki Ksouri
Abstract:
Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems having input, output or state constraints. Nevertheless, compared to the PID controller, the implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very small scale due to its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of recent advances in this area in the deployment of one of the most studied and applied control techniques in industrial engineering. In this paper, we propose an efficient firmware for the implementation of constrained MPC on the STM32 microcontroller using the interior point method. Indeed, the performance study shows good execution speed and low computational burden. These results encourage the development of predictive control algorithms to be programmed for standard industrial processes. A PID anti-windup controller was also implemented on the STM32 in order to make a performance comparison with the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
Keywords: Embedded software, microcontroller, constrained Model Predictive Control, interior point method, PID anti-windup, Keil tool, C/Cµ language.
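As a rough illustration of the PID anti-windup controller used for comparison (not the authors' STM32 firmware, which is written in C), the following is a minimal discrete-time PID step with a clamping-style anti-windup scheme in Python; all gains, limits, and the sample time are placeholder values:

def pid_antiwindup_step(error, state, kp=1.0, ki=0.5, kd=0.05,
                        dt=0.01, u_min=-1.0, u_max=1.0):
    """One step of a discrete PID controller with clamping anti-windup.
    `state` holds the integrator value and the previous error."""
    integ = state["integ"] + error * dt                  # tentative integrator update
    deriv = (error - state["prev_error"]) / dt

    u_unsat = kp * error + ki * integ + kd * deriv
    u = min(max(u_unsat, u_min), u_max)                  # saturate the actuator command

    # Anti-windup: commit the integrator only when the output is not saturated,
    # or when the error would drive the output back out of saturation.
    if (u == u_unsat or (u_unsat > u_max and error < 0)
            or (u_unsat < u_min and error > 0)):
        state["integ"] = integ
    state["prev_error"] = error
    return u

state = {"integ": 0.0, "prev_error": 0.0}
setpoint, measurement = 1.0, 0.0
u = pid_antiwindup_step(setpoint - measurement, state)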
235 Analysis of Codebook Based Channel Feedback Techniques for MIMO-OFDM Systems
Authors: Muhammad Rehan Khalid, Ahmed Farhan Hanif, Adnan Ahmed Khan
Abstract:
This paper investigates the performance of Multiple- Input Multiple-Output (MIMO) feedback system combined with Orthogonal Frequency Division Multiplexing (OFDM). Two types of codebook based channel feedback techniques are used in this work. The first feedback technique uses a combination of both the long-term and short-term channel state information (CSI) at the transmitter, whereas the second technique uses only the short term CSI. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The effectiveness of these techniques has been demonstrated through the simulation of a MIMO-OFDM feedback system. The results have been evaluated for 4x4 MIMO channels. Simulation results indicate the benefits of the MIMO-OFDM channel feedback system over the one without incorporating OFDM. Performance gain of about 3 dB is observed for MIMO-OFDM feedback system as compared to the one without employing OFDM. Hence MIMO-OFDM becomes an attractive approach for future high speed wireless communication systems.
Keywords: MIMO systems, OFDM, Codebooks, Channel Feedback
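To give a flavour of short-term, codebook-based feedback, the sketch below selects, for one OFDM subcarrier, the codebook entry that maximises the effective channel gain and feeds back only its index; the 4x4 Rayleigh channel and the random unit-norm codebook are placeholders, not the codebooks evaluated in the paper:

import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, B = 4, 4, 16                      # 4x4 MIMO, 16-entry (4-bit) codebook

# Hypothetical unit-norm precoding-vector codebook known to both ends
codebook = rng.standard_normal((B, Nt)) + 1j * rng.standard_normal((B, Nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# Rayleigh channel for one subcarrier (short-term CSI known at the receiver)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# Receiver picks the entry maximising ||H w||^2 and feeds back only its index
gains = np.array([np.linalg.norm(H @ w) ** 2 for w in codebook])
feedback_index = int(np.argmax(gains))    # 16 entries -> 4 feedback bits
w_best = codebook[feedback_index]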
234 An Edge Detection and Filtering Mechanism of Two Dimensional Digital Objects Based on Fuzzy Inference
Authors: Ayman A. Aly, Abdallah A. Alshnnaway
Abstract:
The general idea behind the filter is to average a pixel using other pixel values from its neighborhood, but simultaneously to take care of important image structures such as edges. The main concern of the proposed filter is to distinguish between variations of the captured digital image due to noise and those due to image structure. The edges give the image its appearance of depth and sharpness. A loss of edges makes the image appear blurred or unfocused. However, noise smoothing and edge enhancement are traditionally conflicting tasks. Since most noise filtering behaves like a low pass filter, the blurring of edges and loss of detail seems a natural consequence. Techniques to remedy this inherent conflict often generate new noise due to enhancement. In this work a new fuzzy filter is presented for the noise reduction of images corrupted with additive noise. The filter consists of three stages: (1) define fuzzy sets in the input space to compute a fuzzy derivative for eight different directions, (2) construct a set of IF-THEN rules to perform fuzzy smoothing according to the contributions of neighboring pixel values, and (3) define fuzzy sets in the output space to obtain the filtered, edge-preserved image. Experimental results are obtained to show the feasibility of the proposed approach with two dimensional objects.
Keywords: Additive noise, edge preserving filtering, fuzzy image filtering, noise reduction, two dimensional mechanical images.
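A rough numerical sketch of the idea behind stages (1)-(2), where derivatives in eight directions are turned into weights so that smoothing is weaker across likely edges, is given below; the Gaussian-style membership function and its width are placeholder choices and not the rule base of the paper:

import numpy as np

def directional_smooth(img, sigma=20.0):
    """Edge-aware smoothing: average the 8 neighbours of each pixel,
    weighting each by a fuzzy 'small derivative' membership value."""
    out = img.astype(float).copy()
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]           # 8 directions
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = float(img[i, j])
            num, den = 0.0, 0.0
            for di, dj in offsets:
                neighbour = float(img[i + di, j + dj])
                deriv = neighbour - centre                 # directional derivative
                weight = np.exp(-(deriv / sigma) ** 2)     # membership of "small change"
                num += weight * neighbour
                den += weight
            out[i, j] = num / den
    return out

noisy = np.clip(np.eye(32) * 255 + np.random.normal(0, 15, (32, 32)), 0, 255)
filtered = directional_smooth(noisy)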
233 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry
Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri
Abstract:
This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance in consecutive stations, with the aim of minimizing machine halts at each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, machine sequences, acceptable machine inventories at the end of each period, and the necessity of fulfilling customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil and gas and automotive industries in Iran. By solving the model in GAMS software, the optimal number of runs, end-of-period inventories, and the possible minimum idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end-product delivery to the customers by 20%. Accordingly, the developed model could be easily applied in wire and cable companies for optimal production planning to reduce machine halts in manufacturing stages.
Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.
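A toy goal-programming fragment in the spirit of the model (not the IKO TAK formulation itself, which was solved in GAMS) is sketched below using the PuLP library: a production goal is softened with deviation variables whose weighted sum is minimised, alongside a hard capacity constraint; all numbers are invented for illustration:

from pulp import LpProblem, LpMinimize, LpVariable, value

prob = LpProblem("toy_goal_programming", LpMinimize)

runs = LpVariable("runs", lowBound=0, cat="Integer")   # runs on one machine per period
d_minus = LpVariable("under_goal", lowBound=0)         # shortfall against the demand goal
d_plus = LpVariable("over_goal", lowBound=0)           # surplus (end-of-period inventory)

output_per_run, demand_goal, capacity_runs = 50, 420, 10   # illustrative data

prob += 5 * d_minus + 1 * d_plus            # objective: penalise shortfall more than surplus
prob += output_per_run * runs + d_minus - d_plus == demand_goal   # soft demand goal
prob += runs <= capacity_runs                                     # hard machine capacity

prob.solve()
print(value(runs), value(d_minus), value(d_plus))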
232 Static Analysis of Security Issues of the Python Packages Ecosystem
Authors: Adam Gorine, Faten Spondon
Abstract:
Python is considered the most popular programming language and offers its own ecosystem for archiving and maintaining open-source software packages. This system is called the Python Package Index (PyPI), the repository of this programming language. Unfortunately, one-third of these software packages have vulnerabilities that allow attackers to execute code automatically when a vulnerable or malicious package is installed. This paper contributes to the large-scale empirical studies investigating security issues in the Python ecosystem by evaluating package vulnerabilities. These provide a series of implications that can help the security of software ecosystems by improving the process of discovering, fixing, and managing package vulnerabilities. The vulnerable dataset is generated using the NVD, the National Vulnerability Database, and the Snyk vulnerability dataset. In addition, we evaluated 807 vulnerability reports in the NVD and 3900 publicly known security vulnerabilities in the Python Package Manager (Pip) from the Snyk database from 2002 to 2022. As a result, many Python vulnerabilities are of high severity, followed by medium severity. The most problematic areas have been improper input validation and denial-of-service attacks. A hybrid scanning tool that combines the three scanners Bandit, Snyk, and Dlint, and provides a clear report of code vulnerabilities, is also described.
Keywords: Python vulnerabilities, Bandit, Snyk, Dlint, Python Package Index, ecosystem, static analysis, malicious attacks.
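To give a flavour of the static-analysis step, the snippet below shells out to Bandit (one of the three scanners mentioned) and counts findings by severity from its JSON report; it assumes Bandit is installed and on the PATH, the target path is a placeholder, and the JSON field names reflect Bandit's current report format rather than anything specified in the paper:

import json
import subprocess
from collections import Counter

target = "./some_python_package"   # placeholder path to the code under analysis

# Run Bandit recursively and ask for machine-readable JSON output.
proc = subprocess.run(
    ["bandit", "-r", target, "-f", "json", "-q"],
    capture_output=True, text=True
)
report = json.loads(proc.stdout)

severities = Counter(issue["issue_severity"] for issue in report["results"])
print(severities)   # e.g. Counter({'MEDIUM': 4, 'HIGH': 1, 'LOW': 7})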
231 Application of Neural Network in User Authentication for Smart Home System
Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat
Abstract:
Security has been an important issue and concern in smart home systems. Smart home networks consist of a wide range of wired or wireless devices, so there is a possibility that illegal access to some restricted data or devices may happen. Password-based authentication is widely used to identify authorized users, because this method is cheap, easy and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that happen in some authentication systems. The conventional way to train the network using Backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, Resilient Backpropagation (RPROP), is embedded into the MLP neural network to accelerate the training process. For the data part, 200 sets of user IDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. Mean Square Error (MSE), training time and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers and 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network was developed successfully.
Keywords: Neural Network, User Authentication, Smart Home, Security
230 Integrating Fast Karnough Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions
Authors: Hazem M. El-Bakry
Abstract:
In this paper a new fast simplification method is presented. The method realizes a Karnough map with a large number of variables. In order to accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain. The search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required for the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design for neural networks. Neural networks are used because they are fault tolerant and as a result they can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogenous regions. The approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
Keywords: Boolean Functions, Simplification, Karnough Map, Implementation of Logic Functions, Modular Neural Networks.
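The core trick described above, detecting a group of ones by cross correlation computed in the frequency domain rather than directly in the time domain, can be sketched as follows (a one-dimensional toy example, not the paper's n-variable map layout):

import numpy as np

def detect_group(bits, group):
    """Find start positions where `group` (a run of ones) occurs in `bits`
    using FFT-based cross-correlation instead of a sliding comparison."""
    n = len(bits) + len(group) - 1
    # Cross-correlation via the convolution theorem: IFFT(FFT(x) * conj(FFT(g)))
    corr = np.fft.irfft(np.fft.rfft(bits, n) * np.conj(np.fft.rfft(group, n)), n)
    corr = corr[:len(bits) - len(group) + 1]
    # A full match scores exactly len(group)
    return np.flatnonzero(np.isclose(corr, len(group)))

truth_table = np.array([0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0])
print(detect_group(truth_table, np.ones(4)))   # start indices of groups of four ones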
229 General Regression Neural Network and Back Propagation Neural Network Modeling for Predicting Radial Overcut in EDM: A Comparative Study
Authors: Raja Das, M. K. Pradhan
Abstract:
This paper presents a comparative study between two neural network models, namely the General Regression Neural Network (GRNN) and the Back Propagation Neural Network (BPNN), used to estimate the radial overcut produced during Electrical Discharge Machining (EDM). Four input parameters have been employed: discharge current (Ip), pulse on time (Ton), duty fraction (Tau) and discharge voltage (V). Artificial intelligence techniques have recently emerged as effective tools that can replace time-consuming procedures in various scientific and engineering applications, particularly in the prediction and estimation of complex and nonlinear processes. Both networks are trained, and the prediction results are tested with the unseen validation set of the experiment and analysed. The performance of both networks is found to be in good agreement, with an average percentage error of less than 11%, and the correlation coefficient obtained for the validation data set for both GRNN and BPNN is more than 91%. However, it is much faster to train a GRNN than a BPNN, and GRNN is often more accurate than BPNN. GRNN requires more memory space to store the model, but it features fast learning that does not require an iterative procedure, and it has a highly parallel structure. GRNN networks are slower than multilayer perceptron networks at classifying new cases.
Keywords: Electrical-discharge machining, General Regression Neural Network, Back-propagation Neural Network, Radial Overcut.
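For reference, a GRNN prediction is essentially a kernel-weighted average of the training targets computed in a single pass, which is why it trains faster than a BPNN; a minimal NumPy version is sketched below with made-up EDM data and an arbitrary spread parameter (not the paper's dataset or tuning):

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network: Nadaraya-Watson kernel regression.
    Each prediction is a Gaussian-weighted average of all training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical (normalised) inputs: current Ip, pulse-on time Ton, duty Tau, voltage V
X_train = np.random.rand(40, 4)
y_train = 0.05 + 0.1 * X_train[:, 0] + 0.02 * np.random.rand(40)   # fake radial overcut (mm)
X_test = np.random.rand(5, 4)
print(grnn_predict(X_train, y_train, X_test, sigma=0.3))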
228 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate
Abstract:
Before designing an electrical system, the estimation of load is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand was found by estimating the household appliances multiplied by their ratings and the duration of their operation, but in this paper, information that exists for electrified villages is used to predict the demand, as villages have almost the same lifestyle. This paper describes a method used to predict the average energy consumed in each two-month period for every consumer living in a village by an Artificial Neural Network (ANN). The input data are collected using a regional survey of samples of consumers representing typical types of living, household appliances and energy consumption, and the output data are collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand for different consumers from four villages in different months throughout the year is approximately 12 kWh/day. The model estimates the average demand per day for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which contains the Neural Network Toolbox, was used.
Keywords: Artificial neural network, load estimation, regional survey, rural electrification.
227 A Sensorless Robust Tracking Control of an Implantable Rotary Blood Pump for Heart Failure Patients
Authors: Mohsen A. Bakouri, Andrey V. Savkin, Abdul-Hakeem H. Alomari, Robert F. Salamonsen, Einly Lim, Nigel H. Lovell
Abstract:
Physiological control of a left ventricular assist device (LVAD) is generally a complicated task due to diverse operating environments and patient variability. In this work, a tracking control algorithm based on sliding mode and feed-forward control for a class of discrete-time single input single output (SISO) nonlinear uncertain systems is presented. The controller was developed to track the reference trajectory to a set operating point without inducing suction in the ventricle. The controller regulates the estimated mean pulsatile flow Qp and the mean pulsatility index of pump rotational speed PIω generated from a model of the assist device. We recall the principles of sliding mode control theory and then combine the feed-forward control design with the sliding mode control technique to follow the reference trajectory. The uncertainty is replaced by its upper and lower bounds. The controller was tested in a computer simulation covering two scenarios (preload and ventricular contractility). The simulation results prove the effectiveness and the robustness of the proposed controller.
Keywords: Robust control system, discrete sliding mode, left ventricular assist device, pulsatility index.
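A highly simplified, discrete-time illustration of combining a feed-forward term with a smoothed sliding-mode correction for trajectory tracking is given below; the first-order pump-flow model, gains, and boundary layer are invented for illustration and are not the LVAD model or controller of the paper:

import numpy as np

a, b, dt = 0.95, 0.08, 0.005          # "true" first-order pump-flow plant (toy)
a_nom, b_nom = 0.93, 0.075            # nominal model available to the controller
lam, eta, phi = 1.0, 2.0, 0.05        # sliding gain, switching gain, boundary layer

q, err = 0.0, []
for k in range(2000):
    q_ref = 5.0 + np.sin(2 * np.pi * 1.2 * k * dt)          # reference mean flow (L/min)
    q_ref_next = 5.0 + np.sin(2 * np.pi * 1.2 * (k + 1) * dt)

    s = lam * (q - q_ref)                                    # sliding variable
    u_ff = (q_ref_next - a_nom * q) / b_nom                  # feed-forward from nominal model
    u = u_ff - (eta / b_nom) * np.tanh(s / phi)              # smoothed switching correction

    q = a * q + b * u + 0.02 * np.random.randn()             # plant update + disturbance
    err.append(abs(q - q_ref_next))

print(f"mean tracking error over last 200 steps: {np.mean(err[-200:]):.4f}")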
226 Influence of Paralleled Capacitance Effect in Well-defined Multiple Value Logical Level System with Active Load
Authors: Chih Chin Yang, Yen Chun Lin, Hsiao Hsuan Cheng
Abstract:
Three similar negative differential resistance (NDR) profiles with both a high peak to valley current density ratio (PVCDR) value and a high peak current density (PCD) value in a unity resonant tunneling electronic circuit (RTEC) element are developed in this paper. The PCD values and valley current density (VCD) values of the three NDR curves are all about 3.5 A and 0.8 A, respectively. The PV values of the NDR curves are 0.40 V, 0.82 V, and 1.35 V, respectively. The VV values are 0.61 V, 1.07 V, and 1.69 V, respectively. All PVCDR values reach about 4.4 in the three NDR curves. The PCD value of 3.5 A in the triple PVCDR RTEC element is better than that of other resonant tunneling device (RTD) elements. The high PVCDR value is attributed to the low VCD value of about 0.8 A. The low VCD value is achieved by suitable selection of resistors in the triple PVCDR RTEC element. The low PV value, less than 1.35 V, results in low power dissipation in the triple PVCDR RTEC element. The designed multiple value logical level (MVLL) system using the triple PVCDR RTEC element provides equidistant logical levels. The logical levels of the MVLL system are about 0.2 V, 0.8 V, 1.5 V, and 2.2 V from low voltage to high voltage, and then 2.2 V, 1.3 V, 0.8 V, and 0.2 V from high voltage back to low voltage in a half cycle of the sinusoidal wave. The output levels of the four-level MVLL system are 0.3 V, 1.1 V, 1.7 V, and 2.6 V, which satisfies the NMP condition of a traditional two-bit system. The remarkable logical characteristic of the improved MVLL system with a paralleled capacitor is four significant stable logical levels of about 220 mV, 223 mV, 228 mV, and 230 mV. The stability and articulation of the logical levels of the improved MVLL system are outstanding. The average holding time of the improved MVLL system is approximately 0.14 μs. The holding time of the improved MVLL system is fourfold that of the basic MVLL system. The function of the additional capacitor in the improved MVLL system is thus successfully demonstrated.
Keywords: Capacitance, Logical level, Constant current source
224 Reduction Conditions of Briquetted Solid Wastes Generated by the Integrated Iron and Steel Plant
Authors: Gökhan Polat, Dicle Kocaoğlu Yılmazer, Muhlis Nezihi Sarıdede
Abstract:
Iron oxides are the main input for producing iron in integrated iron and steel plants. During the production of iron from iron oxides, some wastes with high iron content occur. These main wastes can be classified as basic oxygen furnace (BOF) sludge, flue dust and rolling scale. Recycling of these wastes is of great importance both for environmental reasons and for the reduction of production costs. In this study, recycling experiments were performed on basic oxygen furnace sludge, flue dust and rolling scale, which contain 53.8%, 54.3% and 70.2% iron, respectively. These wastes were mixed together with coke as reducer, and the mixtures were pressed to obtain cylindrical briquettes. The briquettes were pressed under various compacting forces from 1 ton to 6 tons. Also, both the stoichiometric and twice the stoichiometric amount of coke were added to investigate the effect of coke amount on the reduction properties of the waste mixtures. Then, the briquettes were reduced at 1000°C and 1100°C for 30, 60, 90, 120 and 150 min in a muffle furnace. According to the results of the reduction experiments, the effects of compacting force, temperature and time on the reduction ratio of the wastes were determined. It was found that a compacting force of 1 ton, a reduction time of 150 min and a temperature of 1100°C are the optimum conditions to obtain a reduction ratio higher than 75%.
Keywords: Iron oxide wastes, reduction, coke, recycling.
223 Evaluation of Power Factor Corrected AC-DC Converters and Controllers to Meet UPS Performance Index
Authors: A. Muthuramalingam, S. Himavathi
Abstract:
Harmonic pollution and low power factor in power systems caused by power converters have been of great concern. To overcome these problems, several converter topologies using advanced semiconductor devices and control schemes have been proposed. The aim of this investigation is to identify a low cost, small size, efficient and reliable ac to dc converter to meet the input performance index of a UPS. The performance of single-phase and three-phase ac to dc converters along with various control techniques is studied and compared. The half bridge converter topology with linear current control is identified as most suitable. It is simple and energy efficient because of single-switch power loss and transformer-less operation of the UPS. The results are validated practically using a prototype built with an IGBT and an analog controller. The performance for both single- and three-phase systems is verified. Digital implementation of the closed loop control achieves higher reliability. Its cost largely depends on the chosen bit precision. The minimal bit precision for optimum converter performance is identified as 16-bit with fixed-point operation. From the investigation and practical implementation it is concluded that the half bridge ac-dc converter along with a digital linear controller meets the performance index of the UPS for single- and three-phase systems.
Keywords: PFC, energy efficient, half bridge, ac-dc converter, boost topology, linear current control, digital bit precision.
222 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to bring a solution to this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset. This dataset consists of 400 face images in 40 classes, with 10 face images per class. Firstly, PCA enables the usage of a smaller network. This reduces the training time of the CNN. Thus, we get rid of the redundancy and preserve the variance with a smaller number of coefficients. Secondly, the K-Means clustering model is trained using the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means characteristics or features serve as the initial values and input data of the CNN. The accuracy and the performance of the proposed method were tested in comparison to other Face Recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method after 90 epochs achieved the highest performance: 99% accuracy and F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, with 84%, during the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
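The pre-processing chain described above (PCA compression followed by K-Means, whose outputs feed the CNN) can be approximated with scikit-learn as below; the data here are random stand-ins with ORL-like dimensions rather than the ORL images themselves, and the component and cluster counts are illustrative:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in for ORL: 400 flattened 92x112 grey-scale faces, 40 classes x 10 images
X = np.random.rand(400, 92 * 112)
y = np.repeat(np.arange(40), 10)

# 1) PCA: keep a small number of components to shrink the downstream network
pca = PCA(n_components=50, whiten=True, random_state=0)
X_pca = pca.fit_transform(X)            # shape (400, 50)

# 2) K-Means on the compressed data; distances to the 40 centres become features
kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X_pca)
X_features = kmeans.transform(X_pca)    # shape (400, 40), input for the CNN stage

print(X_pca.shape, X_features.shape)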
221 Computer Simulation of Low Volume Roads Made from Recycled Materials
Authors: Aleš Florian, Lenka Ševelová
Abstract:
Low volume roads are widely used all over the world. To improve their quality, computer simulation of their behavior is proposed. The FEM model enables the determination of stress and displacement conditions in the pavement and/or in the particular material layers. Different variants of pavement layers, materials used, humidity as well as loading conditions can be studied. Among others, the input information about the material properties of individual layers made from recycled materials is crucial for obtaining results as exact as possible. For this purpose the cyclic-load triaxial test, which evaluates the cyclic-load performance of materials, is a promising test method. The test is able to simulate real traffic loading on particular materials, taking into account the changes in the horizontal stress conditions produced in particular layers by crossings of vehicles. Also, the test specimen can be prepared with different amounts of water. Thus the modulus of elasticity (Young's modulus) of different materials, including recycled ones, can be measured under different conditions of horizontal and vertical stress as well as under different humidity conditions. Using the proposed testing procedure, the modulus of elasticity of recycled materials used in a newly built low volume road is obtained under different stress and humidity conditions set to standard, dry and fully saturated levels. The obtained values of the modulus of elasticity are used in the FEA.
Keywords: FEA, FEM, geotechnical materials, low volume roads, pavement, triaxial test, Young modulus.
220 Ground Motion Modelling in Bangladesh Using Stochastic Method
Authors: Mizan Ahmed, Srikanth Venkatesan
Abstract:
The geological and tectonic framework indicates that Bangladesh is one of the most seismically active regions in the world. The Bengal Basin is at the junction of three major interacting plates: the Indian, Eurasian, and Burma Plates. Besides, there are many active faults within the region, e.g. the large Dauki fault in the north. The country has experienced a number of destructive earthquakes due to the movement of these active faults. Current seismic provisions of Bangladesh are mostly based on earthquake data prior to 1990. Given the record of earthquakes post 1990, there is a need to revisit the design provisions of the code. This paper compares the base shear demand of three major cities in Bangladesh: Dhaka (the capital city), Sylhet, and Chittagong for earthquake scenarios of magnitudes 7.0 Mw, 7.5 Mw, 8.0 Mw, and 8.5 Mw using a stochastic model. In particular, the stochastic model allows the flexibility to input region-specific parameters such as the shear wave velocity profile (developed from the Global Crustal Model CRUST2.0) and to include the effects of attenuation as individual components. Effects of soil amplification were analysed using the Extended Component Attenuation Model (ECAM). Results show that the estimated base shear demand is higher in comparison with the code provisions, leading to the suggestion of additional seismic design consideration in the study regions.
Keywords: Attenuation, earthquake, ground motion, stochastic, seismic hazard.
219 Valuation of Green Commercial Office Building: A Preliminary Study of Malaysian Valuers’ Insight
Authors: Tuti Haryati Jasimin, Hishamuddin Mohd Ali
Abstract:
Malaysia’s green building development is gaining momentum and green buildings have become a key focus area, especially within the commercial sector, with the encouragement of government legislation and policy. Due to the emerging awareness among market players of the benefits associated with the ownership of green buildings in Malaysia, there is a need for valuers to incorporate consideration of sustainability into their assessments of property market value to ensure that green buildings continue to increase in the market. This paper analyses valuers’ current perceptions of valuation practices with regard to green issues in Malaysia. The study was based on a survey of registered real estate valuers and experts whose work is related to valuation in the Klang Valley area, who rated their views regarding the valuation of green buildings. The findings present evidence that even though Malaysian valuers have limited knowledge of green buildings, they recognise the importance of incorporating green features in the valuation process. The incorporation of green features in valuations in practice is hindered by the lack of sufficient transaction data in the market. Furthermore, valuers experienced difficulty in identifying the various input parameters of green buildings and how to adjust them in order to correctly reflect the benefit of sustainability features in the valuation process. This paper focuses on the present challenges confronted by Malaysian valuers with regard to incorporating green features in their valuations.
Keywords: Green commercial office building, Malaysia, valuers’ perception, valuation.
218 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses consider them fixed and known. The probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of values of the safety factor, thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM method, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and perform the consequent risk analysis, which is used to calculate the risk and analyse mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of the risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation and closure by means of risk management.
Keywords: Probabilistic methods, risk assessment, risk management, slope stability.
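As a pointer to how the probabilistic side works, the fragment below estimates a probability of failure by Monte Carlo for an infinite-slope safety factor with random cohesion and friction angle; the distributions, slope geometry, limit-state function, and consequence cost are all hypothetical and far simpler than the slope analysed in the paper:

import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Random soil parameters (illustrative lognormal/normal assumptions)
c = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=N)       # cohesion, kPa
phi = np.radians(rng.normal(30.0, 3.0, size=N))               # friction angle, rad

gamma, H, beta = 18.0, 5.0, np.radians(35.0)                  # unit weight, depth, slope angle

# Infinite-slope factor of safety for each random sample
FS = (c + gamma * H * np.cos(beta) ** 2 * np.tan(phi)) / (gamma * H * np.sin(beta) * np.cos(beta))

pf = np.mean(FS < 1.0)                # probability of failure
risk = pf * 2.5e6                     # times an assumed consequence cost (currency units)
print(f"mean FS = {FS.mean():.2f}, Pf = {pf:.4f}, risk = {risk:,.0f}")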
217 Design of a Hand-Held, Clamp-on, Leakage Current Sensor for High Voltage Direct Current Insulators
Authors: Morné Roman, Robert van Zyl, Nishanth Parus, Nishal Mahatho
Abstract:
Leakage current monitoring for high voltage transmission line insulators is of interest as a performance indicator. Presently, to the best of our knowledge, there is no commercially available, clamp-on type, non-intrusive device for measuring leakage current on energised high voltage direct current (HVDC) transmission line insulators. The South African power utility, Eskom, is investigating the development of such a hand-held sensor for two important applications; first, for continuous real-time condition monitoring of HVDC line insulators and, second, for use by live line workers to determine if it is safe to work on energised insulators. In this paper, a DC leakage current sensor based on magnetic field sensing techniques is developed. The magnetic field sensor used in the prototype can also detect alternating current up to 5 MHz. The DC leakage current prototype detects the magnetic field associated with the current flowing on the surface of the insulator. Preliminary HVDC leakage current measurements are performed on glass insulators. The results show that the prototype can accurately measure leakage current in the specified current range of 1-200 mA. The influence of external fields from the HVDC line itself on the leakage current measurements is mitigated through a differential magnetometer sensing technique. Thus, the developed sensor can perform measurements on in-service HVDC insulators. The research contributes to the body of knowledge by providing a sensor to measure leakage current on energised HVDC insulators non-intrusively. This sensor can also be used by live line workers to inform them whether or not it is safe to perform maintenance on energized insulators.
Keywords: Direct current, insulator, leakage current, live line, magnetic field, sensor, transmission lines.
216 Design and Development of iLON Smart Server Based Remote Monitoring System for Induction Motors
Authors: G. S. Ayyappan, M. Raja Raghavan, R. Poonthalir, Kota Srinivas, B. Ramesh Babu
Abstract:
Electrical energy demand in the world, and particularly in India, is increasing drastically faster than its production over time. In order to reduce the demand-supply gap, conserving energy becomes mandatory. Induction motors are the main driving force in industry and contribute to about half of the total plant energy consumption. By effective monitoring and control of induction motors, a huge amount of electricity can be saved. This paper deals with the design and development of such a system, which employs an iLON Smart Server and motor performance monitoring nodes. These nodes monitor the performance of induction motors on-line, on-site and in-situ in the industries. Each node monitors the performance of a motor by simply measuring the electrical power input and motor shaft speed, coupled with a genetic algorithm to estimate motor efficiency. The nodes are connected to the iLON Server through an RS485 network. The web server collects the motor performance data from the nodes, displays it online, logs it periodically, analyzes it, generates alerts, and produces reports. The system could be effectively used to operate the motor around its Best Operating Point (BOP) as well as to perform Life Cycle Assessment of induction motors used in industries in continuous operation.
Keywords: Best operating point, iLON smart server, motor asset management, LONWORKS, Modbus RTU, motor performance.
215 Tide Contribution in the Flood Event of Jeddah City: Mathematical Modelling and Different Field Measurements of the Groundwater Rise
Authors: Aïssa Rezzoug
Abstract:
This paper aims to bring new elements that demonstrate that the tide causes the groundwater to rise in the shoreline band on which urban areas occur, especially in the western coastal cities of the Kingdom of Saudi Arabia, like Jeddah. The reason for the last inundation events in Jeddah was the groundwater rise in the city coupled, at the same time, with a strong precipitation event. This paper illustrates the tide's participation in increasing the groundwater level significantly. It shows that the reason for internal groundwater recharge within the urban area is not only the excess water supply coming from surrounding areas due to human activity, combined with the lack of a sufficient and efficient sewage system, but also the tide effect. The research study follows a quantitative method to assess groundwater level rise risks through many in-situ measurements and mathematical modelling. The proposed approach highlights the groundwater level, in the urban areas of the city on the shoreline band, reaching the high tide level without considering any input from precipitation. Despite the small tide in the Red Sea compared to other oceanic coasts, the groundwater level is considerably enhanced by the tide from the seaside and by the freshwater table from the landside of the city. In these conditions, the groundwater level becomes high in the city and prevents the soil from evacuating quickly enough the surface flow caused by a storm event, as was observed in the last historical flood catastrophe of Jeddah in 2009.
Keywords: Flood, groundwater rise, Jeddah, tide.
214 Preparation and Characterization of Pure PVA and PVA/MMT Matrix: Effect of Thermal Treatment
Authors: Albana Hasimi, Edlira Tako, Partizan Malkaj, Elvin Çomo, Blerina Papajani, Mirela Ndrita, Ledjan Malaj
Abstract:
Many endeavors have been made during the last years to develop new artificial polymeric membranes which fulfill the demanded conditions for biomedical uses. One of the most tested polymers is poly(vinyl alcohol) [PVA]. Our teams are investigating the possibility of using PVA for personal protective equipment against COVID-19. For such equipment, we explore the possibility of modifying the properties of the polymer by adding Montmorillonite [MMT]. Heat treatment above the glass transition temperature is used to improve the mechanical properties, mainly by increasing the crystallinity of the polymer, which acts as a physical network. Temperature-Modulated Differential Scanning Calorimetry (TMDSC) measurements indicated that the presence of 0.5% MMT in PVA causes a higher Tg value and a shaped crystallinity peak. Decomposition is observed at two of the melting points of the crystals during heating from 25 °C to 240 °C, and overlap of the recrystallization ridges is observed during cooling from 240 °C to 25 °C. This is indicative of the presence of two types (quality or structure) of polymer crystals. On the other hand, some indication of improvement of the quality of the crystals by heat treatment is given by the distinct non-reversing contribution to melting. Data on sorption and transport of water in PVA films, both pure PVA and the PVA/MMT matrix modified by thermal treatment, are presented. The membranes become more rigid as a result of the heat treatment, and because of this the water uptake is significantly lower. This is indicated by analysis of the resulting water uptake kinetics. The presence of 0.5% w/w of MMT has no significant impact on the properties of the PVA membranes. Water uptake kinetics deviate from Fick's law due to slow relaxation of the glassy polymer matrix for all types of membranes.
Keywords: Crystallinity, montmorillonite, nanocomposite, poly(vinyl alcohol).
213 Nonlinear Sensitive Control of Centrifugal Compressor
Authors: F. Laaouad, M. Bouguerra, A. Hafaifa, A. Iratni
Abstract:
In this work, we treat the control problems of complex chemical and petrochemical plant processes, taking the centrifugal compressor as an example: a system that is very complex in both its physical structure and its behaviour (surge phenomenon). We propose to study the applicability of recent control approaches to the compressor behaviour and consequently evaluate their contribution in the practical and theoretical fields. Facing the complexity of the studied industrial process, we choose to resort to fuzzy logic for the analysis and treatment of its control problem, since these techniques constitute the only framework in which the different types of imperfect knowledge (uncertainties, inaccuracies, etc.) can be treated jointly, offering suitable tools to characterise them. In the particular case of the centrifugal compressor, these imperfections are interpreted as modelling errors, neglected dynamics, non-modellable dynamics and parametric variations. The purpose of this paper is to produce a totally robust nonlinear controller design method to stabilize the compression process at its optimum steady state by manipulating the gas flow rate. In order to cope with both the parameter uncertainty and the structured nonlinearity of the plant, the proposed method consists of a linear steady state regulation that ensures robust optimal control and of a nonlinear compensation that achieves exact input/output linearization.
Keywords: Compressor, Fuzzy logic, Surge control, Bilinear controller, Stability analysis, Nonlinear plant.
212 A Software Framework for Predicting Oil-Palm Yield from Climate Data
Authors: Mohd. Noor Md. Sap, A. Majid Awan
Abstract:
Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data for finding spatio-temporal patterns in climate data using kernel methods, which offer the strength to deal with complex data. This work draws inspiration from the notion that a non-linear data transformation into some high dimensional feature space increases the possibility of linear separability of the patterns in the transformed space. Therefore, it simplifies exploration of the associated structure in the data. Kernel methods implicitly perform a non-linear mapping of the input data into a high dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, for effective and efficient data analysis by exploring patterns and structures in the data, and thus can be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield
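A bare-bones version of kernel k-means with per-point weights (omitting the spatial-constraint and robustness terms of the paper) is sketched below in NumPy to show the mechanics of assigning points by kernel-space distance; the RBF bandwidth, uniform weights, and synthetic data are placeholders:

import numpy as np

def weighted_kernel_kmeans(K, w, k, n_iter=20, seed=0):
    """Weighted kernel k-means on a precomputed kernel matrix K (n x n)
    with per-point weights w; returns hard cluster labels."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            wc = w[mask]
            sw = wc.sum()
            if sw == 0:                    # an emptied cluster is effectively dropped
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - m_c||^2 expanded in terms of kernel entries
            second = (K[:, mask] @ wc) / sw
            third = wc @ K[np.ix_(mask, mask)] @ wc / sw ** 2
            dist[:, c] = np.diag(K) - 2.0 * second + third
        labels = dist.argmin(axis=1)
    return labels

X = np.random.rand(150, 5)                               # e.g. climate attributes per cell
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 0.5 ** 2))                          # RBF kernel, placeholder bandwidth
w = np.ones(len(X))                                       # uniform weights for the sketch
labels = weighted_kernel_kmeans(K, w, k=4)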
211 AJcFgraph - AspectJ Control Flow Graph Builder for Aspect-Oriented Software
Authors: Reza Meimandi Parizi, Abdul Azim Abdul Ghani
Abstract:
The ever-growing usage of the aspect-oriented development methodology in the field of software engineering requires tool support for both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed to automate and facilitate development. For instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs, in any paradigm, pursued in static analysis is best served by a high-level program representation, such as the Control Flow Graph (CFG). This is why such analysis can more easily locate common programmatic idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be more closely maintained. However, although current research defines, to some extent, good concepts and foundations for control flow analysis of aspect-oriented programs, it does not provide a concrete tool that can solely construct the CFG of these programs. Furthermore, most of these works focus on addressing other issues regarding Aspect-Oriented Software Development (AOSD), such as testing or data flow analysis, rather than the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control flow graph construction tool called AJcFgraph Builder. The given tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.
Keywords: Aspect-Oriented Software Development, AspectJ, Control Flow Graph, Data Flow Analysis
210 Online Signature Verification Using Angular Transformation for e-Commerce Services
Authors: Peerapong Uthansakul, Monthippa Uthansakul
Abstract:
The rapid growth of e-Commerce services has been clearly observed in the past decade. However, the methods used to verify authenticated users still widely depend on numeric approaches. The search for other verification methods suitable for online e-Commerce is therefore an interesting issue. In this paper, a new online signature-verification method using angular transformation is presented. Delay shifts existing in online signatures are estimated by an estimation method relying on angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. Then the estimated delay shift is captured by comparison with the selected reference signature, and the matching error can be computed as the main feature used in the verification process. The threshold offsets are calculated from the two types of error characteristics of the signature verification problem, the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level of these two error rates depends on the decision threshold chosen, whose value is set so as to realize the Equal Error Rate (EER; FAR = FRR). The experimental results show that, through simple programming employed on the Internet for demonstrating e-Commerce services, the proposed method can provide 95.39% correct verification, 7% better than a DP-matching-based signature-verification method. In addition, signature verification with extracted components provides more reliable results than decision making on the whole signature.
Keywords: Online signature verification, e-Commerce services, Angular transformation.
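The threshold-selection step based on FAR, FRR, and the EER can be reproduced generically as below, using made-up genuine and forged matching-error scores in place of the angular-transformation feature of the paper:

import numpy as np

rng = np.random.default_rng(1)
genuine_err = rng.normal(0.8, 0.3, 500)     # matching errors for genuine signatures
forged_err = rng.normal(2.0, 0.5, 500)      # matching errors for forgeries (larger)

thresholds = np.linspace(0, 4, 400)
# A signature is accepted when its matching error is below the threshold
frr = np.array([(genuine_err >= t).mean() for t in thresholds])  # false rejections
far = np.array([(forged_err < t).mean() for t in thresholds])    # false acceptances

i = np.argmin(np.abs(far - frr))            # operating point closest to FAR = FRR
print(f"EER ~ {(far[i] + frr[i]) / 2:.3f} at threshold {thresholds[i]:.2f}")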
209 Modeling Spatial Distributions of Point and Nonpoint Source Pollution Loadings in the Great Lakes Watersheds
Authors: Chansheng He, Carlo DeMarchi
Abstract:
A physically based, spatially-distributed water quality model is being developed to simulate spatial and temporal distributions of material transport in the Great Lakes Watersheds of the U.S. Multiple databases of meteorology, land use, topography, hydrography, soils, agricultural statistics, and water quality were used to estimate nonpoint source loading potential in the study watersheds. Animal manure production was computed from tabulations of animals by zip code area for the census years of 1987, 1992, 1997, and 2002. Relative chemical loadings for agricultural land use were calculated from fertilizer and pesticide estimates by crop for the same periods. Comparison of these estimates to the monitored total phosphorus load indicates that both point and nonpoint sources are major contributors to the total nutrient loads in the study watersheds, with nonpoint sources being the largest contributor, particularly in the rural watersheds. These estimates are used as the input to the distributed water quality model for simulating pollutant transport through surface and subsurface processes to Great Lakes waters. Visualization and GIS interfaces have been developed to visualize the spatial and temporal distribution of pollutant transport in support of water management programs.
Keywords: Distributed Large Basin Runoff Model, Great Lakes Watersheds, nonpoint source pollution, point sources.
208 Effect of Sensory Manipulations on Human Joint Stiffness Strategy and Its Adaptation for Human Dynamic Stability
Authors: Aizreena Azaman, Mai Ishibashi, Masanori Ishizawa, Shin-Ichiroh Yamamoto
Abstract:
Sensory input plays an important role in the human posture control system, which initiates strategies to counteract any unbalanced condition and thus prevent falls. In a previous study, joint stiffness was observed to be able to describe certain issues regarding movement performance. However, the correlation between balance ability and joint stiffness still remains unknown. In this study, the joint stiffening strategy at the ankle and hip was observed under different sensory manipulations, and its correlation with a conventional clinical test (Functional Reach Test) for balance ability was investigated. In order to create an unstable condition, two different surface perturbations (tilt up-tilt down (TT) and forward-backward (FB)) at four different frequencies (0.2, 0.4, 0.6 and 0.8 Hz) were introduced. Furthermore, four different sensory manipulation conditions (involving vision and the vestibular system) were applied to the subjects, who were asked to maintain their position as far as possible. The results suggest that joint stiffness was high during difficult balance situations. Subjects with poorer balance generated higher average joint stiffness than those with better balance. In addition, the adaptation of the posture control system under repetitive external perturbation was also reduced under sensory-limited conditions. Overall, analysis of the joint stiffening response makes it possible to predict unbalanced situations faced by humans.
Keywords: Balance ability, joint stiffness, sensory, adaptation, dynamic.
207 An Experimental Study on the Effect of Operating Parameters during the Micro-Electro-Discharge Machining of Ni Based Alloy
Authors: Asma Perveen, M. P. Jahan
Abstract:
Ni alloys have managed to cover a wide range of applications such as the automotive, oil and gas, and aerospace industries. However, these alloys impose challenges when using conventional machining technologies. On the other hand, Micro-Electro-Discharge Machining (micro-EDM) is a non-conventional machining method that uses controlled spark energy to remove material irrespective of the material's hardness. There has always been huge interest from industry in developing optimum methodologies and parameters in order to enhance the productivity of micro-EDM in terms of reducing machining time and tool wear for different alloys. Therefore, the aim of this study is to investigate the effects of the micro-EDM process parameters, in order to find their optimal values. The input process parameters include voltage, capacitance, and electrode rotational speed, whereas the output parameters considered are machining time, entrance diameter of the hole, overcut, tool wear, and crater size. The surface morphology and element characterization are also investigated with the use of SEM and EDX analysis. The experimental results indicate a reduction in machining time with increasing discharge energy. Discharge energy also contributes to the enlargement of the entrance diameter as well as the overcut. In addition, tool wear is reduced with increasing discharge energy. Moreover, crater size is found to increase with discharge energy.
Keywords: Micro EDM, Ni alloy, discharge energy, micro-holes.
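In RC-type micro-EDM the discharge energy referred to above is commonly taken as the stored capacitor energy, E = 0.5 C V^2; the short calculation below tabulates it for a few illustrative voltage and capacitance settings (not necessarily those used in the experiments):

# Discharge energy per spark in an RC-type micro-EDM circuit: E = 0.5 * C * V^2
voltages = [80, 100, 120]                # V, illustrative open-circuit voltages
capacitances = [100e-12, 1e-9, 10e-9]    # F, illustrative capacitor settings

for V in voltages:
    for C in capacitances:
        E = 0.5 * C * V ** 2             # joules per discharge
        print(f"V = {V:>3} V, C = {C:.0e} F -> E = {E * 1e6:.3f} uJ")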