Search results for: UAV noise
227 Crab Shell Waste Chitosan-Based Thin Film for Acoustic Sensor Applications
Authors: Maydariana Ayuningtyas, Bambang Riyanto, Akhiruddin Maddu
Abstract:
Industrial waste from crustacean shells, such as shrimp and crab, is considered one of the major contributors to environmental pollution. Waste processing mechanisms that form new, practical substances with added value have been developed. Chitosan, a substance derived from chitin, which is obtained from crab and shrimp shells, performs prodigiously in a broad range of applications. A chitosan composite-based diaphragm is a new inspiration in fiber optic acoustic sensor advancement. The elastic modulus, dynamic response, and sensitivity to acoustic waves of chitosan-based composite film give this organic-based sound-detecting material great potential. The objective of this research was to develop a chitosan diaphragm for a fiber optic microphone system. The formulation was conducted by blending 5% polyvinyl alcohol (PVA) solution with chitosan dissolved at 0%, 1% and 2%, in a 1:1 ratio, respectively. The composite diaphragms were characterized for their morphological and mechanical properties to predict the desired acoustic sensor sensitivity. The composite with 2% chitosan indicated optimum performance, with 242.55 µm thickness, 67.9% relative humidity, and 29-76% light transmittance. The Young's modulus of the 2%-chitosan composite material was 4.89×10⁴ N/m², which generated a voltage amplitude of 0.013 V and a sensitivity of 3.28 mV/Pa at 1 kHz. Based on the results above, chitosan from crustacean shell waste can be considered a viable alternative material for the sensing pad of a fiber optic acoustic sensor. Further research into chitosan utilisation is proposed for novel optical microphone development in anthropogenic noise control efforts for environmental and biodiversity conservation.
Keywords: acoustic sensor, chitosan, composite, crab shell, diaphragm, waste utilisation
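As a quick cross-check of the reported numbers, sensitivity is by definition output voltage divided by sound pressure, so the two figures together imply a specific test sound level. A minimal Python sketch of that arithmetic (the sound level is inferred here, not stated in the abstract):

```python
# Hedged consistency check: a 3.28 mV/Pa sensitivity producing a 0.013 V
# amplitude implies a specific test sound pressure (inferred, not reported).
import math

voltage_amplitude = 0.013   # V, reported output at 1 kHz
sensitivity = 3.28e-3       # V/Pa, reported sensitivity

pressure = voltage_amplitude / sensitivity      # Pa
spl_db = 20 * math.log10(pressure / 20e-6)      # dB SPL re 20 uPa

print(f"Implied sound pressure: {pressure:.2f} Pa (~{spl_db:.0f} dB SPL)")
# ~3.96 Pa, i.e. roughly 106 dB SPL, a plausible diaphragm test level
```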
Procedia PDF Downloads 257
226 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract:
Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has faded amplitude due to multipath effects. These effects make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in reducing the missed-detection probability as compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability for low signal-to-noise scenarios over principal component analysis-based energy detection.
Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
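The decision flow described here (conventional energy detection first, with estimation and equalization triggered only when missed detections reach 50%) can be outlined as a short Monte Carlo skeleton. Everything below is illustrative: the SNR, fading model and threshold are assumptions, and the scattering-transform estimator itself is only stubbed as a comment:

```python
# Illustrative Monte Carlo skeleton of the two-stage detection flow described
# above; not the paper's simulation setup.
import numpy as np

rng = np.random.default_rng(0)

def run_trials(snr_db=-8, n=1024, trials=500):
    noise_p = 10 ** (-snr_db / 10)
    threshold = 1.2 * n * noise_p          # placeholder energy threshold
    misses = 0
    for _ in range(trials):
        s = rng.choice([-1.0, 1.0], n)                 # BPSK primary user
        h = rng.normal(0, 0.5, 4)                      # frequency-selective taps
        x = np.convolve(s, h, mode="same") + rng.normal(0, np.sqrt(noise_p), n)
        if np.sum(np.abs(x) ** 2) <= threshold:        # energy detection
            misses += 1
    return misses / trials

p_md = run_trials()
print(f"Missed-detection probability: {p_md:.2f}")
if p_md >= 0.5:
    # Here the paper estimates the channel blindly with a modified-Morlet
    # scattering transform, equalizes, and re-applies energy detection.
    print("Trigger blind channel estimation + equalization stage")
```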
Procedia PDF Downloads 196
225 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks
Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang
Abstract:
The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical in pace with the value-added multimedia services growing increasingly in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function on the distance weight ratio, aimed at suppressing co-channel interference in the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be obviously improved through the spectrum allocation scheme and that the users' downlink quality of service can be satisfied. Besides, the average spectrum efficiency in the cellular network can be significantly promoted, as the simulation results show.
Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation
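A minimal way to realize such a non-cooperative game is best-response iteration: each femto base station repeatedly switches to the channel that maximizes its own utility, with interference accruing only from co-channel stations. The sketch below assumes a distance-weighted negative-logarithm penalty in the spirit of, though not identical to, the paper's utility function:

```python
# Toy best-response channel-selection game among femto base stations.
import numpy as np

rng = np.random.default_rng(1)
n_fbs, n_channels = 8, 3
pos = rng.uniform(0, 100, (n_fbs, 2))            # femtocell positions (m)
channel = rng.integers(0, n_channels, n_fbs)     # initial strategies

def utility(i, ch):
    u = 0.0
    for j in range(n_fbs):
        if j != i and channel[j] == ch:          # co-channel interferers only
            d = np.linalg.norm(pos[i] - pos[j])
            u -= np.log(1.0 + 100.0 / d)         # stronger penalty when closer
    return u

for sweep in range(100):                         # best-response iteration
    changed = False
    for i in range(n_fbs):
        best = max(range(n_channels), key=lambda ch: utility(i, ch))
        if best != channel[i]:
            channel[i], changed = best, True
    if not changed:                              # no station wants to deviate
        break

print("Equilibrium channel assignment:", channel)
```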
Procedia PDF Downloads 156
224 A Proposal of a Method to Measure the Satisfaction Indicator of the Local Community Concerning Tourism: A Case Study of Jalapão State Park, Tocantins
Authors: Veruska C. Dutra, Mary L. G. S. Senna, Afonso R. Aquino
Abstract:
Tourism brings many benefits to a local community, encouraging it to be involved in that activity; however, it may also have detrimental effects, such as garbage, noise, violence, external cultural influence and damage to the natural environment, among others, which may promote community dissatisfaction. The contact between tourists and the local community is a concern, especially when the community is located near protected areas. In this case, the community must know the tourist destination well, so it can collaborate in tourism development without harming the environment. In this context, the present article aims to demonstrate the results of a research study conducted as part of a doctorate program in Sciences at the University of Sao Paulo, Brazil. Its objective was to elaborate a proposed methodology to measure a local community satisfaction indicator, applied to a case study of the Mateiros community located in the area surrounding the Parque Estadual do Jalapão (PEJ) conservation unit in the state of Tocantins, Brazil. This is a study of an interdisciplinary nature guided by the deductive method. The indicator result presented in this study pointed out as negative factors that there is no involvement between the local community and the tourism sector, and that there is dissatisfaction with regard to the town's basic services. The study showed as positive the local community's knowledge about the various attractions in the surrounding area and that the group recognizes the importance of tourism for the town and local life. Concerning the methodology used, the results showed that it can help in seeking actions for improvement and for involving the community in the planning and development of local tourism. It proves to be an efficient analysis tool, enabling the local community's point of view to be perceived.
Keywords: satisfaction indicator, tourism, community, Jalapão
Procedia PDF Downloads 335
223 Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄
Authors: Taslim Khan, Ray Hua Horng, Rajendra Singh
Abstract:
This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown via MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well-suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating its exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The photodetector also boasts a photo-to-dark current ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detectivity of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10⁻¹⁷ W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications. It offers a compelling combination of sensitivity, selectivity, and operational efficiency, making it a highly promising tool for environments requiring precise and reliable UV detection.
Keywords: wide bandgap, solar-blind photodetector, MOCVD, zinc gallate
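These figures of merit are linked by standard relations: PDCR = (I_photo − I_dark)/I_dark, shot-noise-limited NEP = √(2qI_dark)/R, and specific detectivity D* = R√A/√(2qI_dark). A hedged back-of-envelope, in which the photocurrent and active area are assumed values rather than data from the abstract:

```python
# Back-of-envelope for the standard photodetector figures of merit. Only
# responsivity (280 A/W) and dark current (~0.1 pA) come from the abstract;
# photocurrent and active area are assumed for illustration.
import math

q = 1.602e-19        # elementary charge, C
R = 280.0            # responsivity, A/W (reported)
I_dark = 0.1e-12     # dark current, A (reported)
I_photo = 1.0e-6     # photocurrent under UV, A (assumed)
area_cm2 = 1.0e-4    # active area, cm^2 (assumed)

pdcr = (I_photo - I_dark) / I_dark                 # photo-to-dark ratio
noise_i = math.sqrt(2 * q * I_dark)                # shot noise, A/Hz^0.5
d_star = R * math.sqrt(area_cm2) / noise_i         # Jones
nep = noise_i / R                                  # W/Hz^0.5

print(f"PDCR ~ 10^{math.log10(pdcr):.0f}")
print(f"D* ~ {d_star:.1e} Jones, NEP ~ {nep:.1e} W/Hz^0.5")
```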
Procedia PDF Downloads 39
222 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC). The MAC is derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of the structural member. The ‘experimental’ data are simulated by finite element modelling. The error due to experimental measurement is introduced into the synthetic ‘experimental’ data by adding random noise following a Gaussian distribution. The efficiency and robustness of this method are explained through three examples: a truss, a beam and a frame problem. The results show that the TLBO algorithm is efficient in detecting the damage location as well as the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
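The heart of such model updating is the error function itself. The sketch below computes the MAC between mode-shape pairs and a combined frequency/MAC residual of the kind TLBO would minimize; the modal data are toy values, and the TLBO loop is omitted:

```python
# Toy modal error function: frequency residuals plus (1 - MAC) terms. Any
# optimizer (TLBO in the paper) would minimize this over stiffness candidates.
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

def modal_error(freq_exp, freq_fem, modes_exp, modes_fem):
    f_term = np.sum(((freq_exp - freq_fem) / freq_exp) ** 2)
    m_term = sum(1.0 - mac(pe, pf) for pe, pf in zip(modes_exp, modes_fem))
    return f_term + m_term

rng = np.random.default_rng(0)
freq_fem = np.array([12.1, 33.4, 61.8])               # Hz, undamaged FEM (toy)
modes_fem = [rng.normal(size=6) for _ in range(3)]
freq_exp = freq_fem * 0.97                            # damage lowers frequencies
modes_exp = [m + rng.normal(0, 0.02, 6) for m in modes_fem]  # Gaussian noise

err = modal_error(freq_exp, freq_fem, modes_exp, modes_fem)
print(f"Error to minimize over stiffness-reduction candidates: {err:.4f}")
```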
Procedia PDF Downloads 215
221 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, in the literature there are not enough missile aerodynamic parameter identification studies, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) missile flight test numbers and flight durations are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is higher than that of fixed-wing aircraft. In addition to these challenges, identifying aerodynamic parameters at high wind angles with classical estimation techniques brings another difficulty to the estimation process. The reason is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using Artificial Neural Networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification
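A minimal version of the idea is a regression network that maps flight variables to an aerodynamic coefficient, trained on simulated data corrupted with noise. The stand-in aerodynamic function below is invented for illustration and is not the study's 6-DOF database:

```python
# Regression-network sketch: (Mach, angle of attack) -> normal-force
# coefficient, trained on noisy simulated data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
mach = rng.uniform(1.2, 4.0, n)              # supersonic regime
alpha = rng.uniform(-20, 20, n)              # deg, includes high wind angles

cn_true = 0.05 * alpha + 0.002 * alpha ** 3 / (1 + mach) + 0.1 * np.sin(mach)
cn_meas = cn_true + rng.normal(0, 0.05, n)   # measurement noise

X = np.column_stack([mach, alpha])
X_tr, X_te, y_tr, y_te = train_test_split(X, cn_meas, random_state=0)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=2000, random_state=0))
net.fit(X_tr, y_tr)
print(f"Held-out R^2: {net.score(X_te, y_te):.3f}")
```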
Procedia PDF Downloads 279
220 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between their characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models such as the CNN (Convolutional Neural Network) and the Visual Attention neural network (VAN) were used to experiment with the dataset. A Visual Attention neural network (VAN) architecture was adopted, incorporating additional channels for Canny edge features, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when tested with these languages. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
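The edge-feature preprocessing amounts to computing a Canny map and stacking it with the grayscale image as an additional input channel. A short sketch; the sigma value is illustrative, and a stock test image stands in for a character sample:

```python
# Canny edge map stacked with the grayscale image as a second input channel.
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.feature import canny

gray = rgb2gray(data.astronaut()).astype(np.float32)   # stand-in image
edges = canny(gray, sigma=1.5).astype(np.float32)

x = np.stack([gray, edges], axis=-1)   # (H, W, 2) tensor for a 2-channel CNN/VAN
print(x.shape)
```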
Procedia PDF Downloads 53
219 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
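Greedy layer-wise pretraining trains one hidden layer at a time, each with a temporary output head and all earlier layers frozen, before fine-tuning the whole network. A PyTorch sketch under assumed sizes and toy data (the real model maps DRM operating conditions to the six outputs listed):

```python
# Greedy layer-wise pretraining sketch; sizes, epochs and target are assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 5)                        # e.g. DRM operating conditions
y = X.sum(dim=1, keepdim=True).sqrt()         # stand-in for a syngas output

dims = [5, 64, 64, 64]
layers = [nn.Sequential(nn.Linear(i, o), nn.ReLU())
          for i, o in zip(dims[:-1], dims[1:])]
loss_fn = nn.MSELoss()

for k in range(len(layers)):                  # greedy phase, one layer at a time
    probe = nn.Linear(dims[k + 1], 1)         # temporary output head
    opt = torch.optim.Adam(list(layers[k].parameters()) +
                           list(probe.parameters()), lr=1e-3)
    for _ in range(200):
        with torch.no_grad():                 # earlier layers stay frozen
            h = X
            for frozen in layers[:k]:
                h = frozen(h)
        loss = loss_fn(probe(layers[k](h)), y)
        opt.zero_grad(); loss.backward(); opt.step()

model = nn.Sequential(*layers, nn.Linear(dims[-1], 1))   # full fine-tuning
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(200):
    loss = loss_fn(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE: {loss.item():.4f}")
```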
Procedia PDF Downloads 86
218 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that obliterates the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce the component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
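A stripped-down sketch of an RNN for competing risks: an LSTM encodes the covariate sequence, and a softmax head emits per-interval cause-specific probabilities, from which survival and cumulative incidence follow. This omits the RIW attention and the external autoencoder, so it is a simplification of the idea behind CmpXRnnSurv_AE, not a reimplementation:

```python
# Simplified discrete-time competing-risks RNN (no RIW attention, no
# external autoencoder).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_subj, n_steps, n_feat, n_risks = 4, 10, 8, 2

lstm = nn.LSTM(n_feat, 32, batch_first=True)
head = nn.Linear(32, n_risks + 1)             # two causes + event-free class

x = torch.rand(n_subj, n_steps, n_feat)       # covariate sequences
h, _ = lstm(x)
probs = torch.softmax(head(h), dim=-1)        # per-step conditional probabilities

surv = torch.cumprod(probs[..., -1], dim=1)                    # S(t)
prev_surv = torch.cat([torch.ones(n_subj, 1), surv[:, :-1]], dim=1)
cif = torch.cumsum(probs[..., :n_risks] * prev_surv.unsqueeze(-1), dim=1)
print(cif.shape)   # (4, 10, 2): CIF per subject, time step and competing risk
```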
Procedia PDF Downloads 89
217 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage in a structure changes its stiffness, so this method determines damage based on changes in the structural stiffness parameter. Changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems. The optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single and multiple damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; if the best result is obtained, it means that the method is reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple damage scenarios yields efficient results. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
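A toy version of the identification loop: the unknowns are per-element stiffness reduction factors, and the objective is the misfit between noisy 'measured' static displacements and the model prediction. A serial spring chain stands in for the plane truss, and a basic real-coded GA stands in for the paper's algorithm:

```python
# Toy GA identification of stiffness-reduction factors from static data.
import numpy as np

rng = np.random.default_rng(0)
n_el, k0 = 5, 1000.0
F = np.full(n_el, 10.0)                          # nodal static forces

def displacements(alpha):
    k = k0 * (1.0 - alpha)                       # damaged stiffnesses
    axial = np.cumsum(F[::-1])[::-1]             # internal force per element
    return np.cumsum(axial / k)                  # nodal displacements

true_alpha = np.array([0.0, 0.3, 0.0, 0.0, 0.1])
u_meas = displacements(true_alpha) + rng.normal(0, 1e-4, n_el)  # noisy data

def fitness(alpha):
    return -np.sum((displacements(alpha) - u_meas) ** 2)

pop = rng.uniform(0, 0.5, (60, n_el))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-30:]]      # truncation selection
    children = (parents[rng.integers(0, 30, 60)] +
                parents[rng.integers(0, 30, 60)]) / 2   # arithmetic crossover
    pop = np.clip(children + rng.normal(0, 0.02, children.shape), 0, 0.9)

best = pop[np.argmax([fitness(p) for p in pop])]
print("Identified damage factors:", best.round(2))  # ideally near [0, .3, 0, 0, .1]
```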
Procedia PDF Downloads 237
216 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa
Authors: Davies Obaromi, Qin Yongsong, James Ndege
Abstract:
To interpolate scattered or regularly distributed data, there are imprecise or exact methods. However, some of these methods can be used for interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns using 3-D curve fitting techniques, especially biharmonic splines, which can easily suppress noise by seeking a least-squares fit rather than exact interpolation. The datasets are generally represented as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become the conventional method for its high precision, simplicity and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, consistent with spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage over most traditional GIS methods of geospatial analysis that it can be evaluated at arbitrary locations rather than only on a rectangular grid.
Keywords: linear, biharmonic splines, tuberculosis, South Africa
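The MATLAB fits reported here have a close Python analogue: a thin-plate-spline radial basis function surface with a smoothing term, which seeks a least-squares fit rather than exact interpolation. A sketch on synthetic 'TB count' triplets; all data below are invented:

```python
# Smoothed thin-plate-spline surface over scattered XYZ triplets.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (200, 2))                       # spatial coordinates
z = 50 * np.sin(xy[:, 0] / 20) + rng.normal(0, 5, 200)   # noisy 'TB counts'

spline = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=10.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_smooth = spline(grid).reshape(gx.shape)    # surface for contour plotting
print(z_smooth.shape)                        # (50, 50)
```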
Procedia PDF Downloads 238
215 Evaluation of Mechanical Behavior of Laser Cladding in Various Tilting Pad Bearing Materials
Authors: Si-Geun Choi, Hoon-Jae Park, Jung-Woo Cho, Jin-Ho Lim, Jin-Young Park, Joo-Young Oh, Jae-Il Jeong, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim
Abstract:
The tilting pad bearing is a kind of fluid film bearing that can deliver high-speed and high-load performance compared to other bearings, including rolling element bearings. Furthermore, the tilting pad bearing has many advantages, such as high stability at high speed, long life, high damping, high impact resistance and low noise. Therefore, it is mostly used in mid-to-large-size turbomachines, despite its price disadvantage. Recently, with manufacturing processes employing laser techniques advancing at a fast-growing rate in the mechanical industry, dissimilar metal welding employing laser techniques has been actively studied. Industry also tries to weld the white metal to the backing metal using the laser cladding method for high durability. Preceding research has shown that the laser cladding method offers much better bond strength, toughness, abrasion resistance and environmental friendliness than the centrifugal casting method. Therefore, the laser cladding method provides better quality, cost reduction, eco-friendliness and technological permanence than the centrifugal or gravity casting methods. In this study, we compare the mechanical properties of different bearing materials by evaluating the behavior of the laser cladding layer with various materials (i.e., SS400, SCM440, S20C) under the same parameters. Furthermore, we analyze the porosity of the various tilting pad bearing materials on which white metal was deposited. SEM and EDS analyses and hardness tests of the three materials are presented to understand their mechanical properties and tribological behavior. W/D ratio and surface roughness results for the various materials are also reported in this study.
Keywords: laser cladding, tilting pad bearing, white metal, mechanical properties
Procedia PDF Downloads 379
214 Analysis of Real Time Seismic Signal Dataset Using Machine Learning
Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.
Abstract:
Due to the closeness between seismic signals and non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, the recursive short-term average/long-term average (STA/LTA), and the Carl STA/LTA trigger for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). A wideband seismic signal from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively, makes up the experimental dataset.
Keywords: Carl STA/LTA, features extraction, real time, dataset, machine learning, seismic detection
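The recursive STA/LTA is a pair of exponential moving averages of signal energy whose ratio spikes at an event onset. A minimal sketch with illustrative window lengths and trigger ratio; production code would typically use tuned routines such as those in ObsPy:

```python
# Minimal recursive STA/LTA trigger on a synthetic record.
import numpy as np

def recursive_sta_lta(x, n_sta, n_lta):
    cf = x ** 2                                  # characteristic function
    sta = lta = np.mean(cf[:n_lta]) + 1e-10      # warm start avoids false spikes
    c_sta, c_lta = 1.0 / n_sta, 1.0 / n_lta
    ratio = np.zeros_like(x)
    for i, v in enumerate(cf):
        sta += c_sta * (v - sta)                 # short-term average
        lta += c_lta * (v - lta)                 # long-term average
        ratio[i] = sta / lta
    return ratio

rng = np.random.default_rng(0)
fs = 100                                         # Hz
x = rng.normal(0, 1.0, 60 * fs)
x[3000:3400] += 8 * np.sin(np.linspace(0, 40 * np.pi, 400))   # buried 'event'

r = recursive_sta_lta(x, n_sta=int(0.5 * fs), n_lta=10 * fs)
hits = np.where(r > 3.0)[0]                      # illustrative trigger ratio
print(f"First trigger at t = {hits[0] / fs:.2f} s" if hits.size else "No event")
```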
Procedia PDF Downloads 124
213 Spatial and Time Variability of Ambient Vibration H/V Frequency Peak
Authors: N. Benkaci, E. Oubaiche, J.-L. Chatelain, R. Bensalem, K. Abbes
Abstract:
The ambient vibration H/V technique is widely used nowadays in microzonation studies because of its easy field handling and its low cost compared to other geophysical methods. However, in the presence of complex geology or lateral heterogeneity, evidenced by more than one peak frequency in the H/V curve, it is difficult to interpret the results, especially when soil information is lacking. In this work, we focus on the construction site of the Baraki 40,000-place stadium, located on the north-east side of the Mitidja basin (Algeria), to identify the seismic wave amplification zones. H/V curve analysis leads to the observation of spatial and time variability of the H/V frequency peaks. The spatial variability allows dividing the studied area into three main zones: (1) one with a predominant frequency around 1.5 Hz showing an important amplification level, (2) a second exhibiting two peaks, at 1.5 Hz and in the 4 Hz - 10 Hz range, and (3) a third zone characterized by a plateau between 2 Hz and 3 Hz. These H/V curve categories reveal considerable lateral heterogeneity dividing the stadium site roughly in the middle. Furthermore, continuous ambient vibration recording over several weeks shows that the first peak at 1.5 Hz in the second zone completely disappears between 2 am and 4 am and reaches its maximum amplitude around 12 am. Consequently, the anthropogenic noise source generating these important variations could be the Algiers Rocade Sud highway, located in the maximum-amplification azimuth direction of the H/V curves. This work points out that the H/V method is an important tool for performing nano-zonation studies prior to geotechnical and geophysical investigations, and that, in some cases, the H/V technique fails to reveal the resonance frequency in the absence of a strong anthropogenic source.
Keywords: ambient vibrations, amplification, fundamental frequency, lateral heterogeneity, site effect
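The H/V curve itself is simple to compute: average the horizontal amplitude spectra and divide by the vertical. The sketch below uses crude running-mean smoothing where practice would use Konno-Ohmachi smoothing and window rejection; the synthetic record plants a 1.5 Hz horizontal resonance to mirror the site's first peak:

```python
# Bare-bones H/V spectral ratio from three-component ambient noise.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100, 2 ** 14
t = np.arange(n) / fs
ns = rng.normal(0, 1, n) + 3 * np.sin(2 * np.pi * 1.5 * t)   # amplified NS
ew = rng.normal(0, 1, n) + 3 * np.cos(2 * np.pi * 1.5 * t)   # amplified EW
ud = rng.normal(0, 1, n)                                     # vertical

def smooth(a, w=65):                             # crude spectral smoothing
    return np.convolve(a, np.ones(w) / w, mode="same")

freqs = np.fft.rfftfreq(n, 1 / fs)
H = smooth(np.sqrt((np.abs(np.fft.rfft(ns)) ** 2 +
                    np.abs(np.fft.rfft(ew)) ** 2) / 2))
V = smooth(np.abs(np.fft.rfft(ud))) + 1e-12

band = (freqs > 0.2) & (freqs < 20)
f0 = freqs[band][np.argmax(H[band] / V[band])]
print(f"H/V peak frequency: {f0:.2f} Hz")    # ~1.5 Hz for this toy record
```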
Procedia PDF Downloads 237
212 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error
Authors: Seyedamir Makinejadsanij
Abstract:
One of the most important factors in the production of quality steel is knowing the exact weight of the steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. The moving object has a variable weight due to swinging, and the weighing system has an error of about ±5%. This means that when an object weighing about 80 tons is moved by a crane, the device (the Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest problem in calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained by considering 3 centers; compared to the simple average (one center) or two, four, five and six centers, the answer with 3 centers is the best, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show an accuracy of about 40 kg per 60 tons (the standard weight). As a result, with this method, the accuracy of the moving weight is calculated as 99.95%. K-means is used to calculate the exact mean for objects. The stopping criterion of the algorithm is 1000 iterations or no movement of points between the clusters. As a result of implementing this system, the crane operator does not stop while moving objects and continues his activity regardless of weight calculations. Also, production speed increased, and human error decreased.
Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem
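The clustering idea can be sketched directly: the stream of instantaneous readings oscillates around the true weight, so clustering it into three groups and keeping the middle center discards the swing-induced highs and lows. All numbers below are invented; max_iter mirrors the 1000-iteration stopping criterion:

```python
# Three-cluster k-means weight estimate from swinging load readings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
true_weight = 80_000.0                                   # kg
t = np.linspace(0, 60, 600)
swing = 3500 * np.sin(2 * np.pi * 0.3 * t) * np.exp(-t / 40)  # decaying sway
readings = true_weight + swing + rng.normal(0, 150, t.size)

km = KMeans(n_clusters=3, n_init=10, max_iter=1000, random_state=0)
km.fit(readings.reshape(-1, 1))

centers = np.sort(km.cluster_centers_.ravel())
estimate = centers[1]                                    # middle cluster
print(f"Estimated weight: {estimate:.0f} kg "
      f"(error {abs(estimate - true_weight):.0f} kg)")
```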
Procedia PDF Downloads 90
211 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed loop must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem such that the optimal trade-offs among these design goals are found. To the authors' best knowledge, this problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominancy concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) which represent the optimal trade-offs among the selected design criteria. The evaluation of the objective functions over the Pareto set is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems.
Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control
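The dominancy concept reduces to a pairwise comparison: a design is Pareto-optimal if no other design is at least as good on every objective and strictly better on one. A sketch with two invented objectives (tracking error versus control energy) standing in for the four criteria, and random sampling standing in for NSGA-II:

```python
# Dominance filter over sampled cascade-controller gain pairs.
import numpy as np

rng = np.random.default_rng(0)
gains = rng.uniform(0.1, 10.0, (300, 2))             # (inner Kp, outer Kp)

def objectives(g):
    ki, ko = g
    tracking_error = 1.0 / (ki * ko) + 0.05 * ko     # faster loops track better
    control_energy = 0.2 * ki ** 2 + 0.1 * ko ** 2   # but cost more energy
    return np.array([tracking_error, control_energy])

F = np.array([objectives(g) for g in gains])

def dominated(i):
    better_eq = np.all(F <= F[i], axis=1)
    strictly = np.any(F < F[i], axis=1)
    return np.any(better_eq & strictly)

pareto = [i for i in range(len(F)) if not dominated(i)]
print(f"{len(pareto)} non-dominated designs out of {len(F)} samples")
```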
Procedia PDF Downloads 152
210 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have driven remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR (mainly of the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy on resource-constrained devices. To relieve this problem, this paper introduces an optimized and highly efficient architecture for the SR-GAN (generator) model, utilizing model compression techniques such as knowledge distillation and pruning, which work together to reduce the storage requirements of the model and increase its performance. Our method begins by distilling the knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for ×4 super-resolution tasks.
Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
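The two compression stages can be outlined in PyTorch: a distillation loss blending softened teacher outputs with a hard target, followed by iterative L1 magnitude pruning. The tiny fully-connected models below are placeholders for the SRGAN generators, and the temperature, loss weight and pruning amounts are assumptions:

```python
# Two-stage compression sketch: distillation, then magnitude pruning.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T, alpha = 4.0, 0.7                              # temperature, soft-loss weight

for step in range(200):                          # stage 1: distillation
    x = torch.rand(32, 64)
    with torch.no_grad():
        t_out = teacher(x)
    s_out = student(x)
    soft = F.kl_div(F.log_softmax(s_out / T, dim=1),
                    F.softmax(t_out / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.mse_loss(s_out, x)                  # stand-in reconstruction target
    loss = alpha * soft + (1 - alpha) * hard
    opt.zero_grad(); loss.backward(); opt.step()

for _ in range(3):                               # stage 2: iterative pruning
    for m in student:
        if isinstance(m, nn.Linear):
            prune.l1_unstructured(m, name="weight", amount=0.06)
    # a short fine-tuning pass would go here between pruning rounds

zeros = sum(int((m.weight == 0).sum()) for m in student if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in student if isinstance(m, nn.Linear))
print(f"Sparsity after pruning: {zeros / total:.1%}")
```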
Procedia PDF Downloads 96
209 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy
Authors: J. Alvarez, J. Pimienta, R. Sarmiento
Abstract:
Diode lasers operating in free-running mode exhibit frequency shifts and line broadening determined by external factors such as temperature, current or mechanical vibrations, which makes them unsuitable for applications such as spectroscopy, metrology, and the cooling of atoms, among others. Different configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and use optical feedback, either with the help of a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows its working wavelength to be coarsely adjusted within a wide range, typically ~10 nm, by slightly varying the angle of the diffraction grating. Two configurations are commonly used for this purpose, the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction, and characterization of a compact extended laser cavity in the Littrow configuration. The designed cavity is compact and was machined from an aluminum block using computer numerical control (CNC); it has a mass of only 380 g. The design was tested on laser diodes with different wavelengths (650 nm, 780 nm, and 795 nm) but can be equally efficient at other wavelengths. This report details the results obtained from the extended cavity working at a wavelength of 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding rubidium D2 line. By modulating the current and with the help of phase detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated, which allowed the laser to be stabilized to a transition of the hyperfine structure of rubidium with a proportional-integral (PI) controller circuit made with precision operational amplifiers.
Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure
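The grating angle in a Littrow cavity follows from the condition mλ = 2d·sin θ, under which the first diffraction order retro-reflects into the diode. A quick check for an assumed 1800 lines/mm grating (the abstract does not state the groove density) at the three tested wavelengths:

```python
# Littrow condition m*lambda = 2*d*sin(theta), first order, assumed grating.
import math

d = 1e-3 / 1800                          # groove spacing, m (1800 lines/mm)

for wl_nm in (650, 780, 795):
    theta = math.degrees(math.asin(wl_nm * 1e-9 / (2 * d)))
    print(f"{wl_nm} nm -> Littrow angle {theta:.1f} deg")
# ~44.6 deg at 780 nm; small grating rotations around this angle provide the
# ~10 nm coarse tuning range mentioned above.
```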
Procedia PDF Downloads 227
208 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images
Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy
Abstract:
Contactless, painless and radiation-free thermal imaging technology is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the inexorable need to preserve the edges separating cancer cells from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted to appraise two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. The segmentation of the breast starts with an initial level-set function. In this study, 'marker' refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast, which may vary with breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. From the results, it was observed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms
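Anisotropic (Perona-Malik) diffusion smooths homogeneous regions while an edge-stopping function suppresses diffusion across strong gradients, which is why it preserves tumor boundaries better than a uniform Gaussian. A minimal sketch with illustrative parameters and a synthetic 'thermogram':

```python
# Minimal Perona-Malik anisotropic diffusion on a synthetic warm region.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # conduction is small where gradients are large (edges preserved)
        u += gamma * sum(np.exp(-(g / kappa) ** 2) * g for g in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(0)
thermogram = np.zeros((64, 64))
thermogram[20:40, 20:40] = 1.0                    # warm region with sharp edge
noisy = thermogram + rng.normal(0, 0.1, thermogram.shape)

smoothed = anisotropic_diffusion(noisy)
print(f"background std: {noisy[:16, :16].std():.3f} -> {smoothed[:16, :16].std():.3f}")
```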
Procedia PDF Downloads 380
207 Improvement of Thermal Comfort Conditions in an Urban Space "Case Study: The Square of Independence, Setif, Algeria"
Authors: Ballout Amor, Yasmina Bouchahm, Lacheheb Dhia Eddine Zakaria
Abstract:
Several studies around the world have been conducted on the urban heat island phenomenon, and according to the results obtained, one of the most important factors that influence this phenomenon is the mineralization of cities, which means the reduction of evaporative urban surfaces by replacing vegetation and wetlands with concrete and asphalt. The use of vegetation and water can change the urban environment, improve comfort and thus reduce the heat island. Trees act as a mask against sun, wind, and sound, and also as a source of humidity, which reduces the temperature of the air and surrounding surfaces. Water also acts as a buffer against noise; it is likewise a source of moisture and regulates temperature, not to mention its psychological effect on humans. Our main objective in this paper is to determine the impact of vegetation, ponds and fountains on the urban microclimate in general, and in particular on the thermal comfort of people along Independence Square in the Algerian city of Sétif, which has a semi-arid climate. In order to reach this objective, a comparative study between different scenarios has been carried out; the use of the ENVI-met program enabled us to model the urban environment of Independence Square and to study the possibility of improving comfort conditions by adding vegetation and water ponds. After studying the results obtained (temperature, relative humidity, wind speed, and the PMV and PPD indicators), the efficiency of the additions made to the square was confirmed, which supported our assumptions regarding comfort conditions on the studied site. Finally, we develop recommendations and solutions that may contribute to greater comfort in Independence Square.
Keywords: comfort in outdoor space, urban environment, scenarisation, vegetation, water ponds, public square, simulation
Procedia PDF Downloads 454
206 Building Transparent Supply Chains through Digital Tracing
Authors: Penina Orenstein
Abstract:
In today's world, particularly with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption and stay competitive. The concept of supply chain mapping is one where every process and route between each vendor and supplier is mapped in detail. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers, superimposed on a geographical map. The driving force behind this idea is the ability to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
Keywords: data mining, supply chain, empirical research, data mapping
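The layered, drill-down structure maps naturally onto a directed graph whose tiers are path depths from the focal company. A small sketch with fabricated supplier names; real maps would attach site coordinates as node attributes for the geographic overlay:

```python
# Supplier tiers as path depth in a directed sourcing graph.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("FocalCo", "Assembler1"), ("FocalCo", "Assembler2"),
                  ("Assembler1", "PartsCo"), ("Assembler2", "PartsCo"),
                  ("PartsCo", "RawMaterialCo")])

tiers = nx.single_source_shortest_path_length(g, "FocalCo")
for supplier, tier in sorted(tiers.items(), key=lambda kv: kv[1]):
    if tier:                                     # skip the focal company itself
        print(f"tier {tier}: {supplier}")
```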
Procedia PDF Downloads 174
205 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit
Authors: Khalil Ahmad Kakar
Abstract:
In developing countries, the increase in automobiles and low-capacity public transport (para-transit), which create congestion, pollution, noise, and traffic accidents, is one of the most critical quandaries. These issues are under analysis by assessors seeking to break down the puzzle and propose sustainable urban public transport systems. Kabul is one of those urban areas whose inhabitants suffer from the lack of a tolerable and friendly public transport system. The city is the most populous and is overcrowded, with a population of around 4.5 million. Para-transit is the only dominant public transit system, with a very poor level of service and low-capacity vehicles (6-20 passengers). Therefore, after detailed investigation, this study suggests a bus rapid transit (BRT) system for Kabul City. It is aimed at mitigating the role of informal transport and decreasing congestion. The research covers three parts. In the first part, aggregate (four-step) travel demand modelling is applied to determine the number of para-transit users and to assess the BRT network based on higher passenger demand for the public transport mode. In the second part, a stated preference (SP) survey and a binary logit model are employed to figure out the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the predicted BRT system on para-transit is evaluated. The extracted outcome, based on high travel demand, suggests a 10 km network for the proposed BRT system, originating in the tenth district and ending at Kabul International Airport. Likewise, the result of the disaggregate travel mode-choice model, based on the SP survey and the logit model, indicates that the predicted mass rapid transit system has higher utility, with a significant impact in terms of reducing para-transit.
Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model
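In the binary logit step, the probability of choosing BRT over para-transit follows from the difference of the two modes' utilities. A sketch with invented coefficients (the study estimates them from the SP survey):

```python
# Binary logit mode choice: P(BRT) from the utility difference.
import math

def p_brt(time_saved_min, fare_diff, asc=0.5, b_time=0.08, b_cost=-0.02):
    """P(BRT) = 1 / (1 + exp(-(V_BRT - V_para)))."""
    dV = asc + b_time * time_saved_min + b_cost * fare_diff
    return 1.0 / (1.0 + math.exp(-dV))

# Example: BRT saves 15 minutes but costs 10 currency units more
print(f"P(choose BRT) = {p_brt(15, 10):.2f}")   # ~0.82 with these toy values
```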
Procedia PDF Downloads 183
204 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria
Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar
Abstract:
Environmental sustainability, rather than being merely a trans-disciplinary and scientific issue, is the main problem characterizing all modern cities nowadays. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and increases in energy consumption and CO₂ emissions, which blemish the cities' landscape and might threaten citizens' health and welfare. As in other developing-world cities, the rapid growth of Algiers' population and the increase in city-scale phenomena eventually lead to increases in daily trips, energy consumption and CO₂ emissions. In addition, the lack of proper and sustainable planning of the city's infrastructure is one of the most relevant issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the Ecological Footprint Model (carbon footprint). In order to achieve this goal, the amount of CO₂ from fuel combustion has been calculated and aggregated into five sectors (agriculture, industry, residential, tertiary and transportation); as well, Algiers' biocapacity (CO₂-uptake land) has been calculated to determine the ecological overshoot. This study shows that Algiers' transport system is not sustainable and generates more than 50% of Algiers' total carbon footprint, which cannot be sequestered by the local forest land. The aim of this research is to show that the Carbon Footprint Assessment might be a relevant indicator for designing sustainable strategies/policies striving to reduce CO₂ by acting on energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
Keywords: biocapacity, carbon footprint, ecological footprint assessment, energy consumption
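The accounting skeleton is straightforward: sum sectoral CO₂, convert it to the forest area required to sequester it, and compare with local biocapacity. All numbers below are placeholders shaped to echo the study's finding that transport exceeds half the total; they are not the study's data:

```python
# Carbon footprint accounting skeleton: sectoral CO2 -> required forest
# area -> ecological overshoot. All values are assumed placeholders.
sector_emissions_t = {             # tCO2 per year by sector (assumed)
    "agriculture": 0.2e6,
    "industry": 1.5e6,
    "residential": 1.8e6,
    "tertiary": 0.9e6,
    "transportation": 4.6e6,
}
sequestration_rate = 2.7           # tCO2 absorbed per hectare of forest per year
biocapacity_ha = 0.4e6             # local CO2-uptake land (assumed)

total = sum(sector_emissions_t.values())
footprint_ha = total / sequestration_rate
overshoot = footprint_ha - biocapacity_ha

share = sector_emissions_t["transportation"] / total
print(f"Transport share: {share:.0%}")              # >50%, mirroring the finding
print(f"Ecological overshoot: {overshoot:,.0f} ha")
```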
Procedia PDF Downloads 145
203 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or on Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - Total Variation Minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not been properly addressed yet, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
Keywords: electron tomography, supported catalysts, nanometrology, error assessment
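The phantom-based accuracy assessment can be miniaturized as follows: build a known object, degrade it with noise, denoise or reconstruct, segment, and score the recovered magnitude against ground truth. In the sketch below, a simple total-variation denoiser stands in for the full CS-TVM tilt-series reconstruction:

```python
# Miniature phantom test: known object + noise -> TV denoise -> segment ->
# score the recovered volume fraction against ground truth.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
phantom = np.zeros((64, 64, 64))
phantom[24:40, 24:40, 24:40] = 1.0               # known 'particle'

noisy = phantom + rng.normal(0, 0.5, phantom.shape)
recon = denoise_tv_chambolle(noisy, weight=0.3)
segmented = recon > 0.5                          # threshold segmentation

true_frac, est_frac = phantom.mean(), segmented.mean()
print(f"volume fraction: true {true_frac:.4f}, estimated {est_frac:.4f}, "
      f"relative error {abs(est_frac - true_frac) / true_frac:.1%}")
```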
Procedia PDF Downloads 85
202 Comparing Occupants’ Satisfaction in LEED Certified Office Buildings and Non-LEED Certified Office Buildings: A Case Study of Office Buildings in Egypt and Turkey
Authors: Amgad A. Farghal, Dina I. El Desouki
Abstract:
Energy consumption and users' satisfaction were compared in three LEED-certified office buildings in Turkey and an office building in Egypt. The field studies were conducted in summer 2012. The environmental parameters measured in the four buildings were indoor air temperature, relative humidity, CO₂ percentage and light intensity. The traditional building is located in Smart Village in Abu Rawash, Cairo, Egypt; it was studied for 7 days, resulting in 84 responses. The three rated buildings are in Istanbul, Turkey. A Platinum LEED-certified office building owned by BASF gained a platinum certificate for new construction and major renovation; it was studied for 3 days, resulting in 13 responses. A Gold LEED-certified office building, also owned by BASF, gained a gold certificate for new construction and major renovation; it was studied for 2 days, resulting in 10 responses. A Silver LEED-certified office building owned by Unilever gained a silver certificate for commercial interiors; it was studied for 7 days, resulting in 84 responses. The results showed that the buildings had no significant difference regarding occupants' satisfaction with the amount of lighting, noise level, odor and access to the outdoor view. There was a significant difference between occupants' satisfaction in the LEED-certified buildings and the traditional building regarding the thermal environment and the perception of the general environment (colors, carpet and decoration). The findings suggest that careful design could lead to a certified building that enhances the thermal environment and the perception of the indoor environment, lowering energy consumption without sacrificing occupants' satisfaction.
Keywords: energy consumption, occupants’ satisfaction, rating systems, office buildings
Procedia PDF Downloads 419
201 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years, provided there is diagnosis, detection and prediction, which reduces the need for the many treatment options that carry the risk of invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are common for the early detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid and eccentricity) are taken for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifiers. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine the patient's condition as normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI
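The texture stage pairs GLCM statistics with a KNN decision. The sketch below extracts four common GLCM properties from synthetic ROIs of two texture classes and classifies them; the DWT features and the ANN staging level are omitted:

```python
# GLCM texture features + KNN on synthetic ROIs of two texture classes.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def glcm_features(roi):
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])

X, y = [], []
for label, sigma in ((0, 8), (1, 40)):           # 'normal' vs rougher 'abnormal'
    for _ in range(30):
        roi = np.clip(rng.normal(128, sigma, (32, 32)), 0, 255).astype(np.uint8)
        X.append(glcm_features(roi)); y.append(label)

knn = KNeighborsClassifier(n_neighbors=3).fit(X[:-10], y[:-10])
print("Predicted:", knn.predict(X[-10:]), "True:", y[-10:])
```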
Procedia PDF Downloads 153
200 Impact of Ventilation Systems on Indoor Air Quality in Swedish Primary School Classrooms
Authors: Sarka Langer, Despoina Teli, Blanka Cabovska, Jan-Olof Dalenbäck, Lars Ekberg, Gabriel Bekö, Pawel Wargocki, Natalia Giraldo Vasquez
Abstract:
The aim of the study was to investigate the impact of various ventilation systems on indoor climate, air pollution, chemistry, and perception. Measurements of the thermal environment and indoor air quality were performed in 45 primary school classrooms in Gothenburg, Sweden. The classrooms were grouped into three categories according to their ventilation system: category A) natural or exhaust ventilation or automated window opening; category B) balanced mechanical ventilation systems with constant air volume (CAV); and category C) balanced mechanical ventilation systems with variable air volume (VAV). A questionnaire survey about indoor air quality; perception of temperature, odour, noise and light; and sensations of well-being, alertness, focus, etc., was distributed among the 10-12-year-old children attending the classrooms. The results (medians) showed statistically significant differences between ventilation category A and categories B and C, but not between categories B and C, in air change rates, median concentrations of carbon dioxide, the individual volatile organic compounds formaldehyde and isoprene, indoor-to-outdoor ozone ratios, and products of the ozonolysis of squalene (a constituent of human skin oils), 6-methyl-5-hepten-2-one and decanal. The median ozone concentration and the ozone loss (the difference between outdoor and indoor ozone concentrations) differed only between categories A and C. The median concentration of total VOCs and a perception index based on survey responses about indoor perceptions and sensations were not significantly different. In conclusion, the ventilation systems had an impact on air change rates, indoor air quality, and chemistry, but the Swedish primary school children's perception did not differ with the ventilation system of the classroom.
Keywords: indoor air pollutants, indoor climate, indoor chemistry, air change rate, perception
Procedia PDF Downloads 62
199 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System
Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze
Abstract:
The antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety in such a way that vehicles are kept steerable and stable during emergency braking. This paper presents a wheel-slip-based intelligent controller with variable zero lag compensation for ABS. It is required to achieve very fast, accurate wheel slip tracking during hard braking and to eliminate chattering, with improved transient and steady-state performance, while shortening the stopping distance using an effective braking torque less than the maximum allowable torque to bring the braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 ms⁻¹ in a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS system without a controller. Simulation results indicated that the system without a controller was not able to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on fuzzy logic (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the performance of the FLC control variable by eliminating steady-state error and providing improved bandwidth to eliminate the effect of high-frequency noise such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snowy road surfaces, respectively. Generally, the proposed system used an effective braking torque that is less than the maximum allowable braking torque to achieve efficient wheel slip tracking and overall robust control performance on different road surfaces.
Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking
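The controlled variable throughout is the longitudinal wheel slip, s = (v − ωR)/v, which is 0 for free rolling and 1 for a locked wheel; the ABS modulates torque so s tracks a target near the friction peak. A tiny illustration with an assumed wheel radius:

```python
# Longitudinal wheel slip: 0 = free rolling, 1 = locked wheel. The target
# slip (often ~0.2, near the friction peak) is what the FLC tracks.
def wheel_slip(v, omega, r):
    return (v - omega * r) / max(v, 1e-6)

v, R = 30.0, 0.3          # m/s (speed used in the study), m (assumed radius)

for omega in (100.0, 80.0, 0.0):          # rad/s: free rolling -> locked
    print(f"omega = {omega:5.1f} rad/s -> slip = {wheel_slip(v, omega, R):.2f}")
```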
Procedia PDF Downloads 146
198 Development and Validation of Work Movement Task Analysis: Part 1
Authors: Mohd Zubairy Bin Shamsudin
Abstract:
Work-related Musculoskeletal Disorders (WMSDs) are one of the occupational health problems encountered by workers all over the world. In Malaysia, the trend has been increasing over the years, particularly in the manufacturing sectors. Current methods for observing workplace WMSDs are self-report questionnaires, observation and direct measurement. Observational methods are most frequently used by researchers and practitioners because they are simple, quick and versatile when applied at the worksite. However, some limitations have been identified, e.g., some approaches do not cover a wide spectrum of biomechanical activity and are not sufficiently sensitive to assess the actual risks. This paper elucidates the development of Work Movement Task Analysis (WMTA), an observational tool for industrial practitioners, especially untrained personnel, to assess WMSD risk factors and provide a basis for suitable intervention. The first stage of the development protocol involved literature reviews, a practitioner survey, tool validation and reliability testing. A total of six themes/comments were received in the face validity stage. The new revision of WMTA consists of four postural sections (neck, back, shoulder, arms, and legs) and associated risk factors: movement, load, coupling and basic environmental factors (lighting, noise, odor, heat and slippery floors). The inter-rater reliability study shows substantial agreement among raters, with κ = 0.70. Meanwhile, WMTA validation shows a significant association between WMTA scores and self-reported pain or discomfort for the back, shoulder & arms, and knee & legs (p < 0.05). This tool is expected to provide a new workplace ergonomic observational tool to assess WMSDs for the next stage of the case study.
Keywords: assessment, biomechanics, musculoskeletal disorders, observational tools
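The reported κ = 0.70 is Cohen's kappa, a chance-corrected agreement measure between two raters. A quick sketch with fabricated ratings that happen to reproduce the same value:

```python
# Cohen's kappa from two raters' WMTA risk ratings on the same ten jobs.
# Ratings are fabricated; they happen to reproduce kappa = 0.70.
from sklearn.metrics import cohen_kappa_score

rater_a = ["low", "high", "med", "med", "high", "low", "med", "high", "low", "med"]
rater_b = ["low", "high", "med", "low", "high", "low", "med", "med", "low", "med"]

print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")   # 0.70
```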
Procedia PDF Downloads 469