Search results for: adaptive and non-adaptive spectral estimation
2455 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task on the first dataset is protein regression, while the task on the second is variety classification. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
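Of the chemometric preprocessing steps listed above, standard normal variate is the simplest to illustrate: each spectrum is centered and scaled by its own mean and standard deviation. A minimal sketch (the row-per-spectrum layout is an assumption, not the authors' implementation):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: scale each spectrum (one row per sample)
    by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```

After SNV, every row has zero mean and unit standard deviation, which removes multiplicative scatter effects between samples.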
Procedia PDF Downloads 99
2454 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient Barycentric coordinate based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the Barycentric coordinate based PIT test and both range-based (received signal strength) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme achieves a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
Keywords: localization, trilateration, triangulation, wireless sensor networks
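The trilateration step mentioned above (estimating a position from anchor coordinates and RSSI-derived distances) can be sketched as a linear least-squares problem; this is a generic textbook formulation under assumed 2D coordinates, not the paper's exact algorithm:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2D position from anchor coordinates and distance
    estimates by linearizing the circle equations against the first
    anchor: 2*(a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy RSSI-derived distances and more than three anchors, the least-squares solution averages out part of the ranging error.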
Procedia PDF Downloads 467
2453 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI
Authors: Zahra Alipour, Amirreza Moheb Afzali
Abstract:
In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
Keywords: YOLOv8, MediaPipe, finger tracking, joint estimation, human-computer interaction (HCI)
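The IoU threshold behind the mAP figure above is the intersection-over-union of a predicted and a ground-truth box; a minimal sketch (the corner format [x1, y1, x2, y2] is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as a true positive at threshold 0.7 only when its IoU with a ground-truth box is at least 0.7.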
Procedia PDF Downloads 5
2452 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS for creating the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow
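Recursive load flow on a radial MV feeder is typically computed with a backward/forward sweep: load currents are accumulated from the feeder end back to the source, then bus voltages are updated from the source outward until convergence. A minimal single-feeder sketch in per-unit (an illustration of the general technique, not the SGC implementation):

```python
import numpy as np

def bfs_load_flow(z_lines, s_loads, v_source=1.0 + 0j, tol=1e-8, max_iter=50):
    """Backward/forward sweep on a simple radial feeder (bus i fed by bus i-1).
    z_lines[k]: series impedance of the line into bus k (p.u.)
    s_loads[k]: complex power demand at bus k (p.u.)"""
    n = len(s_loads)
    v = np.full(n, v_source, dtype=complex)
    for _ in range(max_iter):
        # backward sweep: load currents I = conj(S / V), accumulated upstream
        i_load = np.conj(np.asarray(s_loads) / v)
        i_branch = np.cumsum(i_load[::-1])[::-1]
        # forward sweep: voltage drops from the source outward
        v_new = np.empty(n, dtype=complex)
        upstream = v_source
        for k in range(n):
            upstream = upstream - z_lines[k] * i_branch[k]
            v_new[k] = upstream
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

The sweep converges in a few iterations on typical distribution feeders because the voltage magnitudes stay close to nominal.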
Procedia PDF Downloads 110
2451 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources
Authors: Mustafa Alhamdi
Abstract:
Industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by a neural network is nonlinear, involving a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The findings show the ability to improve classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
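The spectrogram preprocessing described above (a windowed Fourier transform of the detector signal) can be sketched with SciPy; the sampling rate, window, and the toy sinusoid standing in for a detector trace are illustrative assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 10_000                                  # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
trace = np.sin(2 * np.pi * 1_000 * t)        # toy 1 kHz component standing in for a pulse train
trace += 0.1 * np.random.default_rng(0).normal(size=t.size)  # simulated readout noise

# The Hann window limits spectral leakage; nperseg sets the
# time/frequency resolution trade-off (here ~39 Hz per bin).
f, seg_t, Sxx = spectrogram(trace, fs=fs, window="hann", nperseg=256)
peak_freq = f[np.argmax(Sxx.mean(axis=1))]
```

The resulting time-frequency image `Sxx` is what would be fed to the classifier as a training sample.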
Procedia PDF Downloads 150
2450 Boosting Profits and Enhancement of Environment through Adsorption of Methane during Upstream Processes
Authors: Sudipt Agarwal, Siddharth Verma, S. M. Iqbal, Hitik Kalra
Abstract:
Natural gas as a fuel has created wonders, but on the contrary, the ill effects of methane have been a great worry for professionals. The largest source of methane emissions among all industries is the oil and gas industry. Methane contaminates groundwater and, being a greenhouse gas, has devastating effects on the atmosphere too. Methane remains in the atmosphere for a decade or two before breaking down into carbon dioxide, and thus damages it immensely: it warms the atmosphere about 72 times more than carbon dioxide over those two decades and keeps on harming after breaking down into carbon dioxide afterward. The property of a fluid to adhere to the surface of a solid, better known as adsorption, can be a great boon to minimize the harm caused by methane. Adsorption of methane during upstream processes can prevent the groundwater and atmospheric degradation around the site, which can be hugely lucrative in recovering profits otherwise reduced by environmental degradation leading to project cancellation. The paper deals with the reasons why casing and cementing are not able to prevent leakage, and suggests methods to adsorb methane during upstream processes, with a mathematical explanation using volumetric analysis of methane adsorption on the surface of activated carbon doped with copper oxides (which increases the adsorption by 54%). The paper explains in detail, through a cost estimation, how the proposed idea can be hugely beneficial not only to the environment but also to the profits earned.
Keywords: adsorption, casing, cementing, cost estimation, volumetric analysis
Procedia PDF Downloads 191
2449 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation
Authors: Benson Ade Eniola Afere
Abstract:
This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation
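For context, the estimator that any such kernel plugs into is the standard kernel density estimate f̂(x) = (1/(n·h)) Σᵢ K((x − Xᵢ)/h). A sketch with a classical second-order Epanechnikov kernel (the paper's fourth-order hybrid beta kernels are not reproduced here):

```python
import numpy as np

def epanechnikov(u):
    """Second-order Epanechnikov kernel, supported on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x_grid, data, h):
    """Kernel density estimate f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    data = np.asarray(data, dtype=float)
    u = (np.asarray(x_grid)[:, None] - data[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (data.size * h)
```

Swapping in a fourth-order kernel changes only the function K; the bandwidth h is then typically chosen by minimizing the kernel's AMISE expression.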
Procedia PDF Downloads 70
2448 Adaptive Responses of Carum copticum to in vitro Salt Stress
Authors: R. Razavizadeh, F. Adabavazeh, M. Rezaee Chermahini
Abstract:
Salinity is one of the most widespread agricultural problems in arid and semi-arid areas, limiting plant growth and crop productivity. In this study, the effects of salt stress on the protein, reducing sugar and proline contents and the antioxidant enzyme activities of Carum copticum L. were studied under in vitro conditions. Seeds of C. copticum were cultured in Murashige and Skoog (MS) medium containing 0, 25, 50, 100 and 150 mM NaCl, and calli were cultured in MS medium containing 1 μM 2,4-dichlorophenoxyacetic acid, 4 μM benzyl amino purine and different levels of NaCl (0, 25, 50, 100 and 150 mM). After NaCl treatment for 28 days, the proline and reducing sugar contents of shoots, roots and calli increased significantly in relation to the severity of the salt stress. The highest amounts of proline and carbohydrate were observed at 150 and 100 mM NaCl, respectively. Reducing sugar accumulation was highest in shoots as compared to roots, whereas proline contents did not show any significant difference between roots and shoots under salt stress. The results showed a significant reduction of protein contents in seedlings and calli; proteins extracted from the shoots, roots and calli of C. copticum treated with 150 mM NaCl showed the lowest contents. Positive relationships were observed between the activity of antioxidant enzymes and the increase in stress levels: catalase, ascorbate peroxidase and superoxide dismutase activity increased significantly under salt concentrations in comparison to the control. These results suggest that the accumulation of proline and sugars and the activation of antioxidant enzymes play adaptive roles in the adaptation of seedlings and calli of C. copticum to saline conditions.
Keywords: antioxidant enzymes, Carum copticum, organic solutes, salt stress
Procedia PDF Downloads 281
2447 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. For the Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
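The activity calculation that such an efficiency feeds into follows the standard counting relation A = N / (ε · I_γ · t). A sketch with illustrative numbers (not the paper's results):

```python
def activity_bq(net_counts, efficiency, gamma_yield, live_time_s):
    """Activity in Bq from net peak counts, full-energy-peak efficiency,
    gamma emission probability, and counting live time."""
    return net_counts / (efficiency * gamma_yield * live_time_s)

# Illustrative numbers for the 662 keV peak of Cs-137 (I_gamma ~ 0.851);
# the efficiency value is an assumed placeholder.
a = activity_bq(net_counts=85_100, efficiency=0.01,
                gamma_yield=0.851, live_time_s=1_000)
```

The 9% efficiency error quoted above for the 59 keV line propagates directly into a 9% error on the computed activity, since A is inversely proportional to ε.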
Procedia PDF Downloads 116
2446 Advanced Mouse Cursor Control and Speech Recognition Module
Authors: Prasad Kalagura, B. Veeresh kumar
Abstract:
We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis. The clicking of the mouse is activated by the user's eye blinking, detected through a sensor. The keyboard function is implemented by a voice recognition kit.
Keywords: embedded ARM7 processor, mouse pointer control, voice recognition
Procedia PDF Downloads 578
2445 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods
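As background for the fractional derivatives involved, the classical Grünwald-Letnikov discretization approximates an order-α derivative on a uniform grid from weighted function history. A minimal sketch of this generic scheme (it is not the paper's hybrid block method):

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative on a
    uniform grid: D^alpha f(t_n) ~ h**(-alpha) * sum_j c_j * f(t_n - j*h),
    with weights c_0 = 1 and c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    f_vals = np.asarray(f_vals, dtype=float)
    n = f_vals.size
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for k in range(n):
        # sum over the full history f(t_k), f(t_{k-1}), ..., f(t_0)
        out[k] = np.dot(c[: k + 1], f_vals[k::-1]) / h**alpha
    return out
```

For alpha = 1 the weights reduce to (1, -1, 0, 0, ...) and the scheme collapses to the ordinary backward difference, which is a convenient sanity check.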
Procedia PDF Downloads 43
2444 Enhancing Power System Resilience: An Adaptive Under-Frequency Load Shedding Scheme Incorporating PV Generation and Fast Charging Stations
Authors: Sami M. Alshareef
Abstract:
In the rapidly evolving energy landscape, the integration of renewable energy sources and the electrification of transportation are essential steps toward achieving sustainability goals. However, these advancements introduce new challenges, particularly in maintaining frequency stability due to variable photovoltaic (PV) generation and the growing demand for fast charging stations. The variability of PV generation due to weather conditions can disrupt the balance between generation and load, resulting in frequency deviations. To ensure the stability of power systems, it is imperative to develop effective under-frequency load-shedding schemes. This research presents an adaptive under-frequency load shedding scheme based on the power swing equation, designed explicitly for the IEEE 9-bus test system including PV generation and fast charging stations. The scheme dynamically disconnects fast charging stations based on power imbalances, prioritizing the disconnection of stations near affected areas to expedite system frequency stabilization. To achieve these goals, the work leverages the power swing equation, a widely recognized model for analyzing system dynamics during under-frequency events. By utilizing this equation, the proposed scheme adaptively adjusts the load-shedding process in real time to maintain frequency stability and prevent power blackouts. The findings support the transition towards sustainable energy systems by ensuring a reliable and uninterrupted electricity supply while enhancing the resilience and stability of power systems during under-frequency events.
Keywords: load shedding, fast charging stations, PV generation, power system resilience
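The swing equation underlying such a scheme, df/dt = f_nom · ΔP / (2H) in per-unit on the system base, can be sketched in a toy Euler simulation where the fast-charging-station block is dropped once frequency crosses a threshold; the inertia constant, threshold, and power figures here are illustrative assumptions, not the paper's test-system data:

```python
F_NOM = 60.0    # Hz, assumed nominal frequency
H = 5.0         # s, assumed aggregate inertia constant
SHED_AT = 59.3  # Hz, illustrative under-frequency threshold

def simulate(p_gen, p_load, p_fcs, dt=0.01, steps=2000):
    """Euler integration of df/dt = f_nom * (p_gen - p_load) / (2 * H),
    all powers in p.u.; the fast-charging-station block p_fcs is
    disconnected once f drops below SHED_AT."""
    f, shed = F_NOM, False
    for _ in range(steps):
        if not shed and f < SHED_AT:
            p_load -= p_fcs  # adaptive step: drop the charging stations
            shed = True
        f += dt * F_NOM * (p_gen - p_load) / (2.0 * H)
    return f, shed

# 0.05 p.u. generation deficit until the charging stations are shed
final_f, shed = simulate(p_gen=1.00, p_load=1.05, p_fcs=0.05)
```

With the deficit exactly matched by the shed block, the frequency decline arrests at the threshold instead of collapsing further.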
Procedia PDF Downloads 81
2443 Study the Dynamic Behavior of Irregular Buildings by the Analysis Method Accelerogram
Authors: Beciri Mohamed Walid
Abstract:
Architectural requirements often impose shapes that lead to an irregular distribution of masses, rigidities and resistances. The main object of the present study is to estimate the influence of the irregularity, both in plan and in elevation, that some structures present on their dynamic characteristics and behavior. To do this, it is necessary to apply both dynamic methods proposed by RPA99 (the spectral modal method and the method of analysis by accelerogram) to certain similar prototypes, to analyze the parameters measuring the response of these structures, and to proceed to a comparison of the results.
Keywords: structure, irregular, code, seismic, method, force, period
Procedia PDF Downloads 310
2442 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images. Then, maximum PSD values were extracted as features from the model. There are three indexes for the balanced brain: index 3, index 4 and index 5. There are significant differences in the EEG signals across the brain balancing index (BBI). The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classification result is 88.46% accuracy. These results prove that k-NN can be used to predict the brain balancing index.
Keywords: power spectral density, 3D EEG model, brain balancing, k-NN
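A minimal k-NN classifier of the kind compared above (Euclidean distance, majority vote over the k nearest training samples) can be sketched as follows; the toy feature vectors stand in for the extracted PSD features:

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify one feature vector by majority vote of its k nearest
    training samples under Euclidean distance."""
    train_x = np.asarray(train_x, dtype=float)
    dists = np.linalg.norm(train_x - np.asarray(query, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(np.asarray(train_y)[nearest])
    return votes.most_common(1)[0][0]
```

In the paper's setting, `train_y` would hold the balancing indexes (3, 4 or 5) and `k` is the hyperparameter being compared.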
Procedia PDF Downloads 487
2441 Crosslinking of Unsaturated Elastomers in Presence of Aromatic Chlorine-Containing Compounds
Authors: Shiraz M. Mammadov, Elvin M. Aliyev, Adil A. Garibov
Abstract:
The role of disulfochloride benzene in the unsaturated rubbers SKIN and SKN-26, i.e., in the systems SKIN + disulfochloride benzene and SKN-26 + disulfochloride benzene, was studied under radiation exposure. Using physical, chemical and spectral methods, the changes in the molecular structure of the rubber were shown after irradiation by γ-rays at 300 kGy. The yields and the emergence of crosslinking in the elastomers for each system were determined as a function of absorbed dose. It is suggested that the mechanism of radiation crosslinking occurs by the heterogeneous transformation of the elastomers in the presence of disulfochloride benzene.
Keywords: acrylonitrile-butadiene rubber, crosslinking, polyfunctional monomers, radiation, sensitizer, vulcanization
Procedia PDF Downloads 449
2440 Understanding and Explaining Urban Resilience and Vulnerability: A Framework for Analyzing the Complex Adaptive Nature of Cities
Authors: Richard Wolfel, Amy Richmond
Abstract:
Urban resilience and vulnerability are critical concepts in the modern city due to the increased sociocultural, political, economic, demographic, and environmental stressors that influence current urban dynamics. Urban scholars face three main challenges in explaining urban resilience and vulnerability. First, cities are dominated by people, who are challenging to model from both an explanatory and a predictive perspective. Second, urban regions are highly recursive in nature: they not only influence human action, but their structures are constantly changing due to human actions. As a result, explanatory frameworks must continuously evolve as humans influence and are influenced by the urban environment in which they operate. Finally, modern cities have populations, sociocultural characteristics, economic flows, and environmental impacts orders of magnitude beyond the cities of the past. As a result, frameworks that seek to explain the various functions of a city that influence urban resilience and vulnerability must address the complex adaptive nature of cities and the interaction of many distinct factors. This project develops a taxonomy and framework for organizing and explaining urban vulnerability. The framework is built on a well-established political development model that includes six critical classes of urban dynamics: political presence, political legitimacy, political participation, identity, production, and allocation. In addition, the framework explores how environmental security and technology influence and are influenced by the six elements of political development. The framework aims to identify key tipping points in society that act as influential agents of urban vulnerability in a region.
This will help analysts and scholars predict and explain the influence of both physical and human geographical stressors in a dense urban area.
Keywords: urban resilience, vulnerability, sociocultural stressors, political stressors
Procedia PDF Downloads 116
2439 Three-Dimensional CFD Modeling of Flow Field and Scouring around Bridge Piers
Authors: P. Deepak Kumar, P. R. Maiti
Abstract:
In recent years, sediment scour near bridge piers and abutments has become a serious problem causing nationwide concern, because it has resulted in more bridge failures than any other cause. Scour is the formation of a scour hole around a structure mounted on or embedded in an erodible channel bed, due to the erosion of soil by flowing water. The formation of the scour hole around a structure depends upon the shape and size of the pier, the depth of flow, the angle of attack of the flow, and the sediment characteristics. The flow characteristics around these structures change due to the man-made obstruction in the natural flow path, which changes the kinetic energy of the flow around them. Excessive scour affects the stability of the foundation of the structure through the removal of bed material. Accurate estimation of the scour depth around a bridge pier is very difficult. The foundations of bridge piers have to be taken deeper to provide the anchorage length required for stability of the foundation. In this study, simulations using a 3D Computational Fluid Dynamics (CFD) model were conducted to examine the mechanism of scour around a cylindrical pier. Subsequently, the flow characteristics around these structures are presented for different flow conditions. The mechanism of the scouring phenomenon, the formation of vortices, and their consequent effects are discussed for a straight channel. Effort was made towards estimation of the scour depth around bridge piers under different flow conditions.
Keywords: bridge pier, computational fluid dynamics, multigrid, pier shape, scour
Procedia PDF Downloads 296
2438 Proactive Change or Adaptive Response: A Study on the Impact of Digital Transformation Strategy Modes on Enterprise Profitability From a Configuration Perspective
Authors: Jing Ma
Abstract:
Digital transformation (DT) is an important way for manufacturing enterprises to shape new competitive advantages, and choosing an effective DT strategy is crucial for enterprise growth and sustainable development. Rooted in strategic change theory, this paper incorporates the dimensions of managers' digital cognition, organizational conditions, and the external environment into the same strategic analysis framework, and integrates the dynamic QCA and PSM methods to study the antecedent configurations of the DT strategy modes of manufacturing enterprises and their impact on corporate profitability, based on data from listed manufacturing companies in China from 2015 to 2019. We find that the synergistic linkage of elements across the different dimensions can form six equivalent paths to high-level DT, which can be summarized as a resource-capability dominated proactive change mode and adaptive response modes such as industry-guided resource replenishment, capacity building under complex environments, market-industry synergy driving, and forced adaptation under peer pressure; the managers' digital cognition plays a non-essential but crucial role in this process. Except for individual differences in the market-industry synergy-driven mode, the modes are stable with respect to individual and temporal changes. However, it is worth noting that not all paths that result in high levels of DT contribute to enterprise profitability: only high-level DT that results from matching the optimization of internal conditions with the external environment, such as industry technology and macro policies, has a significant positive impact on corporate profitability.
Keywords: digital transformation, strategy mode, enterprise profitability, dynamic QCA, PSM approach
Procedia PDF Downloads 24
2437 Adaptive Certificate-Based Mutual Authentication Protocol for Mobile Grid Infrastructure
Authors: H. Parveen Begam, M. A. Maluk Mohamed
Abstract:
Mobile Grid Computing is an environment that allows sharing and coordinated use of diverse resources in dynamic, heterogeneous and distributed environment using different types of electronic portable devices. In a grid environment the security issues are like authentication, authorization, message protection and delegation handled by GSI (Grid Security Infrastructure). Proving better security between mobile devices and grid infrastructure is a major issue, because of the open nature of wireless networks, heterogeneous and distributed environments. In a mobile grid environment, the individual computing devices may be resource-limited in isolation, as an aggregated sum, they have the potential to play a vital role within the mobile grid environment. Some adaptive methodology or solution is needed to solve the issues like authentication of a base station, security of information flowing between a mobile user and a base station, prevention of attacks within a base station, hand-over of authentication information, communication cost of establishing a session key between mobile user and base station, computing complexity of achieving authenticity and security. The sharing of resources of the devices can be achieved only through the trusted relationships between the mobile hosts (MHs). Before accessing the grid service, the mobile devices should be proven authentic. This paper proposes the dynamic certificate based mutual authentication protocol between two mobile hosts in a mobile grid environment. The certificate generation process is done by CA (Certificate Authority) for all the authenticated MHs. Security (because of validity period of the certificate) and dynamicity (transmission time) can be achieved through the secure service certificates. 
The authentication protocol is built on communication services to provide cryptographically secured mechanisms for verifying the identity of users and resources. Keywords: mobile grid computing, certificate authority (CA), SSL/TLS protocol, secured service certificates
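To make the mutual-authentication structure concrete, here is a stdlib-only toy sketch of challenge-response mutual authentication. Note the simplification: the paper's protocol is certificate-based (public-key), whereas this illustration substitutes a hypothetical pre-shared session key, so it shows only the challenge-response flow between a mobile host (MH) and a base station, not the certificate handling itself.

```python
import hmac, hashlib, secrets

# Toy mutual authentication via HMAC challenge-response. The shared key
# stands in (hypothetically) for key material established through the
# certificate exchange; it is NOT how the paper derives keys.
SHARED_KEY = b"session-key-from-secure-certificate-exchange"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key by keyed-hashing the fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# 1. MH challenges the base station.
mh_challenge = secrets.token_bytes(16)
bs_response = respond(SHARED_KEY, mh_challenge)          # base station side
mh_ok = hmac.compare_digest(bs_response, respond(SHARED_KEY, mh_challenge))

# 2. Base station challenges the MH.
bs_challenge = secrets.token_bytes(16)
mh_response = respond(SHARED_KEY, bs_challenge)          # mobile host side
bs_ok = hmac.compare_digest(mh_response, respond(SHARED_KEY, bs_challenge))

mutual_auth = mh_ok and bs_ok   # True only if both sides hold the key
```

Fresh random challenges prevent replay of old responses, and `hmac.compare_digest` avoids timing side channels during verification.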
Procedia PDF Downloads 305
2436 Optical Properties of TlInSe₂ Single Crystals
Authors: Gulshan Mammadova
Abstract:
This paper presents the results of studying the surface microrelief in 2D and 3D models and analyzing the spectroscopy of a three-junction TlInSe₂. Keywords: optical properties, dielectric permittivity, real and imaginary dielectric permittivity, optical electrical conductivity
Procedia PDF Downloads 63
2435 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of Over Segmentation
Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga
Abstract:
Color and texture are the two most determinant elements for perception and recognition of the objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspections, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to better characterize images. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors separately measure different parts of the electromagnetic spectrum: the visible bands and even those that are invisible to the human eye. The amounts of light reflected by the earth in these spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G), and Blue (B) are then used in mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works have investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications like coastline detection, where the detection result is strongly dependent on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to show the best-performing color space for land-sea segmentation.
In this sense, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows segmentation of the land part from the sea. By analyzing the different results of this study, the HSV color space is found to yield the best classification performance when color and texture features are used, which is perfectly coherent with the results presented in the literature. Keywords: classification, coastline, color, sea-land segmentation
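A minimal sketch of the two ingredients combined above: converting an RGB pixel to the HSV color space (Python's stdlib `colorsys`) and a one-level Haar wavelet decomposition of a 1-D signal. The real pipeline would apply a 2-D Haar decomposition per HSV channel over image regions; this only illustrates the per-pixel conversion and the averaging/differencing step.

```python
import colorsys

def haar_level1(signal):
    """One Haar decomposition level: pairwise averages (approximation)
    and pairwise differences (detail), both scaled by 1/2."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# pure blue in RGB maps to hue 2/3, full saturation and value in HSV
h, s, v = colorsys.rgb_to_hsv(0.0, 0.0, 1.0)

# toy 1-D "row of pixel intensities"
approx, detail = haar_level1([4.0, 2.0, 6.0, 6.0])
# approx = [3.0, 6.0], detail = [1.0, 0.0]
```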
Procedia PDF Downloads 247
2434 Poverty Dynamics in Thailand: Evidence from Household Panel Data
Authors: Nattabhorn Leamcharaskul
Abstract:
This study aims to examine the determining factors of the dynamics of poverty in Thailand by using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-difference model. Households are categorized based on their experiences into five groups, namely chronically poor, falling into poverty, re-entering poverty, exiting poverty, and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors as well as unexpected events on the economic status of a household. It is found that remittances have a positive impact on a household's economic status in that they are likely to lower the probability of falling into, or being trapped in, poverty while they tend to increase the probability of exiting poverty. In addition, not only can receiving a secondary source of household income raise the probability of being a never poor household, but it also significantly increases household income per capita for the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty and consequently increase the chance for households to escape from poverty. Keywords: difference in difference, dynamic, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer
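To show how a fitted multinomial logit turns household covariates into probabilities over the five poverty states, here is a small softmax sketch. The coefficient values are invented for illustration (intercept plus one hypothetical remittance covariate); the base category's coefficients are fixed at zero for identification.

```python
import math

CATEGORIES = ["chronically poor", "falling into poverty",
              "re-entering poverty", "exiting poverty", "never poor"]

def mnl_probs(x, betas):
    """x: covariate vector with a leading 1 for the intercept;
    betas: one coefficient vector per category (base category all zeros).
    Returns softmax probabilities over the categories."""
    scores = [sum(b * xi for b, xi in zip(beta, x)) for beta in betas]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical coefficients: [intercept, remittances-received]
betas = [[0.0, 0.0], [0.5, -1.0], [0.2, -0.5], [-0.3, 1.2], [0.1, 0.8]]
probs = mnl_probs([1.0, 2.0], betas)      # a remittance-receiving household
```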
Procedia PDF Downloads 112
2433 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back propagation networks. Results revealed generally that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for training and validation phases respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANNs for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix and protracted time and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting.
On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance. Keywords: streamflow, neural network, optimisation, algorithm
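As a minimal sketch of the GDM update rule discussed above, the snippet below applies gradient descent with momentum to a 1-D quadratic loss f(w) = (w - 3)^2, where the behaviour is easy to verify. The learning rate and momentum values are illustrative, not the ones tuned in the study.

```python
# Gradient descent with momentum (GDM) on a toy 1-D loss.
def train_gdm(grad, w0, lr=0.1, momentum=0.9, steps=200):
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)   # velocity accumulates past gradients
        w = w + v                         # parameter update
    return w

# loss f(w) = (w - 3)^2 has gradient 2*(w - 3) and minimiser w = 3
w_star = train_gdm(lambda w: 2 * (w - 3.0), w0=0.0)
```

The momentum term smooths the update direction, which is part of why GDM trades per-step cost for potentially more iterations, in contrast to the Hessian-based LM algorithm.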
Procedia PDF Downloads 152
2432 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing
Authors: Jianan Sun, Ziwen Ye
Abstract:
Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to clearly know the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items can be improved, which can further help the abilities of the examinees who take the MCAT to be accurately estimated. This research seeks to solve the item-trait pattern recognition problem for replenished items in an MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect item-trait patterns of replenished items based on the essential information of item responses and ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so it allows the replenished items to be mixed into the operational items in a reasonable order, such as when considering content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of the multidimensional replenished items in MCAT. Third, the proposed method exploits the shrinkage idea for variable selection, so it can help to check item quality and key dimension features of replenished items, and it saves time and labor in response data collection compared with the traditional factor analysis method.
Moreover, the proposed method ensures that the dimensions of replenished items are recognized consistently with the dimensions of operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test length, and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of the replenished items in both two-dimensional and three-dimensional item pools. Selecting enough operational items from an item pool of highly discriminating items by Bayesian A-optimality in MCAT can improve the recognition accuracy of item-trait patterns of replenished items for the proposed method. The pattern recognition accuracy for conditions with correlated traits is better than for those with independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT. Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection
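The shrinkage mechanism behind the method above can be sketched with a pure-Python LASSO solved by cyclic coordinate descent: a coefficient driven exactly to zero corresponds to an item not loading on that trait. The data and penalty value are illustrative, not the simulation settings of the paper.

```python
# LASSO via cyclic coordinate descent (toy data, illustrative penalty).
def soft_threshold(z, lam):
    """The LASSO proximal step: shrink towards zero, clip small values."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, sweeps=100):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i] - sum(w[k] * X[i][k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho / n, lam) / (z / n)
    return w

# y depends only on the first "trait"; the second column is orthogonal noise
X = [[1.0, 1.0], [2.0, -1.0], [3.0, -1.0], [4.0, 1.0]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, lam=0.1)   # w[1] is shrunk exactly to 0
```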
Procedia PDF Downloads 130
2431 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical, efficient approach is suggested to estimate the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires some time for observation data to be collected by the C-OTDR system, and only when the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems we need to obtain the localization data of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). There are several SPs which carry information about the instantaneous localization of dynamic objects for each C-OTDR channel. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable. On the other hand, when an SP is very stable, it tends to become insensitive. This report describes a method for co-processing SPs which is designed to obtain the most effective dynamic object localization estimates within the C-OTDR monitoring system framework. Keywords: C-OTDR system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
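One simple way to co-process several noisy SP-based estimates, sketched below, is inverse-variance weighting, which favours the more stable parameters when fusing their per-channel position estimates. The abstract does not specify the paper's exact fusion rule, and the numbers here are illustrative, not C-OTDR measurements.

```python
# Inverse-variance fusion of independent estimates.
def fuse(estimates):
    """estimates: list of (position_estimate, variance) pairs from
    individual signaling parameters. Returns the minimum-variance
    linear combination of the estimates."""
    weights = [1.0 / var for _, var in estimates]
    return sum(w * x for w, (x, _) in zip(weights, estimates)) / sum(weights)

# a stable SP (low variance) and a sensitive but noisy SP (high variance)
position = fuse([(10.0, 1.0), (12.0, 4.0)])
# (10/1 + 12/4) / (1 + 1/4) = 13 / 1.25 = 10.4
```

The fused value sits closer to the low-variance estimate, which is exactly the stability-versus-sensitivity trade-off the abstract describes.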
Procedia PDF Downloads 470
2430 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land use change. However, presently, Thailand lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: approximately 50-year-old secondary forest, 4-year-old fallow, and 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with a DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Densities database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. 
This study provides insights for reforestation management, and can be used to prepare baseline data for Thailand’s carbon stocks for the REDD+ and other carbon trading schemes. These may provide monetary incentives to stop illegal logging and deforestation for monoculture. Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
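To illustrate how a fitted allometric model of this kind is applied, here is a sketch using the common log-log form ln(AGB) = a + b·ln(DBH) + c·ln(H) + d·ln(WD), back-transformed to biomass. The coefficients below are hypothetical placeholders, NOT the values estimated in this study.

```python
import math

# Hypothetical fitted coefficients for the log-log allometric model.
A, B, C, D = -2.5, 2.0, 0.6, 0.9

def agb_kg(dbh_cm, height_m, wood_density):
    """Aboveground biomass back-transformed from
    ln(AGB) = A + B*ln(DBH) + C*ln(H) + D*ln(WD)."""
    return math.exp(A + B * math.log(dbh_cm)
                      + C * math.log(height_m)
                      + D * math.log(wood_density))

small = agb_kg(dbh_cm=5.0, height_m=6.0, wood_density=0.6)
large = agb_kg(dbh_cm=20.0, height_m=15.0, wood_density=0.6)
# biomass increases with DBH and height, as the model form implies
```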
Procedia PDF Downloads 284
2429 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The radiated sound can be represented via the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, research into alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuations PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it will automatically filter out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets. Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
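The feature-selection idea behind stepwise regression can be sketched as follows: at each step, add the candidate feature most correlated with the current residual and fit its coefficient on that residual. This is a simplified forward-stagewise variant (a full stepwise algorithm refits all selected features jointly and can also drop them); the toy data are illustrative.

```python
# Simplified forward selection on toy data.
def forward_select(X_cols, y, tol=1e-8):
    """X_cols: list of feature columns; y: target vector.
    Greedily adds the feature with the largest |x . residual|."""
    residual = list(y)
    selected, coefs = [], {}
    for _ in range(len(X_cols)):
        scores = {j: abs(sum(a * r for a, r in zip(col, residual)))
                  for j, col in enumerate(X_cols) if j not in selected}
        j_best = max(scores, key=scores.get)
        if scores[j_best] < tol:
            break                      # nothing left worth explaining
        col = X_cols[j_best]
        coef = (sum(a * r for a, r in zip(col, residual))
                / sum(a * a for a in col))
        residual = [r - coef * a for r, a in zip(residual, col)]
        selected.append(j_best)
        coefs[j_best] = coef
    return selected, coefs

# y depends only on feature 0; feature 1 is orthogonal and irrelevant,
# so the uncorrelated input is filtered out automatically
X_cols = [[1.0, 2.0, 3.0, 4.0], [1.0, -1.0, -1.0, 1.0]]
y = [3.0, 6.0, 9.0, 12.0]
selected, coefs = forward_select(X_cols, y)
```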
Procedia PDF Downloads 135
2428 Performance Analysis of SAC-OCDMA System using Different Detectors
Authors: Somaya A. Abd El Mottaleb, Ahmed Abd El Aziz, Heba A. Fayed, Moustafa H. Aly
Abstract:
In this paper, we present the performance of spectral amplitude coding optical code division multiple access (SAC-OCDMA) using different detectors at different transmission distances with the single photodiode detection technique. Modified double weight codes are used as signature codes. Simulation results show that the system using an avalanche photodetector can transmit over a longer distance than the system using a positive intrinsic negative photodetector. Keywords: avalanche photodiode, modified double weight, multiple access technique, single photodiode
Procedia PDF Downloads 605
2427 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America
Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias
Abstract:
There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM), and Two-Stage Least Squares (TSLS)). Recently, the evolution of these variables over the period 1980-2009 in the 18 countries of Latin America was studied, and one of the main variables found to explain their evolution was Foreign Direct Investment (FDI). We extend this study to the year 2015 in the same 18 countries of Latin America, and we find that FDI no longer has a significant role, while we find significant negative and positive effects of schooling levels on inequality and economic growth respectively. We also find that the point estimates associated with human capital are the largest among the variables included in the analysis; this means that an increase in human capital (measured by schooling levels of secondary education) is the main determinant that can help to reduce inequality and to increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America should be directed towards increasing the level of education. We use fixed effects, GMM, and TSLS estimation to check the robustness of our results. Our conclusion is the same regardless of the estimation method we choose. We also find that the 2008 international recession significantly reduced economic growth in the Latin American countries. Keywords: economic growth, human capital, inequality, Latin-America
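The first of the three estimators above, fixed effects, can be sketched with the "within" transformation on a toy panel: demean x and y within each country, then run pooled OLS on the demeaned data. With one regressor the slope is sum(x~·y~) / sum(x~²), and the country-specific intercepts drop out. The numbers are illustrative, not the paper's data.

```python
# Fixed-effects "within" estimator on a toy two-country panel.
def within_slope(panel):
    """panel: dict country -> list of (x, y) observations.
    Demeans within each country, then pools the demeaned data."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# two countries with different intercepts but a common slope of 2
panel = {
    "A": [(1, 7), (2, 9), (3, 11)],    # y = 2x + 5
    "B": [(1, 12), (2, 14), (3, 16)],  # y = 2x + 10
}
slope = within_slope(panel)   # country intercepts drop out -> 2.0
```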
Procedia PDF Downloads 226
2426 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
The development of methods to annotate unknown gene functions is an important task in bioinformatics. One of the approaches to annotation is the identification of the metabolic pathway that genes are involved in. Gene expression data have been utilized for this identification, since gene expression data reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. It is considered that the low accuracy of the estimation is caused by the difficulty of accurately measuring gene expression: even when measured under the same condition, gene expression values usually vary. In this study, we proposed a feature extraction method focusing on the variability of gene expressions to estimate genes' metabolic pathways accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, a method for calculating the similarity between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM for identifying the genes' metabolic pathways. To evaluate our developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy achieved with our developed method was higher than that calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying genes' metabolic pathways. Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning
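If the estimated expression distributions are modelled as univariate Gaussians (a common simplifying assumption; the abstract does not state the distribution family used), the KL divergence between two genes' distributions has the closed form sketched below: KL(p || q) = ln(s2/s1) + (s1² + (m1-m2)²) / (2·s2²) - 1/2.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) in nats."""
    return (math.log(s2 / s1)
            + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2)
            - 0.5)

same = kl_gauss(0.0, 1.0, 0.0, 1.0)       # identical distributions -> 0.0
shifted = kl_gauss(1.0, 1.0, 0.0, 1.0)    # mean shift of 1 -> 0.5
```

Note that KL divergence is asymmetric, so a similarity feature built from it is typically symmetrized or used directionally, as the pipeline's design requires.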
Procedia PDF Downloads 403