Search results for: hybrid quantum algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4030

610 Towards a Critical Disentanglement of the ‘Religion’ Nexus in the Global East

Authors: Daan F. Oostveen

Abstract:

‘Religion’ as a term is not native to the Global East. The concept ‘religion’ is understood both in its meaning of ‘religious traditions’, commonly referring to the ‘World Religions’, and in its adjectival meaning of ‘the religious’ or ‘religiosity’ as a separate domain of human culture, commonly contrasted with the secular. Though neither of these understandings is native to the historical worldviews of East Asia, their development in modern Western scholarship has had an enormous impact on the self-understanding of cultural diversity in the Global East as well. One example is the identification, and therefore elevation to the status of World Religion, of ‘Buddhism’, which connected formerly dispersed religious practices throughout the Global East and subsumed them under this powerful label. On the other hand, we see how popular religiosity, shamanism and hybrid cultural expressions have become excluded from genuine religion; this had an immense impact on the sense of legitimacy of these practices, which sometimes became labeled as superstition or rejected as magic. Our theoretical frameworks on religion in the Global East do not always consider the complex power dynamics between religious actors (both elites and lay expressions of religion in everyday life), governments and religious studies scholars. In order to get a clear image of how religiosity functions in the context of the Global East, we have to take these power dynamics into account. What is important in particular is the issue of religious identity or the absence of religious identity. The self-understanding of religious actors in the Global East is often very different from what scholars of religion observe. Religious practice, from an etic perspective, is often unrelated to religious identification from an emic perspective. But we also witness the rise of Christian churches in the Global East, in which religious identity and belonging do play a pivotal role. Finally, religion in the Global East has, since the beginning of the 20th century, been conceptualized as the ‘other’ of republicanism or Marxist-Maoist ideology. It is important not to deny the key role of colonial thinking in the process of religion formation in the Global East. In this paper, it is argued that religious realities emerge as a result of our theory of religion, and that these religious realities in turn inform our theory. Therefore, the relationship between the phenomenology of religion and the theory of religion can never be disentangled. In fact, we have to acknowledge that our conceptualizations of religious diversity are always already influenced by our valuation of those cultural expressions that we have come to call ‘religious’.

Keywords: global east, religion, religious belonging, secularity

Procedia PDF Downloads 108
609 The Introduction of the Revolution Einstein’s Relative Energy Equations in Even 2n and Odd 3n Light Dimension Energy States Systems

Authors: Jiradeach Kalayaruan, Tosawat Seetawan

Abstract:

This paper studied the energy of nature systems by looking at the overall image throughout the universe. The energy of nature systems was developed from Einstein’s energy equation. The researchers used new ideas called even 2n and odd 3n light dimension energy states systems, which were developed from Einstein’s relativity energy theory equation. In this study, the major methodology the researchers used was the basic principle ideas or beliefs of some religions, such as Buddhism, Christianity, Hinduism, Islam, or Tao, in order to reach new discoveries. The basic beliefs of each religion - Nivara, God, Ether, Atman, and Tao respectively - were greatly influential ideas that the researchers used in the study to form new ideas from philosophy. Since the philosophy of each religion was alive with deep insight into the physical nature of relative energy, it connected the basic beliefs to light dimension energy states systems. Unfortunately, Einstein’s original relative energy equation showed only even 2n light dimension energy states systems (if n = 1,…,∞). But in more advanced ideas, the researchers multiplied light dimension energy by Einstein’s original relative energy equation and obtained a new idea of theoretical physics in odd 3n light dimension energy states systems (if n = 1,…,∞), because, from the basic principle ideas or beliefs of each religion’s philosophy, the media light dimension energy had to be added into Einstein’s original relative energy equation. Consequently, the simple picture in deep insight showed that one could touch the light dimension energy of Nivara, God, Ether, Atman, and Tao by light dimension energy. Since light dimension energy was transferred by Nivara, God, Ether, Atman and Tao, the researchers obtained the new equation of odd 3n light dimension energy states systems. Moreover, the researchers expect to be able to solve overview problems of all light dimension energy in all nature relative energy, which are developed from Einstein’s relative energy equation. The finding of the study was called 'super nature relative energy' (in odd 3n light dimension energy states systems (if n = 1,…,∞)). From the new ideas above, one could perform the summation of even 2n and odd 3n light dimension energy states systems in all nature light dimension energy states systems. In the future, the researchers expect the new idea to be used in insight theoretical physics, which is very useful to the development of quantum mechanics, all engineering, the medical profession, transportation, communication, scientific inventions, and technology, etc.

Keywords: 2n light dimension energy states systems effect, Ether, even 2n light dimension energy states systems, nature relativity, Nivara, odd 3n light dimension energy states systems, perturbation points energy, relax point energy states systems, stress perturbation energy states systems effect, super relative energy

Procedia PDF Downloads 321
608 Modelling Insider Attacks in Public Cloud

Authors: Roman Kulikov, Svetlana Kolesnikova

Abstract:

Over the last decade, Cloud Computing technologies have rapidly become ubiquitous. Each year, more and more organizations, corporations, internet services and social networks trust their business-sensitive information to the Public Cloud. Data storage in the Public Cloud is protected by security mechanisms such as firewalls, cryptography algorithms, backups, etc. In this way, however, only outsider attacks can be prevented, whereas virtualization tools can easily be compromised by an insider. The protection of the Public Cloud’s critical elements from an internal intruder remains extremely challenging. A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems (OS) to share a single hardware processor in Cloud Computing. One of the hypervisor's functions is to enforce access control policies. Furthermore, it prevents guest OSes from disrupting each other and from accessing each other's memory or disk space. The hypervisor is one of the most critical and vulnerable elements in the Cloud Computing infrastructure. Nevertheless, it has been poorly protected from being compromised by an insider. By exploiting certain vulnerabilities, privilege escalation can easily be achieved in insider attacks on the hypervisor. In this way, an internal intruder who has compromised one process is able to gain control of the entire virtual machine. Therefore, the consequences of insider attacks in the Public Cloud might be more catastrophic and significant to virtual tools and sensitive data than those of outsider attacks. So far, almost no preventive security countermeasures have been developed, and little attention has been paid to developing models to assist risk mitigation strategies. In this paper, a formal model of insider attacks on the hypervisor is designed. Our analysis identifies critical hypervisor vulnerabilities that can easily be compromised by an internal intruder. Consequently, possible conditions for successful attack implementation are uncovered. Hence, the development of preventive security countermeasures can be improved on the basis of the proposed model.

Keywords: insider attack, public cloud, cloud computing, hypervisor

Procedia PDF Downloads 347
607 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies in sports outcome prediction possess considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both time series and non-time series data, as well as conducting exploratory data analysis and feature selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
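
The abstract names XGBoost for point-spread regression; a minimal sketch of that idea follows, assuming the xgboost package is available. The feature names, hyperparameters, and the 2-point betting threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: point-spread regression with XGBoost on synthetic game features.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# Hypothetical engineered features: rolling offensive/defensive ratings, rest days, home flag
X = rng.normal(size=(500, 4))
true_spread = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 3] + rng.normal(0, 4, 500)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[:400], true_spread[:400])           # train on earlier games
pred = model.predict(X[400:])                   # predict point spread for later games

# Toy betting rule: bet only when the model disagrees with the bookmaker line by > 2 points
book_line = true_spread[400:] + rng.normal(0, 2, 100)
bets = np.abs(pred - book_line) > 2.0
print(f"Games bet on: {bets.sum()} of {bets.size}")
```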

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 80
606 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation in the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. In bringing the present to life, the virtual environment is generated through a presented scheme for rapid and efficient construction of a 360° panoramic view. Then, an acoustical heritage model and a crowd model are presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. However, the keystone of this research is a virtual walkthrough that shows the virtual present life in 2D and the virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on a mobile platform. The 2D crowd is used to portray the present life in a 360° panoramic view of a virtual heritage environment based on the extension of Newtonian Laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using the Unity3D Game Engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and is evaluated through user-centred evaluation by navigating around the demonstration system. The results of the evaluation, based on the questionnaires, have shown that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation, and such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
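
The crowd behaviour builds on the classical Boid model; a minimal sketch of the three standard Boid rules is given below. The weights and neighbourhood radius are illustrative assumptions, and the paper's enhanced variant and Unity3D implementation are not reproduced.

```python
# Minimal Boids sketch: cohesion, alignment, and separation over a set of 2D agents.
import numpy as np

N = 50
pos = np.random.rand(N, 2) * 100.0      # agent positions
vel = np.random.randn(N, 2)             # agent velocities

def boids_step(pos, vel, dt=0.1, r=10.0):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < r) & (d > 0)          # neighbours within radius r, excluding self
        if not nbr.any():
            continue
        cohesion   = pos[nbr].mean(axis=0) - pos[i]      # steer toward neighbours' centre
        alignment  = vel[nbr].mean(axis=0) - vel[i]      # match neighbours' heading
        separation = (pos[i] - pos[nbr]).sum(axis=0)     # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    return pos + new_vel * dt, new_vel

for _ in range(100):
    pos, vel = boids_step(pos, vel)
```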

Keywords: Boid Algorithm, Crowd Simulation, Mobile Platform, Newtonian Laws, Virtual Heritage

Procedia PDF Downloads 257
605 Inverted Geometry Ceramic Insulators in High Voltage Direct Current Electron Guns for Accelerators

Authors: C. Hernandez-Garcia, P. Adderley, D. Bullard, J. Grames, M. A. Mamun, G. Palacios-Serrano, M. Poelker, M. Stutzman, R. Suleiman, Y. Wang, S. Zhang

Abstract:

High-energy nuclear physics experiments performed at the Jefferson Lab (JLab) Continuous Electron Beam Accelerator Facility require a beam of spin-polarized ps-long electron bunches. The electron beam is generated when a circularly polarized laser beam illuminates a GaAs semiconductor photocathode biased at hundreds of kV dc inside an ultra-high vacuum chamber. The photocathode is mounted on highly polished stainless steel electrodes electrically isolated by means of a conical-shape ceramic insulator that extends into the vacuum chamber, serving as the cathode electrode support structure. The assembly is known as a dc photogun, which has to simultaneously meet the following criteria: high voltage to manage space charge forces within the electron bunch, ultra-high vacuum conditions to preserve the photocathode quantum efficiency, no field emission to prevent gas load when field-emitted electrons impact the vacuum chamber, and finally no voltage breakdown for robust operation. Over the past decade, JLab has tested and implemented the use of inverted geometry ceramic insulators connected to commercial high voltage cables to operate a photogun at 200 kV dc with a 10 cm long insulator, and a larger version at 300 kV dc with a 20 cm long insulator. Plans to develop a third photogun operating at 400 kV dc to meet the stringent requirements of the proposed International Linear Collider are underway at JLab, utilizing even larger inverted insulators. This contribution describes approaches that have been successful in solving challenging problems related to breakdown and field emission, such as triple-point junction screening electrodes, mechanical polishing to achieve a mirror-like surface finish, and high voltage conditioning procedures with Kr gas to extinguish field emission.

Keywords: electron guns, high voltage techniques, insulators, vacuum insulation

Procedia PDF Downloads 101
604 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that, over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also come from historically patriarchal societies. The progression of society comes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 99
603 Readability of Trauma-Related Patient Education Materials from the AAOS and OTA Websites

Authors: Diane Ghanem, Oscar Covarrubias, Ridge Maxson, Samir Sabharwal, Babar Shafiq

Abstract:

Introduction: Web-based resources serve as a fundamental educational platform for orthopaedic trauma patients; however, they are notoriously written at a high reading grade level and are often too complicated for patients to benefit from them. The aim of this study is to perform an updated assessment of the readability of the AAOS trauma-related educational articles and compare their readability with that of injury-specific patient education materials developed by the OTA. Methods: All forty-six trauma-related articles on the AAOS patient education website were analyzed for readability. Two independent reviewers used the (1) Flesch-Kincaid Grade Level (FKGL) and (2) Flesch Reading Ease (FRE) algorithms to calculate the readability level. Mean readability scores were compared across body part categories. A one-sample t-test was used to compare the mean FKGL with the recommended 6th-grade readability level and the average American adult reading level. A two-sample t-test was used to compare the readability scores of the AAOS trauma-related articles with those of the OTA. Results: The average FKGL and FRE for the AAOS articles were 8.9±0.74 and 57.2±5.8, respectively. All articles were written above the 6th-grade reading level. The average readability of the AAOS articles was significantly greater than the recommended 6th-grade and average American adult reading levels. The average FKGL for the AAOS articles was significantly higher (8.9±0.74 vs 8.1±1.14) and the average FRE significantly lower (57.2±5.8 vs 65.6±6.6) compared to the OTA articles. Excellent inter-rater agreement was observed for the FKGL (0.956, 95% CI 0.922-0.975) and FRE (0.993, 95% CI 0.987-0.996). Discussion: Our findings suggest that, after almost a decade, the readability of the AAOS trauma-related articles remains unchanged. The AAOS and OTA trauma patient education materials are written at high reading grade levels and may be too difficult for patient comprehension. A need remains to improve the readability of these commonly used trauma education materials.
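
For reference, the two readability measures named above can be computed directly from word, sentence, and syllable counts. The sketch below uses the standard FKGL and FRE constants; the naive vowel-group syllable counter is an assumption for illustration (the study presumably used dedicated readability software).

```python
# Minimal sketch of the Flesch-Kincaid Grade Level and Flesch Reading Ease formulas.
import re

def count_syllables(word):
    # crude approximation: count groups of consecutive vowels
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences            # words per sentence
    spw = syllables / len(words)            # syllables per word
    fkgl = 0.39 * wps + 11.8 * spw - 15.59              # Flesch-Kincaid Grade Level
    fre = 206.835 - 1.015 * wps - 84.6 * spw            # Flesch Reading Ease
    return fkgl, fre

fkgl, fre = readability("A broken bone must be held still so that it can heal.")
print(f"FKGL: {fkgl:.1f}, FRE: {fre:.1f}")
```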

Keywords: american academy of orthopaedic surgeons, FKGL, FRE, orthopaedic trauma association, patient education, readability

Procedia PDF Downloads 50
602 High-Pressure Steam Turbine for Medium-Scale Concentrated Solar Power Plants

Authors: Ambra Giovannelli, Coriolano Salvini

Abstract:

Many efforts have been spent on the design and development of Concentrated Solar Power (CSP) plants worldwide. Most of them are for on-grid electricity generation, and they are large plants which can benefit from economies of scale. Nevertheless, several potential applications for small and medium-scale CSP plants can be relevant in the industrial sector as well as for off-grid purposes (i.e., in rural contexts). In a wide range of industrial processes, CSP technologies can be used for heat generation, replacing conventional primary sources. For such a market, proven technologies (usually hybrid solutions) already exist: more than 100 installations, especially in developing countries, are in operation and their performance can be verified. On the other hand, concerning off-grid applications, solar technologies are not so mature. Even if the market offers a potential deployment of such systems, especially in countries where access to the grid is strongly limited, optimized solutions have not been developed yet. In this context, steam power plants can be taken into consideration for medium-scale installations, due to the recent results achieved with direct steam generation systems based on paraboloidal dish or Fresnel lens solar concentrators. Steam at 4.0-4.5 MPa and 500°C can be produced directly by means of innovative solar receivers (some prototypes already exist). Although it could seem a promising technology, commercially available steam turbines do not presently cover the required cycle specifications. In particular, while low-pressure turbines already exist on the market, high-pressure groups, necessary for the abovementioned applications, are not available. The present paper deals with the preliminary design of a high-pressure steam turbine group for a medium-scale CSP plant (200-1000 kWe). Such a group is arranged in a single geared package composed of four radial expander wheels. Such wheels have been chosen on the basis of automotive turbocharging technology and then modified to take the new requirements into account. Results related to the preliminary geometry selection and to the analysis of the high-pressure turbine group performance are reported and widely discussed.

Keywords: concentrated solar power (CSP) plants, steam turbine, radial turbine, medium-scale power plants

Procedia PDF Downloads 201
601 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions that require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adaptation of the methods for industrial applications where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialization and expansion. Through the combination of these enhancements, it is the intention of this work to create denser patch clouds even in textureless regions within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with an accuracy comparable to that of the current top-performing algorithms.
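
A minimal sketch of the standard normalised cross-correlation photo-consistency measure is shown below; the paper's adapted NCC for textureless regions, and the particle swarm optimisation of patch position and orientation, are not reproduced here.

```python
# Minimal sketch: normalised cross-correlation (NCC) between two image patches.
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + eps)    # zero-mean, unit-variance normalisation
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.dot(a, b) / a.size)     # 1.0 = identical, -1.0 = inverted

patch1 = np.random.rand(11, 11)
patch2 = patch1 + np.random.normal(0, 0.05, (11, 11))   # same patch with added noise
print(f"NCC score: {ncc(patch1, patch2):.3f}")
```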

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 186
600 Evaluation of Cultural Landscape Perception in Waterfront Historic Districts Based on Multi-source Data - Taking Venice and Suzhou as Examples

Authors: Shuyu Zhang

Abstract:

The waterfront historic district, as a type of historic district bordering waters such as the sea, lakes, and rivers, has a relatively distinctive urban form. In past preservation and renewal of traditional historic districts, discussion has largely concerned the landward area, while the waterfront and marginal spaces are easily overlooked. However, the waterfront space of a historic district, as a cultural landscape heritage combining historical buildings and landscape elements, has strong ecological and sustainability value. At the same time, Suzhou and Venice, as sister water cities in history, have many waterfront spaces that can be compared in urban form and at other levels. Therefore, this paper focuses on the waterfront historic districts in Venice and Suzhou, establishes quantitative evaluation indicators for environmental perception, draws analogies between them, and promotes the renewal and activation of the entire historic district by improving the spatial quality and vitality of the waterfront area. First, this paper uses multi-source data for analysis: Baidu Maps and the Google Maps API are used to crawl street views of the waterfront historic districts, machine learning algorithms are used to analyze the proportion of cultural landscape elements, such as the green view rate, in the street view pictures, and space syntax software is used for quantitative selectivity analysis, so as to establish environmental perception evaluation indicators for the waterfront historic districts. Finally, by comparing and summarizing the waterfront historic districts in Venice and Suzhou, the paper reveals their similarities and differences, characteristics and conclusions, and hopes to provide a reference for the heritage preservation and renewal of other waterfront historic districts.
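
As an illustration of the green view rate mentioned above, the sketch below estimates the fraction of vegetation pixels in a street-view image, assuming OpenCV is available. The HSV colour threshold stands in for the machine-learning segmentation used in the paper, and the file name is a hypothetical placeholder.

```python
# Minimal sketch: green view index as the fraction of vegetation-coloured pixels.
import cv2
import numpy as np

def green_view_index(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # crude vegetation mask: hue roughly in the green range, moderate saturation/value
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return float(mask.mean() / 255.0)        # fraction of pixels classified as vegetation

img = cv2.imread("streetview_panorama.jpg")  # hypothetical street-view tile
if img is not None:
    print(f"Green view index: {green_view_index(img):.2%}")
```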

Keywords: waterfront historical district, cultural landscape, perception, multi-source data

Procedia PDF Downloads 176
599 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes. Some of them make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable because of other existing systems working on the site, which can be blinded on most spectral levels. Furthermore, the reconstruction is required to work at long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even under harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
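
To illustrate why sub-pixel disparity matters at these ranges, the sketch below applies the standard stereo triangulation relation Z = f*B/d; the focal length and baseline values are illustrative assumptions, not HRESS parameters.

```python
# Minimal sketch: depth from disparity for a calibrated stereo rig.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d        # Z = f * B / d

focal_px = 8000.0        # long-focal-length lens, expressed in pixels (assumed)
baseline_m = 1.5         # distance between the two cameras (assumed)

# a 0.1 px disparity error at ~1 km range already shifts the estimated depth noticeably
for disp in (12.0, 11.9):
    z = depth_from_disparity(disp, focal_px, baseline_m)
    print(f"disparity {disp:5.1f} px -> depth {z:8.1f} m")
```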

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 104
598 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality reduce the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for collecting useful information from a variety of databases and provide supervised learning, in the form of classification, to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves us with few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features in data sets. Machine learning relies heavily on feature selection as a pre-processing step, which allows a few features to be selected from the full set as a subset, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method achieved an improved average accuracy of about 95% across different datasets. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
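
A minimal sketch of wrapper-based feature selection with a genetic algorithm follows, using a k-NN classifier as the external evaluator on a public dataset. The population size, mutation rate, and fitness definition are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: GA wrapper feature selection with a k-NN evaluator.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(42)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    # wrapper evaluation: cross-validated accuracy of the classifier on the feature subset
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(pop_size, n_feat))        # chromosomes = feature masks
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]    # keep the fittest half
    cut = rng.integers(1, n_feat, size=pop_size // 2)
    children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])     # one-point crossover
    flip = rng.random(children.shape) < 0.05              # mutation
    children[flip] = 1 - children[flip]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"Selected {best.sum()} of {n_feat} features")
```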

Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection

Procedia PDF Downloads 303
597 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work concerns the development of a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification. The proposed method, however, focuses mainly on a better presentation of the signal, which makes it a useful tool for clinicians. The algorithm performs basic filtering using band pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectral power of different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second, successively, with different color codes. The segment length can be selected as per the need of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, and so it can be improved for use in real-time visualization with a desired epoch length.
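
The segmentation rule rests on per-band spectral power computed over sliding windows; a minimal sketch of that step is below. The band edges, 1-second window, and synthetic signal are common defaults assumed for illustration, not the tool's exact settings.

```python
# Minimal sketch: per-band EEG power over sliding windows, via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                                     # sampling frequency in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic, alpha-dominant

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs):
    freqs, psd = welch(segment, fs=fs, nperseg=segment.size)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# slide a 1-second window over the signal and report the dominant band per segment
for start in range(0, eeg.size - fs, fs):
    powers = band_powers(eeg[start:start + fs], fs)
    print(start // fs, max(powers, key=powers.get))
```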

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 379
596 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders cause the hatchability to increase or decrease. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and determine the most efficient classification model with a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variable on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify the hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, while positive correlations were identified with moisture loss, the number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable or not in a commercial hatchery.
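
A minimal sketch of the Random Forest classification step is given below. The seven feature names follow the abstract, but the data, target definition, and split are synthetic placeholders rather than the study's dataset.

```python
# Minimal sketch: binary hatchability classification with a Random Forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "egg_weight": rng.normal(60, 5, n),
    "moisture_loss": rng.normal(12, 2, n),
    "breeder_age": rng.integers(30, 70, n),
    "fertilised_eggs": rng.integers(80, 120, n),
    "shell_width": rng.normal(42, 2, n),
    "shell_length": rng.normal(55, 3, n),
    "shell_thickness": rng.normal(0.35, 0.03, n),
})
# synthetic binary target: 1 if hatchability exceeds 90% (illustrative construction)
hatch = 0.9 + 0.01 * (df["shell_thickness"] - 0.35) * 100 - 0.002 * (df["egg_weight"] - 60)
y = (hatch + rng.normal(0, 0.02, n) > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```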

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 73
595 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm

Authors: Annalakshmi G., Sakthivel Murugan S.

Abstract:

This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities in class samples. In coral reef image classification, texture features are extracted using the proposed method called local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. The process of integrating edge responses along with the local binary pattern achieves a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) method with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
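
For context, the sketch below computes the basic 3x3 local binary pattern and its histogram; the paper's LDEDBP descriptor additionally encodes directional derivative and edge responses, which are not reproduced here.

```python
# Minimal sketch: basic 3x3 LBP codes and their histogram as a texture feature vector.
import numpy as np

def lbp_3x3(img):
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:-1, 1:-1]
    # 8 neighbours, ordered clockwise from the top-left, each contributing one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (nbr >= centre).astype(int) * (1 << bit)
    return out

patch = np.random.randint(0, 256, (32, 32))
codes = lbp_3x3(patch)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))   # LBP histogram feature
print(hist[:8])
```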

Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization

Procedia PDF Downloads 146
594 Spectroscopic Studies and Reddish Luminescence Enhancement with the Increase in Concentration of Europium Ions in Oxy-Fluoroborate Glasses

Authors: Mahamuda Sk, Srinivasa Rao Allam, Vijaya Prakash G.

Abstract:

Oxy-fluoroborate glasses of composition 60 B2O3-10 BaF2-10 CaF2-15 CaF2-(5-x) Al2O3-x Eu2O3, where x = 0.1, 0.5, 1.0 and 2.0 mol%, doped with different concentrations of Eu3+ ions, have been prepared by the conventional melt quenching technique and are characterized through absorption, photoluminescence (PL), decay, color chromaticity, and confocal measurements. The absorption spectra of all the glasses consist of six peaks corresponding to the transitions 7F0→5D2, 7F0→5D1, 7F1→5D1, 7F1→5D0, 7F0→7F6 and 7F1→7F6, respectively. The experimental oscillator strengths, with and without thermal corrections, have been evaluated using the absorption spectra. Judd-Ofelt (JO) intensity parameters (Ω2 and Ω4) have been evaluated from the photoluminescence spectra of all the glasses. PL spectra of all the glasses have been recorded at excitation wavelengths of 395 nm (conventional excitation source) and 410 nm (diode laser) to observe the intensity variation in the PL spectra. All the spectra consist of five emission peaks corresponding to the transitions 5D0→7FJ (J = 0, 1, 2, 3 and 4). Surprisingly, no concentration quenching is observed in the PL spectra. Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum intensity for the transition 5D0→7F2 (612 nm) in the bright red region. The JO parameters derived from the photoluminescence spectra have been used to evaluate the essential radiative properties such as transition probability (A), radiative lifetime (τR), branching ratio (βR) and peak stimulated emission cross-section (σse) for the 5D0→7FJ (J = 0, 1, 2, 3 and 4) transitions of the Eu3+ ions. The decay rates of the 5D0 fluorescent level of Eu3+ ions in the title glasses are found to be single exponential for all the studied Eu3+ ion concentrations. A marginal increase in the lifetime of the 5D0 level has been noticed with the increase in Eu3+ ion concentration from 0.1 mol% to 2.0 mol%. Among all the glasses, the glass with 2.0 mol% Eu3+ ion concentration possesses the maximum values of branching ratio, stimulated emission cross-section and quantum efficiency for the transition 5D0→7F2 (612 nm) in the bright red region. The color chromaticity coordinates are also evaluated to confirm the reddish luminescence from these glasses; these color coordinates fall exactly in the bright red region. Confocal images were also recorded to confirm the reddish luminescence from these glasses. From all the results obtained in the present study, it is suggested that the glass with 2.0 mol% Eu3+ ion concentration is suitable for emitting bright red laser light.

Keywords: Europium, Judd-Ofelt parameters, laser, luminescence

Procedia PDF Downloads 226
593 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware trojan detection method for large-scale circuits. As HTs introduce, in the form of additional redundant circuits, changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based hardware trojan detection method based on the physical characteristics of gate-level netlists. This method transforms the hardware trojan detection problem into a machine-learning binary classification problem based on physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of pure circuit samples is far less than that of HT circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate benchmark circuits from Trust-Hub, and all achieved good results. In our case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs using open-source HTs. The proposed method can guarantee robust detection accuracy at millisecond-level detection times for IC and FPGA design flows and has good detection performance for library variant HTs.
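
A minimal sketch of the rebalancing and classification steps is given below, assuming the imbalanced-learn package for SMOTETomek and using one of the three named classifiers. The per-gate feature names and synthetic data are illustrative assumptions, not the paper's netlist features.

```python
# Minimal sketch: rebalance an imbalanced gate-level feature set with SMOTETomek,
# then train a Random Forest binary classifier.
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical per-gate physical features: fan-in, fan-out, switching activity, area
X_major = rng.normal(0, 1, (950, 4))          # majority class samples
X_minor = rng.normal(1.5, 1, (50, 4))         # minority class samples
X = np.vstack([X_major, X_minor])
y = np.array([0] * 950 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)   # oversample + clean links

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te)))
```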

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 126
592 Feeling Sorry for Some Creditors

Authors: Hans Tjio, Wee Meng Seng

Abstract:

The interaction of contract and property has always been a concern in corporate and commercial law, where internal structures are created that may not match the externally perceived image generated by the labels attached to those structures. We focus, in particular, on the priority structures created by affirmative asset partitioning, which have increasingly come under challenge by those attempting to negotiate around them. The most prominent example has been the AT1 bonds issued by Credit Suisse, which were wiped out before its equity when the troubled bank was acquired by UBS. However, this should not have come as a surprise to those whose “bonds” had similarly been “redeemed” upon the occurrence of certain reference events in countries like Singapore, Hong Kong and Taiwan during their Minibond crisis linked to US sub-prime defaults. These were derivatives classified as debentures and sold as such. At the same time, we are again witnessing “liabilities” seemingly ranking higher up the balance sheet ladder finding themselves lowered in events of default. We examine the mechanisms that holders of perpetual securities or preference shares have tried to use to protect themselves. This is happening against a backdrop that sees a rise in the strength of private credit and in inter-creditor conflicts. The restructuring regime of the hybrid scheme in Singapore now, while adopting the absolute priority rule in Chapter 11 as the quid pro quo for creditor cramdown, does not apply to shareholders and so exempts them from cramdown. Complicating the picture further, shareholders are not exempted from cramdown in the Dutch scheme, but it adopts a relative priority rule. At the same time, the important UK Supreme Court decision in BTI 2014 LLC v Sequana [2022] UKSC 25 has held that directors' duties to take account of creditor interests are activated only when a company is almost insolvent. All this has been complicated by digital assets created by businesses. Investors are quite happy to have them classified as property (like a thing) when it comes to their transferability, but then, when the issuer defaults, to have them seen as a claim on the business (as a chose in action), which puts them at the level of a creditor. These hidden interests will not show themselves on an issuer's balance sheet until it is too late to be considered, and yet, if accepted, they may also prevent any meaningful restructuring.

Keywords: asset partitioning, creditor priority, restructuring, BTI v Sequana, digital assets

Procedia PDF Downloads 62
591 Open Innovation for Crowdsourced Product Development: The Case Study of Quirky.com

Authors: Ana Bilandzic, Marcus Foth, Greg Hearn

Abstract:

In a narrow sense, innovation is the invention and commercialisation of a new product or service in the marketplace. The literature suggests that places supporting knowledge exchange and social interaction, e.g. coffee shops, nurture innovative ideas. With the widespread success of the Internet, interpersonal communication and interaction changed. Online platforms complement physical places for idea exchange and innovation – the rise of hybrid, ‘net localities.’ Further, since its introduction in 2003 by Chesbrough, the concept of open innovation has received increased attention as a topic in academic research as well as an innovation strategy applied by companies. Open innovation allows companies to seek and release intellectual property and new ideas from outside of their own company. As a consequence, the innovation process is no longer managed only within the company; it is pursued in a co-creation process with customers, suppliers, and other stakeholders. Quirky.com (Quirky), a company founded by Ben Kaufman in 2009, recognised the opportunity the Internet provides for knowledge exchange and open innovation. Quirky developed an online platform that makes innovation available to everyone. This paper reports on a study that analysed Quirky's business process in an extended event-driven process chain (eEPC). The aim was to determine how the platform enabled crowdsourced innovation for physical products on the Internet. The analysis reveals that key elements of the business model are based on open innovation. Quirky is an example of how open innovation can support crowdsourced and crowdfunded product ideation, development and selling. The company opened up various stages in the innovation process to its members to contribute to product development, e.g. product ideation, design, and market research. Throughout the process, members earn influence through participating in the product development; based on the influence they earn, they receive shares of the product's turnover. The outcomes of the study's analysis highlighted certain benefits of open innovation for product development. The paper concludes with recommendations for future research to look into opportunities for open innovation approaches to be adopted by tertiary institutions as a novel way to commercialise research intellectual property.

Keywords: business process, crowdsourced innovation, open innovation, Quirky

Procedia PDF Downloads 209
590 Improved Visible Light Activities for Degrading Pollutants on ZnO-TiO2 Nanocomposites Decorated with C and Fe Nanoparticles

Authors: Yuvraj S. Malghe, Atul B. Lavand

Abstract:

In recent years, semiconductor photocatalytic degradation processes have attracted a lot of attention and are widely used for the destruction of organic pollutants present in wastewater. Among various semiconductors, titanium dioxide (TiO2) is the most popular photocatalyst due to its excellent chemical stability, non-toxicity, relatively low cost and high photo-oxidation power. It has been known that zinc oxide (ZnO), with a band gap energy of 3.2 eV, is a suitable alternative to TiO2 due to its high quantum efficiency; however, it corrodes in acidic media. Unfortunately, TiO2 and ZnO are both active only in UV light due to their wide band gaps. Sunlight consists of about 5-7% UV light, 46% visible light and 47% infrared radiation. In order to utilize the major portion of sunlight (the visible spectrum), it is necessary to modify the band gap of TiO2 as well as ZnO. This can be done in several ways, such as semiconductor coupling or doping the material with metals/non-metals. Doping of TiO2 using transition metals like Fe and Co, and non-metals such as N, C or S, extends its absorption wavelengths from the UV to the visible region. In the present work, we have synthesized a ZnO-TiO2 nanocomposite using the reverse microemulsion method. The visible light photocatalytic activity of the synthesized nanocomposite was investigated for the degradation of an aqueous solution of malachite green (MG). To increase the photocatalytic activity of the ZnO-TiO2 nanocomposite, it is decorated with C and Fe. Pure, carbon (C) doped and carbon-iron (C, Fe) co-doped nanosized ZnO-TiO2 nanocomposites were synthesized using the reverse microemulsion method. These composites were characterized using X-ray diffraction (XRD), energy dispersive X-ray spectroscopy (EDX), scanning electron microscopy (SEM), UV-visible spectrophotometry and X-ray photoelectron spectroscopy (XPS). Visible light photocatalytic activities of the synthesized nanocomposites were investigated for the degradation of aqueous malachite green (MG) solution. The C, Fe co-doped ZnO-TiO2 nanocomposite exhibits the best photocatalytic activity, showing a threefold increase. The effects of the amount of catalyst, pH and concentration of the MG solution on the photodegradation rate are studied, and the stability and reusability of the photocatalyst are also examined.

Keywords: malachite green, nanocomposite, photocatalysis, titanium dioxide, zinc oxide

Procedia PDF Downloads 272
589 Yield and Physiological Evaluation of Coffee (Coffea arabica L.) in Response to Biochar Applications

Authors: Alefsi D. Sanchez-Reinoso, Leonardo Lombardini, Hermann Restrepo

Abstract:

Colombian coffee is recognized worldwide for its mild flavor and aroma. Its cultivation generates a large amount of waste, such as fresh pulp, which leads to environmental, health, and economic problems. Obtaining biochar (BC) by pyrolysis of coffee pulp and incorporating it into the soil can complement the crop's mineral nutrition. The objective was to evaluate the effect of the application of BC obtained from coffee pulp on the physiology and agronomic performance of the Castillo variety coffee crop (Coffea arabica L.). The research was developed as a field experiment using a three-year-old commercial coffee crop, carried out in Tolima. Four doses of BC (0, 4, 8 and 16 t ha-1) and four levels of chemical fertilization (CF) (0%, 33%, 66% and 100% of the nutritional requirements) were evaluated. Three groups of variables were recorded during the experiment: i) physiological parameters such as gas exchange, the maximum quantum yield of PSII (Fv/Fm), biomass, and water status; ii) physical and chemical characteristics of the soil in a commercial coffee crop; and iii) physicochemical and sensorial parameters of roasted beans and coffee beverages. The results indicated a positive effect in plants with 8 t ha-1 BC and fertilization levels of 66% and 100%. Also, a positive effect was observed in coffee trees treated with 8 t ha-1 BC and 100% CF. In addition, the application of 16 t ha-1 BC increased the soil pH and microbial respiration and reduced the apparent density and state of aggregation of the soil compared to 0 t ha-1 BC. Applications of 8 and 16 t ha-1 BC with 66%-100% chemical fertilization registered greater sensitivity to the aromatic compounds of roasted coffee beans in the electronic nose. Amendments of BC between 8 and 16 t ha-1 and CF between 66% and 100% increased the content of total soluble solids (TSS), reduced the pH, and increased the titratable acidity in beverages of roasted coffee beans. In conclusion, 8 t ha-1 BC from coffee pulp can be an alternative to supplement the nutrition of coffee seedlings and trees. Applications between 8 and 16 t ha-1 BC support coffee soil management strategies and help the use of solid waste. BC as a complement to chemical fertilization showed a positive effect on the aromatic profile obtained for roasted coffee beans and on cup quality attributes.

Keywords: crop yield, cup quality, mineral nutrition, pyrolysis, soil amendment

Procedia PDF Downloads 86
588 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into the ensemble importance scores. Moreover, through the use of the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.
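
To make the random subspace and multiple-k ideas concrete, the sketch below scores features over many random feature subsets and several candidate values of k. The per-run scoring rule (an ANOVA F-statistic of each feature against the cluster labels) is an illustrative assumption, not necessarily the rule used by FRRM.

```python
# Minimal sketch: random-subspace, multiple-k feature scoring for unsupervised ranking.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, _ = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n_features = X.shape[1]
scores = np.zeros(n_features)
counts = np.zeros(n_features)

for _ in range(30):                                    # random subspace ensemble
    subset = rng.choice(n_features, size=2, replace=False)
    for k in (2, 3, 4, 5):                             # multiple-k ensemble
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[:, subset])
        f_vals, _ = f_classif(X[:, subset], labels)    # how well each feature separates clusters
        scores[subset] += f_vals
        counts[subset] += 1

ranking = np.argsort(-(scores / np.maximum(counts, 1)))
print("Features ranked by importance:", ranking)
```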

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 314
587 From Creativity to Innovation: Tracking Rejected Ideas

Authors: Lisete Barlach, Guilherme Ary Plonski

Abstract:

Innovative ideas are not always synonymous with business opportunities. Any idea can be creative yet not be recognized as a potential project in which money, time, and other resources will be invested. Even in firms that promote and enhance innovation, there are two 'check-points': the first corresponding to the acknowledgment of the idea as creative and the second, its consideration as a business opportunity. The recognition of new business opportunities or new ideas involves cognitive and psychological frameworks which provide individuals with a basis for noticing connections between seemingly independent events or trends, as if they were 'connecting the dots'. It also involves prototypes (representing the most typical member of a certain category) functioning as 'templates' for this recognition. There is a general assumption that these kinds of evaluation processes develop through experience, explaining why expertise plays a central role in this process: the more experienced a professional, the easier it is for him or her to identify new opportunities in business. But, paradoxically, an increase in expertise can lead to inflexibility of thought due to the automation of procedures. Besides this, other cognitive biases can also be present, because new ideas or business opportunities generally depend on heuristics rather than on established algorithms. The paper presents a literature review of the Einstellung effect by tracking famous cases of rejected ideas, extracted from historical records. It also presents the results of empirical research, with data on rejected ideas gathered from two different environments: projects rejected during the first semester of 2017 at a large incubator center in Sao Paulo and ideas proposed by employees that were rejected by a well-known business company at its Brazilian headquarters. There is an implicit assumption that the Einstellung effect tends to be increasingly present in contemporary settings, due to time pressure on decision-making and the idea generation process. The analysis discusses desirability, viability, and feasibility as elements that affect decision-making.

Keywords: cognitive biases, Einstellung effect, recognition of business opportunities, rejected ideas

Procedia PDF Downloads 181
586 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, numerical algorithms can be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance; statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modelled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modelled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb failure criterion. Based on this analysis, it is possible to explicitly derive the failure probability when the friction angle is treated as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in light of risk management.
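
The comparison can be illustrated with a Monte Carlo sketch under strong simplifying assumptions: an infinite, dry, cohesionless slope (FS = tan φ / tan β), a hypothetical slope angle, and invented distribution parameters, with the Dagum distribution taken as SciPy's Burr Type III. None of these numbers come from the paper.

```python
# Illustrative Monte Carlo comparison: failure probability with Normal vs Dagum friction angle.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
slope_angle = np.radians(25.0)            # assumed slope inclination beta
n_sim = 200_000

# Friction angle (degrees): symmetric Normal fit vs a skewed Dagum (Burr Type III) fit.
phi_normal = stats.norm(loc=29.0, scale=3.0).rvs(n_sim, random_state=rng)
phi_dagum = stats.burr(c=8.0, d=0.9, loc=0.0, scale=30.0).rvs(n_sim, random_state=rng)

def failure_probability(phi_deg):
    fs = np.tan(np.radians(phi_deg)) / np.tan(slope_angle)   # Mohr-Coulomb, c' = 0, dry infinite slope
    return np.mean(fs < 1.0)                                  # P(FS < 1)

print("P_f (Normal):", failure_probability(phi_normal))
print("P_f (Dagum) :", failure_probability(phi_dagum))
```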

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 321
585 Conservation Planning of Paris Polyphylla Smith, an Important Medicinal Herb of the Indian Himalayan Region Using Predictive Distribution Modelling

Authors: Mohd Tariq, Shyamal K. Nandi, Indra D. Bhatt

Abstract:

Paris polyphylla Smith (family: Liliaceae; English name: love apple; local name: Satuwa) is an important folk medicinal herb of the Indian subcontinent, being a source of a number of bioactive compounds for drug formulation. The rhizomes are widely used as an anthelmintic, antispasmodic, digestive stomachic, expectorant, vermifuge, antimicrobial, anti-inflammatory, anti-fertility agent and sedative, and against heart and vascular maladies. In view of this, the species is constantly being removed from nature for trade and various pharmaceutical purposes; as a result, its availability in its natural habitat is decreasing. In this context, it is pertinent to conserve this species and reintroduce it into its natural habitat. Predictive distribution modelling of this species was therefore performed in the Western Himalayan Region. One such method is ecological niche modelling, also popularly known as species distribution modelling, which uses computer algorithms to generate predictive maps of species distributions in geographic space by correlating point distributional data with a set of environmental raster data. In the case of P. polyphylla, such modelling helps to identify potential distribution zones and to guide artificial introductions, the selection of conservation sites, and the conservation and management of its native habitat. Among the different districts of Uttarakhand (28°05ˈ-31°25ˈ N and 77°45ˈ-81°45ˈ E), 'Maximum Entropy' (Maxent) modelling predicted a wide potential distribution of P. polyphylla Smith in Uttarkashi, Rudraprayag, Chamoli, Pauri Garhwal and some parts of Bageshwar. The distribution of P. polyphylla is mainly governed by Precipitation of the Driest Quarter and Mean Diurnal Range, with contributions of 27.08% and 18.99% respectively, which indicates that humidity (27%) and average temperature (19°C) might be suitable for better growth of Paris polyphylla.
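
A hedged sketch of the presence/background workflow follows, using a logistic-regression surrogate rather than Maxent itself; the predictor names follow the abstract, but every value below is synthetic.

```python
# Minimal presence/background species distribution modelling sketch (surrogate for Maxent).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic environmental values at presence points and at random background points.
presence = np.column_stack([rng.normal(120, 15, 80),    # precipitation of driest quarter (mm)
                            rng.normal(9, 1.2, 80)])     # mean diurnal range (deg C)
background = np.column_stack([rng.uniform(20, 250, 500),
                              rng.uniform(5, 16, 500)])

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Relative habitat-suitability score for one hypothetical grid cell.
cell = np.array([[110.0, 9.5]])
print("relative suitability:", model.predict_proba(cell)[0, 1])
```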

Keywords: biodiversity conservation, Indian Himalayan region, Paris polyphylla, predictive distribution modelling

Procedia PDF Downloads 313
584 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate condition-versus-age functions for each condition state cannot be directly derived from a Markov model for a given bridge element group, although such functions are of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM output, namely the Modified Weibull approach, which consists of a set of appropriate functions describing the percentage of bridge elements predicted to be in each condition state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov chain Monte Carlo (MCMC) simulation technique to quantify the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. Inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions accurately at the network level but also to capture model uncertainties within the given confidence intervals.
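
A minimal sketch of a Metropolis-Hastings step for Weibull shape and scale parameters is shown below; the synthetic transition-time data, flat positive priors, and proposal widths are assumptions, and the paper's Modified Weibull state functions are not reproduced here.

```python
# Metropolis-Hastings MCMC sketch for Weibull shape/scale of deterioration times.
import numpy as np

rng = np.random.default_rng(0)
times = rng.weibull(1.8, 200) * 25.0          # synthetic years-to-transition data

def log_post(shape, scale):
    if shape <= 0 or scale <= 0:
        return -np.inf                         # flat priors on the positive axis
    z = times / scale
    # Weibull log-likelihood: sum of log[(k/lam)(t/lam)^(k-1) exp(-(t/lam)^k)]
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

samples, cur = [], np.array([1.0, 20.0])
cur_lp = log_post(*cur)
for _ in range(20_000):
    prop = cur + rng.normal(0, [0.05, 0.5])    # random-walk proposal
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp            # accept
    samples.append(cur)

shape_hat, scale_hat = np.mean(samples[5000:], axis=0)   # posterior means after burn-in
print(f"posterior mean shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.1f}")
```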

Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 709
583 Solving LWE by Progressive Pumps and Its Optimization

Authors: Leizhang Wang, Baocang Wang

Abstract:

The General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps for solving SVP. Firstly, we use low-dimensional pump output bases to propose a predictor of the quality of high-dimensional pump output bases. Both theoretical analysis and experimental tests are performed to illustrate that solving LWE problems with the G6K default SVP solving strategy (Workout) is more computationally expensive than with lattice reduction algorithms (e.g., BKZ 2.0, progressive BKZ, and pump-and-jump BKZ) that use sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction at a lower computational cost. Thirdly, we combine the optimized Workout and the pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU Darmstadt LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the G6K default Workout. Fourthly, we consider a combined two-stage LWE solving strategy (preprocessing by BKZ-β followed by a big pump). Both stages use the dimensions-for-free technique to give new theoretical security estimations for several LWE-based cryptographic schemes. These estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve LWE-solving efficiency, by 15% and 57% respectively. Finally, experiments are implemented to examine the effects of our strategies on normal form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
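
As a back-of-envelope illustration of the core-SVP cost model and the dimensions-for-free heuristic mentioned above (not the paper's estimator), one can relate a BKZ blocksize to a bit-security figure; the blocksize value below is a placeholder.

```python
# Core-SVP and dimensions-for-free back-of-envelope calculation (illustrative only).
import math

def core_svp_bits(beta, classical=True):
    # core-SVP model: one sieving call in dimension beta costs roughly 2^(c*beta)
    c = 0.292 if classical else 0.265
    return c * beta

def dims_for_free(beta):
    # Ducas' heuristic: sieving can run in dimension beta - d,
    # with d ~ beta * ln(4/3) / ln(beta / (2*pi*e))
    return beta * math.log(4 / 3) / math.log(beta / (2 * math.pi * math.e))

beta = 400                                    # hypothetical BKZ blocksize
d = dims_for_free(beta)
print(f"naive core-SVP cost: {core_svp_bits(beta):.0f} bits")
print(f"with ~{d:.0f} dimensions for free: {core_svp_bits(beta - d):.0f} bits")
```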

Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free

Procedia PDF Downloads 47
582 Effect of Concentration Level and Moisture Content on the Detection and Quantification of Nickel in Clay Agricultural Soil in Lebanon

Authors: Layan Moussa, Darine Salam, Samir Mustapha

Abstract:

Heavy metal contamination of agricultural soils in Lebanon poses serious environmental and health problems. Intensive efforts are being made to improve existing methods for quantifying heavy metals in contaminated environments, since conventional detection techniques have been shown to be time-consuming, tedious, and costly. The application of hyperspectral remote sensing in this field is possible and promising. However, factors affecting the efficiency of hyperspectral imaging in detecting and quantifying heavy metals in agricultural soils have not been thoroughly studied. This study assesses the use of hyperspectral imaging for the detection of Ni in agricultural clay soil collected from the Bekaa Valley, a major agricultural area in Lebanon, under different contamination levels and soil moisture contents. Soil samples were contaminated with Ni at concentrations ranging from 150 mg/kg to 4,000 mg/kg. Separately, soil with background contamination was subjected to moisture levels varying from 5% to 75%. Hyperspectral imaging was used to detect and quantify Ni contamination in the soil at the different contamination levels and moisture contents. IBM SPSS statistical software was used to develop models that predict the concentration of Ni and the moisture content in agricultural soil; the models were constructed using linear regression algorithms. The spectral curves obtained showed an inverse correlation of reflectance with both Ni concentration and moisture content. The models developed achieved high predicted R² values of 0.763 for Ni concentration and 0.854 for moisture content, and indicated that Ni presence was best expressed near 2,200 nm while moisture was expressed at 1,900 nm. The results of this study help define the potential of the hyperspectral imaging (HSI) technique as a reliable and cost-effective alternative for detecting heavy metal pollution in contaminated soils and predicting soil moisture.
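
A minimal sketch of the band-wise linear regression idea follows, using a synthetic inverse relation between reflectance near 2,200 nm and Ni concentration; the study itself fit measured spectra in IBM SPSS, so all values and coefficients below are assumptions.

```python
# Sketch: predicting Ni concentration from reflectance at an absorption-sensitive band (~2200 nm).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
ni_mg_kg = np.linspace(150, 4000, 40)                          # spiked Ni levels (mg/kg)
refl_2200 = 0.55 - 6e-5 * ni_mg_kg + rng.normal(0, 0.01, 40)   # inverse reflectance relation + noise

model = LinearRegression().fit(refl_2200.reshape(-1, 1), ni_mg_kg)
pred = model.predict(refl_2200.reshape(-1, 1))
print("R^2:", round(r2_score(ni_mg_kg, pred), 3))
print("predicted Ni at reflectance 0.45:", round(model.predict([[0.45]])[0], 1), "mg/kg")
```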

Keywords: heavy metals, hyperspectral imaging, moisture content, soil contamination

Procedia PDF Downloads 79
581 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method for Convolutional Neural Networks (CNNs) based on the choice of loss function and optimizer. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance differences. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used across all training and testing for each model; this way, we can compare performance on unseen data, in terms of generalization, across all models. The best CNN (AlexNet) with the appropriate loss function and optimizer yields a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to generalization performance, the paper also reports each model's accuracy together with its parameter count and mean average error rate, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to the final model.
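
A minimal PyTorch sketch of the loss-function/optimizer sweep follows; the tiny network, the synthetic batch, and the hyper-parameters are placeholders rather than the paper's AlexNet/VGG/ResNet setup, and the multi-class margin loss stands in for hinge loss.

```python
# Sketch: sweeping loss functions and optimizers for a small live-vs-spoof CNN.
import torch
import torch.nn as nn

class TinySpoofNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.MaxPool2d(2), nn.Flatten())
        self.classifier = nn.Linear(8 * 32 * 32, 2)   # live / spoof logits

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(16, 1, 64, 64)                 # synthetic fingerprint patches
y = torch.randint(0, 2, (16,))                 # synthetic labels

losses = {"cross_entropy": nn.CrossEntropyLoss(),
          "hinge": nn.MultiMarginLoss()}       # multi-class hinge stand-in
optims = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
          "rmsprop": torch.optim.RMSprop, "nadam": torch.optim.NAdam}

for loss_name, criterion in losses.items():
    for opt_name, opt_cls in optims.items():
        model = TinySpoofNet()
        optimizer = opt_cls(model.parameters(), lr=1e-3)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()                       # one illustrative update per combination
        print(f"{loss_name:14s} {opt_name:8s} loss={loss.item():.3f}")
```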

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 117