Search results for: algorithms and data structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 31403

30983 An ANN Approach for Detection and Localization of Fatigue Damage in Aircraft Structures

Authors: Reza Rezaeipour Honarmandzad

Abstract:

In this paper, we propose an ANN approach for the detection and localization of fatigue damage in aircraft structures. We used a network of piezoelectric transducers for Lamb-wave measurements in order to calculate damage indices. Data gathered by the sensors was given to a neural network classifier. A set of neural network electors of different architectures cooperates to achieve consensus concerning the state of each monitored path. Sensed signal variations in the region of interest (ROI), detected by the networks at each path, were used to assess the state of the structure, to localize detected damage, and to filter out ambient changes. The classifier has been extensively tested on large data sets acquired in tests of specimens with artificially introduced notches, as well as on the results of numerous fatigue experiments. The effect of the classifier structure and of the training data on the results was evaluated.
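
A minimal sketch of the consensus idea, assuming synthetic damage indices, illustrative layer sizes, and a simple majority-vote rule (none of which are taken from the paper):

```python
# Minimal sketch of an ensemble of neural-network "electors" voting on
# per-path damage indices. Architectures, data shapes, and the simple
# majority-vote consensus are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 200 paths x 8 damage indices (synthetic)
y = (X[:, :2].sum(axis=1) > 0).astype(int)    # 1 = damaged, 0 = intact (synthetic label)

# Electors of different architecture, cooperating toward a consensus.
electors = [MLPClassifier(hidden_layer_sizes=h, max_iter=2000, random_state=i)
            for i, h in enumerate([(16,), (32, 16), (8, 8, 8)])]
for net in electors:
    net.fit(X, y)

votes = np.stack([net.predict(X) for net in electors])
consensus = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote per path
print("agreement with labels:", (consensus == y).mean())
```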

Keywords: ANN, fatigue damage, aircraft structures, piezoelectric transducers, Lamb-wave measurements

Procedia PDF Downloads 407
30982 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree-based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
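
The paper's exact WIGR definition is not reproduced here; the sketch below only illustrates the underlying idea of an information-gain-ratio split score penalized by ordinal error severity. The penalty form and its combination with the gain ratio are assumptions:

```python
# Sketch of an ordinal-aware split score: information gain ratio penalized by
# the expected ordinal error severity within each child node. The exact WIGR
# definition in the paper may differ; this only illustrates the idea.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def severity(labels):
    # mean absolute ordinal distance from the node's majority class
    majority = np.bincount(labels).argmax()
    return np.abs(labels - majority).mean()

def weighted_gain_ratio(y, left_mask):
    y = np.asarray(y)
    parts = [y[left_mask], y[~left_mask]]
    n = len(y)
    gain = entropy(y) - sum(len(p) / n * entropy(p) for p in parts if len(p))
    # penalize splits whose children mix ordinally distant classes
    sev = sum(len(p) / n * severity(p) for p in parts if len(p))
    split_info = entropy(left_mask.astype(int)) or 1.0
    return (gain / split_info) / (1.0 + sev)

y = np.array([0, 0, 1, 1, 2, 2, 2])   # ordinal target (e.g., grade bands)
print(weighted_gain_ratio(y, np.array([True, True, True, False, False, False, False])))
```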

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 94
30981 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with direct clinical evidence is a long-sought topic in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. This deep learning model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member's encounter (date of service) sequence. The model attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years' medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution from individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Two examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts member A has a high risk of CKD3 with the following well-contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low kidney-function 'Glomerular filtration rate' tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model didn't identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claim data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
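
The building block repeated at each of the three attention levels is an attention-weighted pooling. A minimal numpy sketch of one such block follows, with illustrative dimensions and a dot-product scoring function that may differ from the paper's parameterization:

```python
# Minimal sketch of the attention pooling reused at each hierarchy level:
# code -> encounter -> code type. Dimensions and the scoring function are
# illustrative assumptions, not the paper's exact parameterization.
import numpy as np

def attention_pool(H, w):
    """H: (n_items, d) item embeddings; w: (d,) attention query.
    Returns the pooled (d,) vector and the attention weights."""
    scores = H @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # softmax over items
    return alpha @ H, alpha

rng = np.random.default_rng(1)
codes = rng.normal(size=(5, 16))               # 5 medical codes in one encounter
encounter_vec, code_attn = attention_pool(codes, rng.normal(size=16))
# code_attn indicates which codes drove this encounter's representation,
# which is how per-event contributions (e.g., creatinine tests) are surfaced.
print(code_attn.round(3))
```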

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 84
30980 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

Authors: Akhilesh G. Naik, Dipankar Pal

Abstract:

In today's scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms like the Wallace Tree, Radix-4 Booth, Radix-8 Booth, and Dadda architectures have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware/architectures. Recently, the Vedic algorithm has proven to be mathematically efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited for pipelined architectures and can also be combined with traditional architectures and algorithms to enhance its ability even further. In this paper, we also establish that for complex applications on DSP and other microcontroller architectures, using the Vedic approach for multiplication proves to be the most efficient available option.
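
For readers unfamiliar with the Vedic (Urdhva Tiryagbhyam) scheme, the following sketch shows its vertically-and-crosswise partial-product pattern in software; the serial carry handling is a simplification of what pipelined hardware would resolve in parallel:

```python
# Sketch of the Urdhva Tiryagbhyam ("vertically and crosswise") pattern behind
# Vedic multipliers. Each column sum is independent, which is what makes the
# scheme friendly to parallel/pipelined hardware; carries here are resolved
# serially only for simplicity.
def vedic_multiply(a_digits, b_digits, base=10):
    n = len(a_digits)                      # least-significant digit first
    assert len(b_digits) == n
    cols = [0] * (2 * n - 1)
    for k in range(2 * n - 1):             # crosswise column sums
        for i in range(max(0, k - n + 1), min(k, n - 1) + 1):
            cols[k] += a_digits[i] * b_digits[k - i]
    result, carry = [], 0
    for c in cols:                         # serial carry resolution
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result

# 43 x 12 = 516, digits least-significant first
print(vedic_multiply([3, 4], [2, 1]))      # -> [6, 1, 5]
```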

Keywords: Wallace Tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba (SSK), Looped Karatsuba (LK)

Procedia PDF Downloads 162
30979 Numerical Analysis of the Response of Thin Flexible Membranes to Free Surface Water Flow

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This work is part of a major research project concerning the design of a light, temporarily installable textile flood control structure. The motivation for this work is the great need for applying light structures for the protection of coastal areas from the detrimental effects of rapid water runoff. The prime objective of the study is the numerical analysis of the interaction between free-surface water flow and slender, pliable structures, which plays a key role in the safety performance of the intended system. First, the behavior of the down-scaled membrane is examined under hydrostatic pressure with the Abaqus explicit solver, part of the commercially available finite-element-based SIMULIA software. Then the procedure to achieve a stable and convergent solution for strongly coupled media including fluids and structures is explained. A partitioned strategy is imposed so that both structures and fluids are discretized and solved with appropriate formulations and solvers. In this regard, the finite element method is again selected to analyze the structural domain. Moreover, computational fluid dynamics algorithms are introduced for solutions in the flow domain by means of the commercial package Star CCM+. Likewise, the SIMULIA co-simulation engine and an implicit coupling algorithm, which are the available communication tools in the commercial package Star CCM+, enable powerful transmission of data between the two applied codes. This approach is discussed for two different cases and compared with available experimental records. In one case, the down-scaled membrane interacts with an open-channel flow whose velocity increases with time. The second case illustrates how the full-scale flexible flood barrier behaves when massive flotsam is accelerated towards it.

Keywords: finite element formulation, finite volume algorithm, fluid-structure interaction, light pliable structure, VOF multiphase model

Procedia PDF Downloads 173
30978 Improved Whale Algorithm Based on Information Entropy and Its Application in Truss Structure Optimization Design

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Given the limitations of the original whale optimization algorithm (WOA), namely its tendency to fall into local optima and its low convergence accuracy in truss structure optimization problems, an improved whale optimization algorithm (SWAO) based on information entropy is proposed on top of the fundamental whale algorithm. Information entropy is itself a measure of uncertainty; it is used here to control the range of whale searches in path selection. It can overcome the shortcomings of the basic WOA and improve the global convergence speed of the algorithm. Taking the truss structure as the optimization research object, the mathematical model of truss structure optimization is established: the cross-sectional areas of the truss members are taken as the design variables, and the objective function is the weight of the truss structure. The improved whale optimization algorithm (SWAO) is then used for the optimization design, which provides a new idea and means for its application in the optimization design of large and complex engineering structures.
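
Since the abstract does not spell out how information entropy bounds the search range, the sketch below shows only one iteration of the basic WOA position update on which SWAO builds; the population size and variable bounds are illustrative:

```python
# Sketch of one iteration of the basic whale optimization algorithm (WOA)
# position update. The entropy-based control of the search range that defines
# SWAO is not specified here, so only the standard WOA mechanics are shown.
import numpy as np

rng = np.random.default_rng(0)

def woa_step(X, best, t, T, b=1.0):
    """X: (pop, dim) positions; best: (dim,) current best; t/T: iteration ratio."""
    a = 2.0 * (1 - t / T)                        # linearly decreasing coefficient
    Xn = np.empty_like(X)
    for i, x in enumerate(X):
        r, l = rng.random(x.shape), rng.uniform(-1, 1)
        A, C = 2 * a * r - a, 2 * rng.random(x.shape)
        if rng.random() < 0.5:
            # exploit around the best whale, or explore around a random whale
            ref = best if np.all(np.abs(A) < 1) else X[rng.integers(len(X))]
            Xn[i] = ref - A * np.abs(C * ref - x)
        else:
            D = np.abs(best - x)
            Xn[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best  # spiral
    return Xn

X = rng.uniform(0.5, 5.0, size=(20, 4))          # e.g., 4 truss cross-section areas
print(woa_step(X, X[0], t=1, T=100).shape)
```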

Keywords: information entropy, structural optimization, truss structure, whale algorithm

Procedia PDF Downloads 237
30977 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is a problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted by SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from poor solution efficiency; as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered in the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: the decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently on large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model alone, several greedy-based algorithms are proposed; they show lower performance values than the decomposed opt-opt algorithm but need very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effectual methods can be developed for using TWTA on the battlefield.
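
The greedy family of heuristics can be illustrated with a small sketch, assuming synthetic kill probabilities and target values; the marginal-gain rule shown is one plausible greedy criterion, not necessarily the paper's:

```python
# Sketch of a greedy heuristic for weapon-target assignment: each launcher is
# assigned, in turn, to the target whose expected surviving value it reduces
# the most. Kill probabilities and target values are synthetic.
import numpy as np

def greedy_wta(p_kill, value):
    """p_kill: (weapons, targets) kill probabilities; value: (targets,)."""
    survive = np.ones_like(value, dtype=float)   # P(target survives so far)
    assignment = []
    for w in range(p_kill.shape[0]):
        # marginal reduction in expected surviving value for each target
        gain = value * survive * p_kill[w]
        t = int(gain.argmax())
        assignment.append(t)
        survive[t] *= 1.0 - p_kill[w, t]
    return assignment, float((value * survive).sum())

rng = np.random.default_rng(3)
p = rng.uniform(0.2, 0.9, size=(4, 3))           # 4 launchers, 3 targets
print(greedy_wta(p, np.array([10.0, 6.0, 8.0])))
```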

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 327
30976 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and determine whether the website is put to correct use. Web logs are typically consulted only when a major attack or malfunction occurs, yet they contain a wealth of interesting information about the users of a system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy given the size and distribution of the logs and the importance of minor details in each entry. Web logs thus contain very important data about users and sites that has not been put to good use. Retrieving interesting information from logs gives an idea of what users need, allows grouping users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is used only in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs confidence and support of value 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
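
A compact sketch of the clustering-and-self-evaluation step, assuming sessions are encoded as URL-index count vectors (the encoding, parameter grid, and toy data are illustrative):

```python
# Sketch of the clustering step: sessions encoded as URL-index count vectors,
# clustered with DBSCAN, then self-evaluated with the silhouette coefficient
# to steer the next parameter choice. The feature encoding is an assumption.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# toy session vectors: counts of accesses to 6 indexed URLs per session
sessions = np.vstack([rng.poisson(lam, size=(40, 6))
                      for lam in ([4, 0, 0, 3, 0, 0], [0, 5, 2, 0, 0, 4])])

for eps in (1.0, 2.0, 3.0, 4.0, 5.0):          # iterative self-evaluation
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(sessions)
    if len(set(labels) - {-1}) >= 2:           # silhouette needs >= 2 clusters
        score = silhouette_score(sessions, labels)
        print("eps=%.1f silhouette=%.2f" % (eps, score))
```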

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 278
30975 Towards a Balancing Medical Database by Using the Least Mean Square Algorithm

Authors: Kamel Belammi, Houria Fatrim

Abstract:

An imbalanced data set, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms. There have been many attempts at dealing with the classification of imbalanced data sets. In medical diagnosis classification, we often face an imbalanced number of data samples between the classes, in which there are not enough samples in the rare classes. In this paper, we propose a learning method based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes the errors of different samples with different weights, together with some rules of thumb to determine those weights. After the balancing phase, we apply different classifiers (support vector machine (SVM), k-nearest neighbor (KNN), and multilayer neural networks (MNN)) to the balanced data set. We have also compared the results obtained before and after applying the balancing method.
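
A minimal sketch of the cost-sensitive LMS update, with inverse-class-frequency weighting as one plausible rule of thumb rather than the paper's exact rule:

```python
# Sketch of the cost-sensitive LMS update: errors on minority-class samples
# are penalized with a larger weight. The inverse-class-frequency weighting
# is an illustrative assumption, not necessarily the paper's rule.
import numpy as np

def cost_sensitive_lms(X, y, costs, mu=0.01, epochs=50):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i, c_i in zip(X, y, costs):
            e = y_i - w @ x_i                # prediction error
            w += mu * c_i * e * x_i          # cost-weighted LMS step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.0, -0.5, 0.2, 0.0]) > 1.2).astype(float)  # rare positive class
freq = np.where(y == 1, y.mean(), 1 - y.mean())
costs = 1.0 / freq                           # heavier penalty on the rare class
print(cost_sensitive_lms(X, y, costs).round(3))
```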

Keywords: multilayer neural networks, k-nearest neighbor, support vector machine, imbalanced medical data, least mean square algorithm, diabetes

Procedia PDF Downloads 519
30974 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar

Authors: Shaolin Allen Liao, Hual-Te Chien

Abstract:

Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, compared to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capturing of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator while the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency Banded (FB) signals: 1) the DC Low-Frequency Banded (LFB) signal; 2) the Fundamental Frequency Banded (FFB) signal; and 3) the Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulations have also been performed in Matlab with simulated THz-MIDR interferometric signals of various signal-to-noise ratios (SNR) to verify the algorithms.
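
As an illustration, a hedged sketch of the fitting step with SciPy's Levenberg-Marquardt solver; the signal model I(t) is an assumed stand-in for the paper's FB-signal combinations:

```python
# Sketch of recovering a phase from a single modulated intensity trace with
# Levenberg-Marquardt fitting. The signal model below is an illustrative
# stand-in for the paper's FB-signal combinations, not its actual formulas.
import numpy as np
from scipy.optimize import least_squares

def model(params, t, f_mod):
    A, B, phi, m = params                   # offset, amplitude, phase, mod depth
    return A + B * np.cos(phi + m * np.sin(2 * np.pi * f_mod * t))

rng = np.random.default_rng(0)
t, f_mod = np.linspace(0, 1e-3, 2000), 5e3
true = (1.0, 0.8, 0.7, 2.0)
I = model(true, t, f_mod) + 0.02 * rng.normal(size=t.size)   # noisy detector trace

fit = least_squares(lambda p: model(p, t, f_mod) - I,
                    x0=(1.0, 0.5, 0.1, 1.0), method="lm")
print("recovered phase: %.3f rad (true %.3f)" % (fit.x[2], true[2]))
```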

Keywords: algorithm, modulation, THz phase, THz interferometric Doppler radar

Procedia PDF Downloads 330
30973 Helping the Development of Public Policies with Knowledge of Criminal Data

Authors: Diego De Castro Rodrigues, Marcelo B. Nery, Sergio Adorno

Abstract:

The project aims to develop a framework for social data analysis, particularly by mobilizing criminal records and applying descriptive computational techniques, such as associative algorithms and the extraction of decision-tree rules, among others. The methods and instruments discussed in this work will enable the discovery of patterns, providing a guided means to identify similarities between recurring situations in the social sphere using descriptive techniques and data visualization. The study area has been defined as the city of São Paulo, with the structuring of social data as the central idea and a particular focus on the quality of the information. Given this, a set of tools will be validated, including the use of a database and tools for visualizing the results. Among the main deliverables related to products and the development of articles are the discoveries made during the research phase. The effectiveness and utility of the results will depend on studies involving real data, validated both by domain experts and by identifying and comparing the patterns found in this study with other phenomena described in the literature. The intention is to contribute to evidence-based understanding and decision-making in the social field.

Keywords: social data analysis, criminal records, computational techniques, data mining, big data

Procedia PDF Downloads 69
30972 Development of a Decision-Making Method by Using Machine Learning Algorithms in the Early Stage of School Building Design

Authors: Pegah Eshraghi, Zahra Sadat Zomorodian, Mohammad Tahsildoost

Abstract:

Over the past decade, energy consumption in educational buildings has steadily increased. The purpose of this research is to provide a method to quickly predict the energy consumption of buildings by evaluating zones separately and decomposing the building, eliminating geometric complexity at the early design stage. To produce this framework, machine learning algorithms such as support vector regression (SVR) and artificial neural networks (ANN) are used to predict energy consumption and thermal comfort metrics in a school building as a case study. The database consists of more than 55,000 samples in three climates of Iran. Cross-validation and unseen data have been used for validation. For a specific label, cooling energy, the prediction accuracy is at least 84% and 89% for SVR and ANN, respectively. The results show that the SVR performed much better than the ANN.
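
A sketch of the prediction-and-validation loop for one label, using synthetic features in place of the 55,000-sample simulation database; the hyperparameters are illustrative:

```python
# Sketch of the prediction-and-validation loop for one zone-level label
# (e.g., cooling energy): SVR with cross-validation, on synthetic features
# standing in for the study's simulation database.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 6))      # zone geometry/envelope parameters (synthetic)
y = 50 + 30 * X[:, 0] - 20 * X[:, 1] + 5 * rng.normal(size=1000)  # cooling energy

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```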

Keywords: early stage of design, energy, thermal comfort, validation, machine learning

Procedia PDF Downloads 72
30971 A Study on Golden Ratio (φ) and Its Implications on Seismic Design Using ETABS

Authors: Vishal A. S. Salelkar, Sumitra S. Kandolkar

Abstract:

The golden ratio (φ), or golden mean or golden section as it is often referred to, is a proportion or mean that is often used by architects while conceiving the aesthetics of a structure. The golden ratio (φ) is an irrational number that can be roughly rounded to 1.618 and is derived from the quadratic equation x² - x - 1 = 0. The use of the golden ratio can be observed throughout history, as far back as the ancient Egyptians, and it later peaked during the Greek golden age. The use of this design technique is still very much prevalent; at present, architects around the world prefer it as one of the primary techniques to decide aesthetics. In this study, an analysis has been performed to investigate whether the use of the golden ratio while planning a structure has any effect on the seismic behavior of the structure. The structure is modeled and analyzed in ETABS (by Computers and Structures, Inc.) for seismic requirements equivalent to Zone III (region: Goa, India) as per the Indian Standard code IS-1893. The results were compared to those of an identical structure modeled along the lines of a normal design philosophy, not using the golden ratio tools, and were then compared for story shear, story drift, and story displacement readings. An improvement in performance, although slight, was observed. Similar improvements were also observed in subsequent iterations performed using time-acceleration data of previous major earthquakes matched to Zone III as per IS-1893.
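
For reference, the ratio follows directly from the quadratic above; a two-line numerical check:

```python
# The golden ratio as the positive root of x^2 - x - 1 = 0.
import numpy as np

phi = (1 + np.sqrt(5)) / 2          # closed form
roots = np.roots([1, -1, -1])       # numeric roots of x^2 - x - 1
print(phi, roots[roots > 0][0])     # both ~1.6180339887
```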

Keywords: ETABS, golden ratio, seismic design, structural behavior

Procedia PDF Downloads 167
30970 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to go along with this trend: it is able to gather more and more computing ability by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of increasingly large amounts of data. Data parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unparalleled with respect to traditional algorithms.
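
The decomposition idea can be sketched as a fully vectorized (SIMD-style) all-pairs test over sets of simple axis-aligned boxes; the box data is synthetic:

```python
# Sketch of the decomposition idea: complex objects become sets of axis-aligned
# boxes, and all simple-pair overlap tests run as one vectorized (SIMD-style)
# operation instead of nested loops. The box data is synthetic.
import numpy as np

def aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b):
    """All-pairs overlap test between two box sets, fully vectorized.
    mins/maxs: (n, 3) and (m, 3) arrays; returns an (n, m) boolean matrix."""
    lo = np.maximum(mins_a[:, None, :], mins_b[None, :, :])
    hi = np.minimum(maxs_a[:, None, :], maxs_b[None, :, :])
    return np.all(lo <= hi, axis=-1)    # overlap required on every axis

rng = np.random.default_rng(0)
mins_a = rng.uniform(0, 10, size=(500, 3)); maxs_a = mins_a + 1.0
mins_b = rng.uniform(0, 10, size=(400, 3)); maxs_b = mins_b + 1.0
hits = aabb_overlaps(mins_a, maxs_a, mins_b, maxs_b)
print("colliding simple-object pairs:", int(hits.sum()))
```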

Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability

Procedia PDF Downloads 278
30969 Control of a Stewart Platform for Minimizing Impact Energy in Simulating Spacecraft Docking Operations

Authors: Leonardo Herrera, Shield B. Lin, Stephen J. Montgomery-Smith, Ziraguen O. Williams

Abstract:

Three control algorithms, Proportional-Integral-Derivative, Linear-Quadratic-Gaussian, and Linear-Quadratic-Gaussian with shift, were applied to the computer simulation of a one-directional dynamic model of a Stewart Platform. The goal was to compare the dynamic system responses under the three control algorithms and to minimize the impact energy when simulating spacecraft docking operations. Equations were derived for the control algorithms and for the input and output of the feedback control system. Using MATLAB, Simulink diagrams were created to represent the three control schemes, with a switch selector used for the convenience of changing among the different controllers. The simulation demonstrated that the controller using the Linear-Quadratic-Gaussian-with-shift algorithm resulted in the lowest impact energy.
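
As a flavor of the comparison, a sketch of the simplest of the three controllers (PID) on an assumed one-directional mass-damper model; the gains and plant constants are illustrative, not the paper's:

```python
# Sketch of one of the compared controllers (PID) on a one-directional
# mass-damper stand-in for the platform axis; gains and plant constants are
# illustrative assumptions.
m, c, dt = 10.0, 2.0, 1e-3                  # mass, damping, time step
kp, ki, kd = 400.0, 50.0, 40.0              # PID gains (assumed)
x, v, integ, prev_e = 0.1, 0.0, 0.0, 0.1    # start 0.1 m from the contact target

for _ in range(5000):
    e = 0.0 - x                             # drive position error to zero
    integ += e * dt
    u = kp * e + ki * integ + kd * (e - prev_e) / dt
    prev_e = e
    a = (u - c * v) / m
    v += a * dt
    x += v * dt

impact_energy = 0.5 * m * v**2              # kinetic energy at final approach
print("residual velocity %.4f m/s, energy %.6f J" % (v, impact_energy))
```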

Keywords: controller, Stewart platform, docking operation, spacecraft

Procedia PDF Downloads 34
30968 A Hybrid Distributed Algorithm for Multi-Objective Dynamic Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is suggested for the multi-objective dynamic flexible job shop scheduling problem. The proposed algorithm is high-level, in which several algorithms search the space on different machines simultaneously; it is also a hybrid algorithm that takes advantage of artificial intelligence, evolutionary, and optimization methods. Distribution is done at different levels, and new approaches are used in the design of the algorithm. The Apache Spark and Hadoop frameworks have been used for the distribution of the algorithm. The Pareto optimality approach is used for solving the multi-objective benchmarks. The suggested algorithm, which is able to solve large-size problems in short times, has been compared with the successful algorithms of the literature. The results prove the high speed and efficiency of the algorithm.
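
The Pareto-optimality step can be sketched independently of the distributed machinery; the objective values below are synthetic schedule scores (all minimized):

```python
# Sketch of the Pareto-optimality filter used to compare schedules under
# several objectives at once (e.g., makespan and total tardiness, minimized).
import numpy as np

def pareto_front(points):
    """points: (n, k) objective vectors, all minimized. Returns front indices."""
    n = len(points)
    keep = []
    for i in range(n):
        dominated = any(np.all(points[j] <= points[i]) and
                        np.any(points[j] < points[i]) for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep

schedules = np.array([[120, 30], [100, 45], [100, 40], [150, 10], [130, 12]])
print(pareto_front(schedules))   # indices of the non-dominated schedules
```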

Keywords: distributed algorithms, Apache Spark, Hadoop, flexible dynamic job shop scheduling, multi-objective optimization

Procedia PDF Downloads 342
30967 Glycoside Hydrolase Clan GH-A-like Structure Complete Evaluation

Authors: Narin Salehiyan

Abstract:

The three iodothyronine selenodeiodinases catalyze the initiation and termination of thyroid hormone effects in vertebrates. Structural studies of these proteins have been hindered by their integral membrane nature and by the inefficient eukaryote-specific pathway for selenoprotein synthesis. Hydrophobic cluster analysis, used in combination with Position-Specific Iterated BLAST, reveals that their extramembrane portion belongs to the thioredoxin-fold superfamily, for which experimental structural data exist. Furthermore, a large deiodinase region embedded within the thioredoxin fold shows strong similarities to the active site of iduronidase, a member of the clan GH-A fold of glycoside hydrolases. This model can explain a number of results from previous mutagenesis studies and provides new, verifiable insights into the structural and functional properties of these enzymes.

Keywords: glycoside, hydrolase, GH-A-like structure, catalyze

Procedia PDF Downloads 58
30966 Clustering of Extremes in Financial Returns: A Comparison between Developed and Emerging Markets

Authors: Sara Ali Alokley, Mansour Saleh Albarrak

Abstract:

This paper investigates the dependency, or clustering, of extremes in financial returns data by estimating the extremal index value θ ∈ [0,1]; the smaller the value of θ, the more clustering we have. Here we apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values. We compare the dependency structure of extremes in developed and emerging markets, using the financial returns of the stock market indices in the developed markets of the US, UK, France, Germany, and Japan and the emerging markets of Brazil, Russia, India, China, and Saudi Arabia. We expect that more clustering occurs in the emerging markets. This study will help to understand the dependency structure of financial returns data.
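
A sketch of the Ferro and Segers (2003) intervals estimator on a synthetic heavy-tailed series standing in for a returns index:

```python
# Sketch of the Ferro and Segers (2003) intervals estimator for the extremal
# index theta, applied to one series at a given threshold. The synthetic
# heavy-tailed data is a stand-in for actual index returns.
import numpy as np

def extremal_index(returns, threshold):
    exceed = np.flatnonzero(returns > threshold)
    if len(exceed) < 2:
        return np.nan
    T = np.diff(exceed)                        # interexceedance times
    if T.max() <= 2:
        theta = 2 * T.sum() ** 2 / (len(T) * (T ** 2).sum())
    else:
        theta = (2 * ((T - 1).sum()) ** 2 /
                 (len(T) * ((T - 1) * (T - 2)).sum()))
    return min(1.0, theta)

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=5000)            # iid heavy-tailed stand-in
u = np.quantile(r, 0.95)
print("theta_hat: %.2f" % extremal_index(r, u))  # near 1 => little clustering
```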

Keywords: clustering, extremes, returns, dependency, extremal index

Procedia PDF Downloads 390
30965 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and support massive data sets. Rapidly increasing data sizes and big data make the analysis of such data one of the most important issues today. MapReduce is used to analyze data and extract helpful information using two simple functions, map and reduce, which are the only parts written by the programmer, and it provides load balancing, fault tolerance, and high scalability. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way MapReduce join algorithms, the semi-join and the per-split semi-join, and proposes a new algorithm, the hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and applies the join using a hash table rather than using a map function to match the join key with the other data table in the second phase. Using hash tables does not significantly affect memory size, because we only save the matched records from the second table. Our experimental results show that the hash semi-join algorithm has higher performance than the two other algorithms as the data size increases from 10 million records to 500 million, with running times increasing according to the number of joined records between the two tables.
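
A minimal single-machine sketch of the hash semi-join idea (the record layout and key positions are assumptions); in the actual MapReduce setting, the build and probe sides would be distributed:

```python
# Sketch of the hash semi-join idea: build a hash table of join keys from the
# small table, filter the big table's records early, and emit matches from the
# hash table instead of re-matching in a map phase. Record layout is assumed.
def hash_semi_join(big_table, small_table, key_big=0, key_small=0):
    # build side: hash table keyed on the join attribute
    lookup = {}
    for rec in small_table:
        lookup.setdefault(rec[key_small], []).append(rec)
    # probe side: unused big-table records are discarded as early as possible
    for rec in big_table:
        for match in lookup.get(rec[key_big], ()):
            yield rec + match[1:]             # keep only matched records

orders = [(1, "book"), (2, "pen"), (3, "lamp"), (2, "ink")]
customers = [(2, "alice"), (3, "bob")]        # the smaller relation
print(list(hash_semi_join(orders, customers)))
# -> [(2, 'pen', 'alice'), (3, 'lamp', 'bob'), (2, 'ink', 'alice')]
```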

Keywords: MapReduce, Hadoop, semi-join, two-way join

Procedia PDF Downloads 501
30964 Enhancing Precision Agriculture through Object Detection Algorithms: A Study of YOLOv5 and YOLOv8 in Detecting Armillaria spp.

Authors: Christos Chaschatzis, Chrysoula Karaiskou, Pantelis Angelidis, Sotirios K. Goudos, Igor Kotsiuba, Panagiotis Sarigiannidis

Abstract:

Over the past few decades, the rapid growth of the global population has led to the need to increase agricultural production and improve the quality of agricultural goods. There is a growing focus on environmentally eco-friendly solutions, sustainable production, and biologically minimally fertilized products in contemporary society. Precision agriculture has the potential to incorporate a wide range of innovative solutions with the development of machine learning algorithms. YOLOv5 and YOLOv8 are two of the most advanced object detection algorithms capable of accurately recognizing objects in real time. Detecting tree diseases is crucial for improving the food production rate and ensuring sustainability. This research aims to evaluate the efficacy of YOLOv5 and YOLOv8 in detecting the symptoms of Armillaria spp. in sweet cherry trees and determining their health status, with the goal of enhancing the robustness of precision agriculture. Additionally, this study will explore Computer Vision (CV) techniques with machine learning algorithms to improve the detection process’s efficiency.

Keywords: Armillaria spp., machine learning, precision agriculture, smart farming, sweet cherry trees, YOLOv5, YOLOv8

Procedia PDF Downloads 102
30963 Diffusion Adaptation Strategies for Distributed Estimation Based on the Family of Affine Projection Algorithms

Authors: Mohammad Shams Esfand Abadi, Mohammad Ranjbar, Reza Ebrahimpour

Abstract:

This work addresses the distributed estimation problem in a diffusion network based on the adapt-then-combine (ATC) and combine-then-adapt (CTA) selective partial update normalized least mean squares (SPU-NLMS) algorithms. We also extend this approach to the dynamic selection affine projection algorithm (DS-APA), and ATC-DS-APA and CTA-DS-APA are established. The purpose of the ATC-SPU-NLMS and CTA-SPU-NLMS algorithms is to reduce computational complexity by updating only selected blocks of weight coefficients at every iteration. In CTA-DS-APA and ATC-DS-APA, the number of input vectors is selected dynamically. Diffusion cooperation strategies have been shown to provide good performance based on these algorithms. The good performance of the introduced algorithms is illustrated with various experimental results.
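
One SPU-NLMS step can be sketched as follows, simplifying the paper's block-wise selection to single coefficients; the selection rule keeps the M largest-magnitude regressor entries:

```python
# Sketch of one selective-partial-update NLMS step: only the M coefficients
# paired with the largest-magnitude regressor entries are updated, cutting
# per-iteration cost. Block structure is simplified to single coefficients.
import numpy as np

def spu_nlms_step(w, x, d, mu=0.5, M=4, eps=1e-8):
    e = d - w @ x                               # a priori error
    sel = np.argsort(np.abs(x))[-M:]            # coefficients chosen for update
    w = w.copy()
    w[sel] += mu * e * x[sel] / (x[sel] @ x[sel] + eps)
    return w, e

rng = np.random.default_rng(0)
w_true, w = rng.normal(size=16), np.zeros(16)
for _ in range(2000):
    x = rng.normal(size=16)
    d = w_true @ x + 0.01 * rng.normal()        # noisy desired signal
    w, _ = spu_nlms_step(w, x, d)
print("coefficient error: %.3f" % np.linalg.norm(w - w_true))
```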

Keywords: selective partial update, affine projection, dynamic selection, diffusion, adaptive distributed networks

Procedia PDF Downloads 691
30962 Electronic Structure Calculation of AsSiTeB/SiAsBTe Nanostructures Using Density Functional Theory

Authors: Ankit Kargeti, Ravikant Shrivastav, Tabish Rasheed

Abstract:

The electronic structure calculation for nanoclusters of the AsSiTeB/SiAsBTe quaternary semiconductor alloy belonging to the Group III-V elements was performed. The motivation for this research work was to obtain accurate electronic and geometric data for small nanoclusters of AsSiTeB/SiAsBTe in the gaseous form. The two clusters, one in the linear form and the other in the bent form, were studied within the framework of Density Functional Theory (DFT) using the B3LYP functional and the LANL2DZ basis set with the software package Gaussian 16. We discuss the optimized energy, the frontier orbital energy gap in terms of HOMO-LUMO, the dipole moment, ionization potential, electron affinity, binding energy, embedding energy, and density of states (DoS) spectrum for both structures. The important findings for the predicted nanostructures are that they have wide band gap energies: the linear structure has a band gap energy (Eg) of 2.375 eV, and the bent structure has an Eg of 2.778 eV. Therefore, these structures can be utilized as wide band gap semiconductors. They also have high electron affinity values of 4.259 eV for the linear structure and 3.387 eV for the bent structure, which shows that the electron acceptor capability is high for both forms. The widely known application of these compounds is in light-emitting diodes due to their wide band gap nature.

Keywords: density functional theory, DFT, nanostructures, HOMO-LUMO, density of states

Procedia PDF Downloads 103
30961 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential components of non-invasive measuring methods. Despite the fact that it has a low spatial resolution (meaning it can only detect when a group of neurons fires at the same time), it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. The recordings of EEGs include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, there are some sources of noise that may affect the reliability of EEG signals, since it is a non-invasive method. For instance, noise from the EEG equipment, the leads, and signals coming from the subject, such as the activity of the heart or muscle movements, affect the signals detected by the electrodes of the EEG. However, new techniques have been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for BCI applications. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological diseases that require highly precise data, non-invasive BCIs, such as those based on EEG, are used in many cases to improve disabled people's lives or even to ease people's lives by helping them with basic tasks. For example, EEG is used to detect an oncoming seizure in epilepsy patients before it occurs, so that a BCI device can then help prevent the seizure. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data to ease people's lives as more BCI techniques are developed in the future.

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 62
30960 Improving Security in Healthcare Applications Using Federated Learning System With Blockchain Technology

Authors: Aofan Liu, Qianqian Tan, Burra Venkata Durga Kumar

Abstract:

Data security is of the utmost importance in the healthcare area, as sensitive patient information is constantly sent around and analyzed by many different parties. The use of federated learning, which enables data to be evaluated locally on devices rather than being transferred to a central server, has emerged as a potential solution for protecting the privacy of user information. However, federated learning alone might not be adequate to protect against data breaches and unauthorized access. In this context, the application of blockchain technology could provide the system with extra protection. This study proposes a distributed federated learning system built on blockchain technology in order to enhance security in healthcare. This makes it possible for a wide variety of healthcare providers to work together on data analysis without raising concerns about the confidentiality of the data. The technical aspects of the system, including the design and implementation of distributed learning algorithms, consensus mechanisms, and smart contracts, are also investigated as part of this process. The proposed technique is a workable alternative that addresses concerns about the security of healthcare data while also fostering collaborative research and the exchange of data.

Keywords: data privacy, distributed system, federated learning, machine learning

Procedia PDF Downloads 107
30959 Scene Classification Using Hierarchy Neural Network, Directed Acyclic Graph Structure, and Label Relations

Authors: Po-Jen Chen, Jian-Jiun Ding, Hung-Wei Hsu, Chien-Yao Wang, Jia-Ching Wang

Abstract:

A more accurate scene classification algorithm using label relations and a hierarchy neural network was developed in this work. Many classification algorithms assume that labels are mutually exclusive. This assumption is true in some specific problems; however, for scene classification, the assumption is not reasonable. Because there are a variety of objects within a photo image, it is more practical to assign multiple labels to an image. In this paper, two label relations, the exclusive relation and the hierarchical relation, were adopted in the classification process to achieve more accurate multiple-label classification results. Moreover, the hierarchy neural network (hierarchy NN) is applied to classify the image, and the directed acyclic graph structure is used for predicting a more reasonable result that obeys the exclusive and hierarchical relations. Simulations show that, with these techniques, a much more accurate scene classification result can be achieved.
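
A toy sketch of applying the two label relations to raw per-label scores; the label set, DAG edges, and the renormalization rule for exclusive labels are illustrative assumptions:

```python
# Sketch of applying the two label relations to raw per-label scores:
# hierarchical relation (a child's probability cannot exceed its parent's in
# the DAG) and exclusive relation (mutually exclusive labels compete via
# softmax). The label set and edges are illustrative, not the paper's.
import numpy as np

def apply_relations(p, dag_edges, exclusive_groups):
    p = p.copy()
    for parent, child in dag_edges:             # enforce hierarchy: child <= parent
        p[child] = min(p[child], p[parent])
    for group in exclusive_groups:              # exclusive labels: renormalize
        z = np.exp(p[group])
        p[group] = z / z.sum()
    return p

labels = {"outdoor": 0, "beach": 1, "mountain": 2, "indoor": 3}
p = np.array([0.6, 0.9, 0.2, 0.5])              # raw sigmoid outputs
edges = [(labels["outdoor"], labels["beach"]),
         (labels["outdoor"], labels["mountain"])]
exclusive = [np.array([labels["outdoor"], labels["indoor"]])]
print(apply_relations(p, edges, exclusive).round(3))
```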

Keywords: convolutional neural network, label relation, hierarchy neural network, scene classification

Procedia PDF Downloads 448
30958 Research on Architectural Steel Structure Design Based on BIM

Authors: Tianyu Gao

Abstract:

Digital architectures use computer-aided design, programming, simulation, and imaging to create virtual forms and physical structures. Today's customers want to know more about their buildings: they want an automatic thermostat that learns their behavior and contacts them, and doors and windows they can open with a mobile app. The architectural display form is therefore ever more closely related to the customer's experience. Based on the purpose of building informationization, this paper studies steel structure design based on BIM. Taking the Zigan office building in Hangzhou as an example, the work is divided into four parts, namely the digital design model of the steel structure, the node analysis of the steel structure, the digital production of the steel structure, and its construction. Through the application of BIM software, the architectural design can be carried out synergistically, and the building components can be informationized. Not only can feedback on the architectural design be obtained at an early stage, but the stability of the construction can also be guaranteed. In this way, the monitoring of the entire life cycle of the building and the meeting of customer needs can be realized.

Keywords: digital architectures, BIM, steel structure, architectural design

Procedia PDF Downloads 181
30957 Nano-Texturing of Single Crystalline Silicon via Cu-Catalyzed Chemical Etching

Authors: A. A. Abaker Omer, H. B. Mohamed Balh, W. Liu, A. Abas, J. Yu, S. Li, W. Ma, W. El Kolaly, Y. Y. Ahmed Abuker

Abstract:

We have discovered an important technical solution that could open new approaches in wet silicon etching, especially in the production of photovoltaic cells. Owing to its superior light-trapping and structural properties, the inverted pyramid structure outperforms conventional pyramid textures and black silicon, but until now it could only be accomplished with more advanced lithography, laser processing, etc. Importantly, our data demonstrate the feasibility of an inverted pyramidal structure on silicon via one-step Cu-catalyzed chemical etching (CCCE) in Cu(NO3)2/HF/H2O2/H2O solutions. The effects of etching time and reaction temperature on surface geometry and light trapping were systematically investigated. The conclusions show that the inverted pyramid structure has an ultra-low reflectivity of ~4.2% in the 300-1000 nm wavelength range and that the introduction of Cu particles can significantly accelerate the dissolution of the silicon wafer. The etching and inverted-pyramid formation mechanisms are discussed. The inverted pyramid structure, with its outstanding anti-reflectivity, has useful applications in the manufacture of industry-compatible solar cells and can have a significant impact on the photovoltaic industry.

Keywords: Cu-catalyzed chemical etching, inverted pyramid nanostructured, reflection, solar cells

Procedia PDF Downloads 144
30956 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) - to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 60
30955 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
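
A condensed sketch of the described pipeline on synthetic player data; the feature names, model settings, and the 4-feature selection are placeholders rather than the study's configuration:

```python
# Sketch of the described pipeline on synthetic player data: RFE picks the
# salient attributes, a random forest predicts the overall rating, and MSE /
# R-squared score the model. Feature names and settings are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 8))      # pace, shooting, passing, ... (synthetic)
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=4).fit(X_tr, y_tr)
pred = selector.predict(X_te)               # RFE refits the estimator internally
print("kept features:", np.flatnonzero(selector.support_))
print("MSE %.2f, R2 %.2f" % (mean_squared_error(y_te, pred), r2_score(y_te, pred)))
```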

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 26
30954 Efficient Bargaining versus Right to Manage in the Era of Liberalization

Authors: Panagiota Koliousi, Natasha Miaouli

Abstract:

We compare product and labour market liberalization under two trade union bargaining models: the Right-to-Manage (RTM) model and the Efficient Bargaining (EB) model. The vehicle is a dynamic general equilibrium (DGE) model that incorporates two types of agents (capitalists and workers) and imperfectly competitive product and labour markets. The model is solved numerically, employing common parameter values and data from the euro area. A key message is that product market deregulation is beneficial under any labour market structure, while when opting for labour market deregulation one should pay special attention to the structure of the labour market, such as the unions' bargaining system. If the prevailing way of bargaining is the RTM model, then restructuring both markets is beneficial for all agents.

Keywords: market structure, structural reforms, trade unions, unemployment

Procedia PDF Downloads 191