Search results for: approximate computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1333

703 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis between the scattering coefficients and the soil moisture content was performed to select a suitable incidence angle for the retrieval of soil moisture; the 25° incidence angle was found to be the most suitable. Support vector regression was then used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of the soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and the estimated soil moisture content using three statistical performance indices: %Bias, root mean squared error (RMSE), and Nash-Sutcliffe Efficiency (NSE). Their values were found to be 2.9451, 1.0986, and 0.9214, respectively, at HH-polarization, and 3.6186, 0.9373, and 0.9428, respectively, at VV-polarization.
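
A minimal sketch of the regression-and-evaluation loop described above, assuming the scattering coefficients and soil moisture values are available as plain NumPy arrays (the synthetic data, kernel, and hyperparameters are illustrative placeholders, not the authors' dataset or settings):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Illustrative stand-ins: scattering coefficient (dB) vs. volumetric soil moisture
sigma0 = rng.uniform(-25.0, -5.0, size=(200, 1))
moisture = 0.02 * sigma0.ravel() + 0.55 + rng.normal(0.0, 0.02, 200)

X_train, X_test, y_train, y_test = train_test_split(sigma0, moisture, random_state=0)
y_hat = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train).predict(X_test)

# The three performance indices used in the abstract
bias_pct = 100.0 * np.sum(y_hat - y_test) / np.sum(y_test)
rmse = np.sqrt(np.mean((y_hat - y_test) ** 2))
nse = 1.0 - np.sum((y_test - y_hat) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"%Bias={bias_pct:.4f}  RMSE={rmse:.4f}  NSE={nse:.4f}")
```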

Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE

Procedia PDF Downloads 428
702 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor-scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories relevant to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been explored in intelligent construction, and the relevant transdisciplinary theory and practice still show specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Intelligent visual guidance for industrial robots faces a serious camera-calibration issue, and its use in industrial production carries strict accuracy requirements; imprecision in the visual recognition system directly impacts the effectiveness and standard of production, so research on positioning precision in recognition technology must be strengthened. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, the operational processing system assigns them to the same coordinate system based on their locations and postures. Inclination detection on the RGB image, verified against the depth image, is used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
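
A rough illustration of the aspect-ratio step: the sketch below estimates the side lengths of an oriented bounding box for a dense point cloud via principal component analysis (the synthetic plank-shaped cloud is an assumption for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative point cloud of a plank-like component (length >> width >> thickness)
cloud = rng.normal(size=(5000, 3)) * np.array([1.0, 0.2, 0.05])

centered = cloud - cloud.mean(axis=0)
_, _, axes = np.linalg.svd(centered, full_matrices=False)  # principal axes
extents = np.sort(np.ptp(centered @ axes.T, axis=0))       # box side lengths
aspect_ratio = extents[-1] / extents[-2]                   # longest / second-longest
print(f"extents={np.round(extents, 3)}, aspect ratio={aspect_ratio:.2f}")
```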

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 86
701 Graph-Oriented Summary for Optimized Resource Description Framework Graph Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains, such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window phase often expires before the entire session finishes, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information in the data is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
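
A minimal sketch of the centrality ingredient, treating an RDF graph as a directed labeled graph in networkx (the triples are invented placeholders; only the standard closeness centrality that the paper extends is shown):

```python
import networkx as nx

# Toy RDF-like triples: (subject, predicate, object)
triples = [
    ("sensor1", "locatedIn", "zoneA"),
    ("sensor1", "measures", "flowRate"),
    ("sensor2", "locatedIn", "zoneA"),
    ("sensor2", "measures", "pressure"),
    ("zoneA", "partOf", "network"),
]

g = nx.DiGraph()
for s, p, o in triples:
    g.add_edge(s, o, predicate=p)

# Nodes with high closeness centrality are candidate "informative" nodes
# to keep in the summary graph.
for node, score in sorted(nx.closeness_centrality(g).items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.3f}")
```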

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 203
700 Eight-Week Exercise for Women: Impact on Anomalies in Width, Depth, and Circumference Dimensions

Authors: Yalcin Kaya, Fatma Arslan, Ahmet Selim Kaya

Abstract:

This study aimed to determine undesirable hypertrophic anomalies in the female body and to investigate how they are affected by an 8-week exercise program adapted to individual conditions. The research was carried out on 35 women, approximately 30 ± 5.0 years of age, who had no previous regular sports practice and who had enrolled at a gymnasium because of asymmetric body structure and weight gain. Width, depth, and circumference measurements were taken from the participants' bodies, an exercise protocol based on these individual measurements was applied for 8 weeks, and the same measurements were then taken again. Measurements were made using a ruler and paper tape. Differences were analyzed by the paired-sample t-test. According to the findings, the mean width between the distal projections of the ulnae was 44.77 ± 3.65 before exercise and 43.52 ± 3.47 after exercise. Mean bitrochanteric width was 29.3 ± 3.12 before and 26.67 ± 3.27 after exercise. Mean abdominal width was 18.64 ± 4.14 before and 18.01 ± 6.27 after exercise. The distance between the malleoli was 16.98 ± 1.62 before and 16.70 ± 1.64 after exercise. These results were statistically significant (p < 0.05). The mean external abdominal circumference was 93.97 ± 8.91 before exercise and 90.82 ± 8.24 after exercise, which was also statistically significant (p < 0.05). In conclusion, the findings show that inactivity, uncontrolled daily activities, erroneous posture, and malnutrition cause some anomalies in the human body. With consciously standardized and regular exercise, these anomalies were reduced by the eight-week protocol in parallel with the loss of excess weight, and it is proposed that they can be removed with longer training, leading to a healthier and fitter appearance.

Keywords: women, body, circumference-width and depth measurements, hypertrophy, exercise

Procedia PDF Downloads 384
699 Case-Based Reasoning Approach for Process Planning of Internal Thread Cold Extrusion

Authors: D. Zhang, H. Y. Du, G. W. Li, J. Zeng, D. W. Zuo, Y. P. You

Abstract:

For the difficult issues of process selection, case-based reasoning technology is applied to a computer-aided process planning system for the cold form tapping of internal threads, on the basis of similarity in the process. A model is established based on an analysis of process planning. A case representation and a similarity computation method are given, a confidence degree is used to evaluate retrieved cases, and a rule-based reuse strategy is presented. The scheme is illustrated and verified by a practical application, which shows that the design results obtained with the proposed method are effective.
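
A toy sketch of the retrieval step in case-based reasoning, with made-up process attributes and weights (the paper's actual similarity measure and confidence degree are not reproduced; this only shows the generic weighted-similarity pattern):

```python
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    thread_diameter: float    # mm
    pitch: float              # mm
    material_hardness: float  # HB

def similarity(query: Case, case: Case, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted similarity in [0, 1]; each attribute compared on a relative scale."""
    total = 0.0
    for w, attr in zip(weights, ("thread_diameter", "pitch", "material_hardness")):
        q, c = getattr(query, attr), getattr(case, attr)
        total += w * (1.0 - abs(q - c) / max(q, c))
    return total

library = [
    Case("M8 steel", 8.0, 1.25, 180.0),
    Case("M10 steel", 10.0, 1.5, 200.0),
    Case("M8 aluminium", 8.0, 1.25, 95.0),
]
query = Case("new part", 9.0, 1.25, 190.0)
best = max(library, key=lambda c: similarity(query, c))
print(best.name, round(similarity(query, best), 3))
```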

Keywords: case-based reasoning, internal thread, cold extrusion, process planning

Procedia PDF Downloads 510
698 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir

Authors: Ahmad Fahim Nasiry, Shigeo Honma

Abstract:

We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplace equation and Darcy's law are used to study fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in sand containing heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the governing partial differential equations. The two main quantities observed are the water saturation and the pressure distribution in the reservoir; they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit (IADI) method and the iterative alternating direction explicit (IADE) method, respectively, under the finite difference assumption. To better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to calculate the pressure and water saturation equations and to draw the pressure and water distribution contour lines for the reservoir.
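
For orientation, a one-dimensional Buckley-Leverett update in explicit upwind finite differences is sketched below; the fractional-flow curve, grid, and boundary conditions are illustrative assumptions, not the paper's two-dimensional five-spot model:

```python
import numpy as np

nx, nt = 100, 400
dx, dt = 1.0 / nx, 0.5 / nt   # dimensionless grid; dt kept small for stability

def frac_flow(s, mobility_ratio=2.0):
    """Water fractional flow for quadratic relative permeabilities."""
    return s**2 / (s**2 + (1.0 - s) ** 2 / mobility_ratio)

s = np.zeros(nx)   # initial water saturation
s[0] = 1.0         # water injected at the left boundary
for _ in range(nt):
    f = frac_flow(s)
    s[1:] -= dt / dx * (f[1:] - f[:-1])  # upwind update of ds/dt + df/dx = 0
    s[0] = 1.0
print("approximate front position:", np.argmax(s < 0.05) * dx)
```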

Keywords: numerical simulation, immiscible, finite difference, IADI, IADE, waterflooding

Procedia PDF Downloads 331
697 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.

Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning

Procedia PDF Downloads 213
696 Unconventional Calculus Spreadsheet Functions

Authors: Chahid K. Ghaddar

Abstract:

The spreadsheet engine is exploited via a non-conventional mechanism to enable novel worksheet solver functions for computational calculus. The solver functions bypass inherent restrictions on built-in math and user-defined functions by taking variable formulas as a new type of argument while retaining purity and recursion properties. The enabling mechanism permits the integration of numerical algorithms into worksheet functions for solving virtually any computational problem that can be modelled by formulas and variables. Several examples are presented for computing integrals, derivatives, and systems of differential-algebraic equations. Incorporation of the worksheet solver functions into the ubiquitous spreadsheet extends the utility of the latter as a powerful tool for computational mathematics.
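
As a rough Python analogue of what such worksheet solver functions compute, a numerical integral, a derivative, and an initial-value problem can each be solved in a few lines (the specific problems are illustrative):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Integral solver analogue: integral of sin(x) over [0, pi] = 2
integral, _ = quad(np.sin, 0.0, np.pi)

# Derivative solver analogue: central difference of cos at x0 (approx. -sin(1))
x0, h = 1.0, 1e-5
derivative = (np.cos(x0 + h) - np.cos(x0 - h)) / (2.0 * h)

# ODE solver analogue: y' = -2y, y(0) = 1 on [0, 1] (exact: exp(-2))
sol = solve_ivp(lambda t, y: -2.0 * y, (0.0, 1.0), [1.0])
print(integral, derivative, sol.y[0, -1])
```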

Keywords: calculus, differential algebraic equations, solvers, spreadsheet

Procedia PDF Downloads 360
695 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck an entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. In the presence of practically large data sets, matrix-matrix multiplication faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
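
A minimal sketch of the straggler-mitigation idea behind coded computation of W = XY, using a tiny (3, 2) MDS code: X is split into two row blocks, one parity block is added, and the product is recoverable from any two of the three workers (dimensions and code are illustrative; the PSGPD scheme described above is far more general and adds privacy):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
Y = rng.standard_normal((3, 5))

# Encode: two systematic row blocks plus one parity block (a (3, 2) MDS code)
X1, X2 = X[:2], X[2:]
coded = [X1, X2, X1 + X2]

# Each worker multiplies its coded block by Y; suppose worker 1 straggles
results = {i: blk @ Y for i, blk in enumerate(coded) if i != 1}

# Decode from workers 0 and 2: X2 Y = (X1 + X2) Y - X1 Y
W = np.vstack([results[0], results[2] - results[0]])
assert np.allclose(W, X @ Y)  # full product recovered despite the straggler
```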

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
694 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, the computational time required to achieve a result is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (classified by the time of peak ground acceleration).
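
For reference, the modal-combination step that SRSS performs is a one-liner over peak modal responses, and CQC generalizes it with a cross-modal correlation matrix; the values below are invented placeholders:

```python
import numpy as np

# Peak response of one storey in each of three modes (illustrative values)
peaks = np.array([4.2, 1.1, 0.3])

# SRSS: appropriate when modal frequencies are well separated
srss = np.sqrt(np.sum(peaks ** 2))

# CQC with an identity correlation matrix reduces exactly to SRSS
rho = np.eye(3)
cqc = np.sqrt(peaks @ rho @ peaks)
print(f"SRSS = {srss:.3f}, CQC (identity correlation) = {cqc:.3f}")
```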

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 355
693 Design of Cloud Service Brokerage System Intermediating Integrated Services in Multiple Cloud Environment

Authors: Dongjae Kang, Sokho Son, Jinmee Kim

Abstract:

Cloud service brokering is a new service paradigm that provides interoperability and portability of applications across multiple Cloud providers. In this paper, we design a cloud service brokerage system, anyBroker, supporting integrated service provisioning and SLA-based service life cycle management. For the system design, we introduce the system concept and overall architecture, the details of the main components, and use cases of the primary operations in the system. These features ease Cloud service providers' and customers' concerns, and support a new open market for Cloud services that increases cloud service profit and promotes the Cloud service ecosystem in the cloud computing area.

Keywords: cloud service brokerage, multiple Clouds, Integrated service provisioning, SLA, network service

Procedia PDF Downloads 488
692 Comparative Study of Scheduling Algorithms for LTE Networks

Authors: Samia Dardouri, Ridha Bouallegue

Abstract:

Scheduling is the process of dynamically allocating physical resources to User Equipment (UE) based on scheduling algorithms implemented at the LTE base station. Various algorithms have been proposed by network researchers, since the implementation of the scheduling algorithm is left open in the Long Term Evolution (LTE) standard. This paper studies and compares the performance of the PF, MLWDF, and EXP/PF scheduling algorithms. The evaluation considers a single cell with an interference scenario for different flows, such as best effort, video, and VoIP, in pedestrian and vehicular environments, using the LTE-Sim network simulator. The comparative study is conducted in terms of system throughput, fairness index, delay, packet loss ratio (PLR), and total cell spectral efficiency.
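
As a pointer to what these schedulers compute, the proportional fair (PF) metric for each user on each resource block is the ratio of the instantaneous achievable rate to the user's average past throughput; a toy per-block selection is sketched below (all rates are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_rb = 4, 6
inst_rate = rng.uniform(1.0, 10.0, size=(n_users, n_rb))  # achievable rate per RB
avg_thr = np.array([2.0, 5.0, 1.0, 4.0])                  # past average throughput

# PF metric: instantaneous rate / historical average; best user wins each RB.
# MLWDF and EXP/PF additionally weight this metric by head-of-line packet
# delay, which favors delay-sensitive video and VoIP flows.
pf_metric = inst_rate / avg_thr[:, None]
print("resource block -> user:", pf_metric.argmax(axis=0))
```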

Keywords: LTE, multimedia flows, scheduling algorithms, mobile computing

Procedia PDF Downloads 383
691 Development of a New Wearable Device for Automatic Guidance Service

Authors: Dawei Cai

Abstract:

In this paper, we present a new wearable device that provides an automatic guidance service for visitors. By combining position information from NFC and orientation information from a 6-axis acceleration and terrestrial magnetism sensor, the direction the user is facing can be calculated. We developed an algorithm to calculate the device orientation based on the data from the acceleration and terrestrial magnetism sensors. If a visitor wants an explanation of an exhibit in front of him, all he has to do is lift up his mobile device. The identification program automatically identifies the status based on the information from the NFC and MEMS sensors and starts playing the explanation content for him. This service may be especially convenient for elderly people, people with disabilities, and children.
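
A common way to derive heading from such a sensor pair is the tilt-compensated e-compass calculation: estimate roll and pitch from the accelerometer, then rotate the magnetometer reading into the horizontal plane. A hedged sketch follows (axis conventions and the sign of the heading depend on the device; real sensors also need hard/soft-iron calibration):

```python
import math

def heading_deg(ax, ay, az, mx, my, mz):
    """Tilt-compensated compass heading in degrees from accel (g) and mag (uT)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic field vector into the horizontal plane
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Device lying flat with magnetic north along +x: heading ~ 0 degrees
print(heading_deg(0.0, 0.0, 1.0, 30.0, 0.0, -40.0))
```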

Keywords: wearable device, ubiquitous computing, guide system, MEMS sensor, NFC

Procedia PDF Downloads 425
690 A Low-Area Fully-Reconfigurable Hardware Design of Fast Fourier Transform System for 3GPP-LTE Standard

Authors: Xin-Yu Shih, Yue-Qu Liu, Hong-Ru Chou

Abstract:

This paper presents a low-area and fully-reconfigurable Fast Fourier Transform (FFT) hardware design for the 3GPP-LTE communication standard. It fully supports 32 different FFT sizes, up to 2048 FFT points. Besides, a special processing element is developed to make reconfigurable computing possible, and a first-in first-out (FIFO) scheduling scheme design technique is proposed for hardware-friendly FIFO resource arrangement. In a synthesized chip realization in TSMC 40 nm CMOS technology, the hardware circuit occupies a core area of only 0.2325 mm² and dissipates 233.5 mW at a maximal operating frequency of 250 MHz.

Keywords: reconfigurable, fast Fourier transform (FFT), single-path delay feedback (SDF), 3GPP-LTE

Procedia PDF Downloads 278
689 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields for biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest other cluster, is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited to assessing cluster quality in personal authentication methods. In this study, the silhouette score was used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there was a difference between electrodes for personal authentication (p < 0.01); and (3) there is no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
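
A minimal sketch of the clustering-quality step with scikit-learn, using random stand-ins for the entropy feature vectors (the SE/FE/AE/PE extraction itself is not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in feature matrix: 22 subjects x 20 epochs each, 4 entropy features
features = np.vstack([rng.normal(loc=i, scale=0.5, size=(20, 4)) for i in range(22)])

km = KMeans(n_clusters=22, n_init=10, random_state=0).fit(features)
score = silhouette_score(features, km.labels_)  # in [-1, 1]; higher is better
print(f"mean silhouette score: {score:.3f}")
```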

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 285
688 Split Monotone Inclusion and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru

Abstract:

The convergence analysis of split monotone inclusion problems and fixed point problems of certain nonlinear mappings is investigated in the setting of real Hilbert spaces. An inertial extrapolation term in the spirit of Polyak is incorporated to speed up the rate of convergence. Under standard assumptions, strong convergence of the proposed algorithm is established without computing the resolvent operator or involving the Yosida approximation method. The stepsize involved in the algorithm does not depend on the spectral radius of the linear operator. Furthermore, applications of the proposed algorithm to solving some related optimization problems are also considered. Our result complements and extends numerous results in the literature.
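
For context, a Polyak-type inertial step typically takes the generic form below (a template only, not the paper's exact iteration):

\begin{align*}
w_n &= x_n + \theta_n (x_n - x_{n-1}), \qquad \theta_n \in [0, 1), \\
x_{n+1} &= T\big(w_n - \lambda_n A w_n\big), \qquad \lambda_n > 0,
\end{align*}

where the term $\theta_n (x_n - x_{n-1})$ is the inertial extrapolation that accelerates convergence, $A$ is the monotone operator, and $T$ is the fixed-point mapping; the abstract's point is that $\lambda_n$ can be chosen without knowing the spectral radius of the linear operator and without evaluating the resolvent.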

Keywords: fixed point, Hilbert space, monotone mapping, resolvent operators

Procedia PDF Downloads 52
687 CMPD: Cancer Mutant Proteome Database

Authors: Po-Jung Huang, Chi-Ching Lee, Bertrand Chin-Ming Tan, Yuan-Ming Yeh, Julie Lichieh Chu, Tin-Wen Chen, Cheng-Yang Lee, Ruei-Chi Gan, Hsuan Liu, Petrus Tang

Abstract:

Whole-exome sequencing, which focuses on the protein-coding regions of disease/cancer-associated genes based on a priori knowledge, is the most cost-effective method to study the association between genetic alterations and disease. Recent advances in high-throughput sequencing technologies and proteomic techniques have provided an opportunity to integrate genomics and proteomics, allowing the ready detection of mutated peptides corresponding to mutated genes. Since sequence database search is the most widely used method for protein identification in mass spectrometry (MS)-based proteomics, a mutant proteome database is required to better approximate the real protein pool and improve the identification of disease-associated mutated proteins. Large-scale whole exome/genome sequencing studies have been launched by the National Cancer Institute (NCI), the Broad Institute, and The Cancer Genome Atlas (TCGA), which provide not only comprehensive reports on the analysis of coding variants in diverse samples and cell lines but also an invaluable resource for the extensive research community. However, no existing database collects the mutant protein sequences related to the variants identified in these studies. CMPD is designed to address this issue, serving as a bridge between genomic data and proteomic studies and focusing on protein sequence-altering variations originating from both germline and cancer-associated somatic variations.
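
As a toy illustration of what generating a mutant protein entry involves, the sketch below applies a protein-level missense variant in HGVS-like notation to a reference sequence (the helper and its notation handling are simplified for demonstration):

```python
import re

def apply_missense(seq: str, variant: str) -> str:
    """Apply a protein-level missense variant like 'G12D' (1-based position)."""
    ref, pos, alt = re.match(r"([A-Z])(\d+)([A-Z])$", variant).groups()
    pos = int(pos)
    assert seq[pos - 1] == ref, f"reference mismatch at {pos}"
    return seq[: pos - 1] + alt + seq[pos:]

reference = "MTEYKLVVVGAGGVGKSALTIQLIQNHFVDE"  # N-terminal fragment of KRAS
print(apply_missense(reference, "G12D"))       # the classic KRAS G12D mutant
```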

Keywords: TCGA, cancer, mutant, proteome

Procedia PDF Downloads 593
686 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust

Authors: Hassan Gholibeigian, Kazem Gholibeigian

Abstract:

In this paper, our goal is the determination of loading versus time in the crust. To this end, we present a computational procedure for constructing a cumulative strain energy time profile which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault, and which, we believe, is more accurate than many of the methods presently in use. After a short review of the research presently going on in the area of earthquake analysis and prediction, earthquake mechanisms are discussed in both the jerk and the earthquake-sequence direction. Our computational procedure is then presented, using the differential equations of equilibrium that govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We employ the constitutive model developed by von Mises for the stress-strain relationship, modified with an additional term to account for thermal effects. For the calculation of the strain energy, the idea of the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis, in brief, states that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. The Denali fault is briefly discussed as a case study, and the cumulative strain energy is graphically represented versus time. At the end, the final results are verified against some hypothetical earthquake data.

Keywords: pulsating mantle hypothesis, inner core’s dislocation, outer core’s bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour

Procedia PDF Downloads 276
685 Microwave Synthesis and Molecular Docking Studies of Azetidinone Analogous Bearing Diphenyl Ether Nucleus as a Potent Antimycobacterial and Antiprotozoal Agent

Authors: Vatsal M. Patel, Navin B. Patel

Abstract:

The present study deals with the development of a series of compounds bearing a diphenyl ether nucleus, using a structure-based drug design concept. A new series of diphenyl ether based azetidinones, namely N-(3-chloro-2-oxo-4-(3-phenoxyphenyl)azetidin-1-yl)-2-(substituted amino)acetamides (2a-j), has been synthesized by the condensation of m-phenoxybenzaldehyde with 2-(substituted-phenylamino)acetohydrazide, followed by the cyclization of the resulting Schiff bases (1a-j), by a conventional method as well as by a microwave heating approach as part of an environmentally benign synthetic protocol. All the synthesized compounds were characterized by spectral analysis and were screened for in vitro antimicrobial, antitubercular, and antiprotozoal activity. Compound 2f was found to be the most active against M. tuberculosis (MIC 6.25 µM) in the primary screening; the same derivative also showed potency against L. mexicana and T. cruzi, with MIC values of 2.09 and 6.69 µM, comparable to the reference drugs miltefosine and nifurtimox. To provide understandable evidence for predicting the binding mode and approximate binding energy of a compound to a target in terms of ligand-protein interactions, all synthesized compounds were docked against an enoyl-[acyl-carrier-protein] reductase of M. tuberculosis (PDB ID: 4u0j). The computational studies revealed that the azetidinone derivatives have a high affinity for the active site of the enzyme, which provides a strong platform for new structure-based design efforts. Lipinski's parameters showed good drug-like properties, and the compounds can be developed as oral drug candidates.

Keywords: antimycobacterial, antiprotozoal, azetidinone, diphenylether, docking, microwave

Procedia PDF Downloads 161
684 Analyzing the Impact of DCF and PCF on WLAN Network Standards 802.11a, 802.11b, and 802.11g

Authors: Amandeep Singh Dhaliwal

Abstract:

Networking solutions, particularly wireless local area networks (WLANs), have revolutionized technological advancement, and WLANs have gained a lot of popularity as they provide location-independent network access between computing devices. Among the access methods used in wireless networks, DCF and PCF are the fundamental ones. This paper emphasizes the impact of the DCF and PCF access mechanisms on the performance of the IEEE 802.11a, 802.11b, and 802.11g standards. Performance is evaluated across these three standards using both access mechanisms, on the basis of various parameters, viz. throughput, delay, and load. The analysis revealed superior throughput performance with low delays for the 802.11g standard as compared to the 802.11a/b standards, using both the DCF and PCF access methods.

Keywords: DCF, IEEE, PCF, WLAN

Procedia PDF Downloads 425
683 Composition and Distribution of Seabed Marine Litter Along Algerian Coast (Western Mediterranean)

Authors: Ahmed Inal, Samir Rouidi, Samir Bachouche

Abstract:

The present study focuses on the distribution and composition of seafloor marine litter associated with trawlable fishing areas along the Algerian coast. The sampling was done with a GOC73 bottom trawl during four (04) demersal resource assessment cruises, in 2016, 2019, 2021, and 2022, carried out on board the R/V BELKACEM GRINE. A total of 254 fishing hauls were sampled for the assessment of marine litter. Hauls were performed between 22 and 600 m of depth, with durations between 30 and 60 min, and all sampling was conducted during daylight. After each haul, marine litter was sorted and split from the catch. Then, following the MEDITS protocol, the litter was sorted into six categories (plastic, rubber, metal, wood, glass, and natural fiber), and all items were counted and weighed separately to the nearest 0.5 g. The results show that the maximum marine litter densities on the seafloor of the trawlable fishing areas along the Algerian coast were 1996 items/km² in 2016, 5164 items/km² in 2019, 2173 items/km² in 2021, and 7319 items/km² in 2022. Plastic was the most abundant litter, representing 46% of marine litter in 2016, 67% in 2019, 69% in 2021, and 74% in 2022. The weight of the marine litter per haul varied between 0.00 and 103 kg in 2016, between 0.04 and 81 kg in 2019, between 0.00 and 68 kg in 2021, and between 0.00 and 318 kg in 2022. The maximum proportion of marine litter in the total catch approximated 66% in 2016, 90% in 2019, 65% in 2021, and 91% in 2022, while the average loss in catch was estimated at 7.4% in 2016, 8.4% in 2019, 5.7% in 2021, and 6.4% in 2022. The bathymetric and geographical variability had a significant impact on both the density and the weight of the marine litter. A marine litter monitoring program is necessary to offer more solution proposals.

Keywords: composition, distribution, seabed, marine litter, algerian coast

Procedia PDF Downloads 68
682 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities for improving memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
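
A rough sketch of the sampling idea: estimate an aggregate from a uniform sample and annotate the result as accurate or not from its standard error (the 5% threshold is an invented placeholder, not PowerDrill's heuristic):

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.lognormal(mean=3.0, sigma=1.5, size=50_000)  # scaled-down stand-in

def approx_sum(x, sample_rate=0.01, max_rel_err=0.05):
    """Estimate sum(x) from a uniform sample; flag the estimate as accurate."""
    n = len(x)
    sample = rng.choice(x, size=max(2, int(n * sample_rate)), replace=False)
    estimate = sample.mean() * n
    # Relative standard error of the scaled-up sample mean
    rel_err = sample.std(ddof=1) / np.sqrt(len(sample)) * n / estimate
    return estimate, rel_err < max_rel_err

est, accurate = approx_sum(values)
print(f"estimate={est:.3e}, exact={values.sum():.3e}, accurate={accurate}")
```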

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 259
681 A Common Automated Programming Platform for Knowledge Based Software Engineering

Authors: Ivan Stanev, Maria Koleva

Abstract:

A common platform for automated programming (CPAP) is defined in detail. Two versions of CPAP are described: a Cloud-based version (including the set of components for classic programming and the set of components for combined programming) and a KBASE-based version (including the set of components for automated programming and the set of components for ontology programming). Four KBASE products (a module for the automated programming of robots, an intelligent product manual, an intelligent document display, and an intelligent form generator) are analyzed, and CPAP's contributions to automated programming are presented.

Keywords: automated programming, cloud computing, knowledge based software engineering, service oriented architecture

Procedia PDF Downloads 344
680 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna

Authors: Gurkirandeep Kaur, Rana Pratap Yadav

Abstract:

This experimental study aims to examine the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna using an automated measuring system. In this study, a 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by the surface wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 W and gas pressures between 0.01 and 0.05 mb. The study reveals that a single-structured plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube, achieved by altering plasma properties such as the working pressure, operating frequency, input RF power, and discharge tube dimensions, i.e., length, radius, and thickness. It is also reported that the plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing the working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automated radiation measuring system has been fabricated and is presented in detail. The developed system combines a controller, DC servo motors, a vector network analyzer (VNA), and a computing device to evaluate the radiation intensity, directivity, gain, and efficiency of the plasma antenna. The controller drives multiple motors that move aluminum shafts in both the elevation and azimuthal planes, while the radiation from the plasma monopole antenna is measured by the VNA, which is in turn wired to the computing device to display the radiation in polar plots. The radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna. The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provide a wide range of radiation patterns in which desired radiation parameters, such as beam-width, direction of radiation, radiation intensity, and antenna efficiency, can be achieved in a single monopole. Due to this wide selectivity in the radiation pattern, the antenna can meet the demand for the wider bandwidths needed for high data speeds in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the far-field radiation pattern of any kind of antenna system.

Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave

Procedia PDF Downloads 119
679 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and the performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results show that the Hybrid Model can improve the performance of traditional scorecards by introducing the non-linear relationships between explanatory and target variables from Machine Learning models into them. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
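
A compact sketch of the Weight of Evidence quantity the abstract builds on, computed per bin from good/bad counts on synthetic data with a deliberately non-linear risk pattern (a linear scorecard would miss the U-shape that the WoE profile reveals):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5000)                                 # one explanatory variable
p_bad = 1.0 / (1.0 + np.exp(-(0.8 * x + 0.3 * x**2 - 1.0)))
y = rng.binomial(1, p_bad)                                # 1 = bad, 0 = good

edges = np.quantile(x, np.linspace(0, 1, 11))             # 10 equal-population bins
idx = np.clip(np.digitize(x, edges[1:-1]), 0, 9)

# Classic WoE per bin: log( %good_in_bin / %bad_in_bin )
good, bad = (y == 0), (y == 1)
woe = np.array([
    np.log(((good & (idx == b)).sum() / good.sum())
           / ((bad & (idx == b)).sum() / bad.sum()))
    for b in range(10)
])
print(np.round(woe, 3))  # non-monotonic WoE profile across the bins
```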

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 134
678 Molecular Dynamics Simulation on Nanoelectromechanical Graphene Nanoflake Shuttle Device

Authors: Eunae Lee, Oh-Kuen Kwon, Ki-Sub Kim, Jeong Won Kang

Abstract:

We investigated, via molecular dynamics simulations, the dynamic properties of a graphene-nanoribbon (GNR) memory encapsulating a graphene-nanoflake (GNF) shuttle, with the potential to be applicable as a non-volatile random access memory. This work explicitly demonstrates that a GNR encapsulating a GNF shuttle can be applied to non-volatile memory. The potential well originates from the increase of the attractive van der Waals (vdW) energy between the GNRs as the GNF approaches their edges, so the bistable positions are located near the edges of the GNRs. Such a nanoelectromechanical non-volatile memory based on graphene is also applicable to the development of switches, sensors, and quantum computing.

Keywords: graphene nanoribbon, graphene nanoflake, shuttle memory, molecular dynamics

Procedia PDF Downloads 461
677 Alexa (Machine Learning) in Artificial Intelligence

Authors: Loulwah Bokhari, Jori Nazer, Hala Sultan

Abstract:

Nowadays, artificial intelligence (AI) is used as a foundation for many activities in modern computing applications at home, in vehicles, and in businesses, and many modern machines are built to carry out a specific activity or purpose. This is where the Amazon Alexa application comes in, as it is used as a virtual assistant. The purpose of this paper is to explore the use of Amazon Alexa among people and how it has improved and simplified daily tasks for many of them. We asked our participants several questions regarding Amazon Alexa: whether they had recently used or heard of it, which of the different tasks it provides they used, and whether it successfully satisfied their needs. Overall, we found that participants who had recently used Alexa found it helpful in their daily tasks.

Keywords: artificial intelligence, Echo system, machine learning, feature for feature match

Procedia PDF Downloads 121
676 Adobe Attenuation Coefficient Determination and Its Comparison with Other Shielding Materials for Energies Found in Common X-Ray Procedures

Authors: Camarena Rodriguez C. S., Portocarrero Bonifaz A., Palma Esparza R., Romero Carlos N. A.

Abstract:

Adobe is a construction material that fulfills the same function as a conventional brick. Widely used since ancient times, it is present in an appreciable percentage of buildings in Latin America. Adobe is a mixture of clay and sand. Interest in the study of the properties of this material arises from its presence in the infrastructure of hospital radiological services located in places with low economic resources, where it serves for the attenuation of radiation. Some materials, such as lead and concrete, are the most used for shielding and are widely studied in the literature. The present study determines the mass attenuation coefficient of Adobe and estimates the minimum thicknesses required for the primary and secondary barriers for the shielding of radiological facilities where conventional and dental X-ray procedures are performed. For the experimental procedure, an X-ray source emitted direct radiation towards Adobe barriers of different thicknesses, and a detector was placed on the other side. For this purpose, a UNFORS Xi solid-state detector was used, which collected information on the difference in radiation intensity. The exposure started at an initial tube voltage of 45 kV, which was then varied in increments of 5 kV up to a maximum of 125 kV. The X-ray tube was positioned at a distance of 0.5 m from the surface of the Adobe bricks, and the radiation beam was collimated to an area of 0.15 m x 0.15 m. Finally, mathematical methods were applied to determine the mass attenuation coefficient for the different energy ranges. In conclusion, the mass attenuation coefficient of Adobe was determined, and the approximate thicknesses of the most common Adobe barriers in hospital buildings were calculated for later application in radiological protection.
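
The quantity being measured follows the Beer-Lambert law, I = I0 exp(-mu x); a linear attenuation coefficient can be fitted from intensity-versus-thickness readings as below (the readings are invented, not the study's data; dividing mu by the material density gives the mass attenuation coefficient):

```python
import numpy as np

# Illustrative transmitted intensity behind increasing adobe thickness
thickness_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
intensity = np.array([100.0, 44.0, 20.0, 8.7, 3.9])  # arbitrary units

# Beer-Lambert: ln(I) is linear in thickness with slope -mu
slope, ln_i0 = np.polyfit(thickness_cm, np.log(intensity), 1)
mu = -slope
print(f"linear attenuation coefficient: {mu:.3f} 1/cm")
```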

Keywords: Adobe, attenuation coefficient, radiological protection, shielding, x-rays

Procedia PDF Downloads 157
675 A Contribution to Human Activities Recognition Using Expert System Techniques

Authors: Malika Yaici, Soraya Aloui, Sara Semchaoui

Abstract:

This paper deals with human activity recognition from sensor data, an active research area in which the main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed. Recognition is performed using objects, object states, and gestures, taking into account the context: the location of the objects and of the person performing the activity, the duration of the elementary actions, and the duration of the activity. The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. Simulation results show the robustness of our system and its speed of decision.
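
A toy sketch of the rule-based idea: elementary observations (object, state, location) fire rules for simple activities, which in turn compose into a complex activity (all rules and facts are invented examples):

```python
# Facts observed from hypothetical sensors: (object, state, location)
facts = {("kettle", "on", "kitchen"), ("cup", "held", "kitchen")}

# Each rule: a set of required facts -> a recognized simple activity
simple_rules = [
    ({("kettle", "on", "kitchen")}, "boil_water"),
    ({("cup", "held", "kitchen")}, "prepare_cup"),
]
# Complex activities decompose into simple ones, as in the proposed system
complex_rules = [({"boil_water", "prepare_cup"}, "make_tea")]

recognized = {act for cond, act in simple_rules if cond <= facts}
recognized |= {act for cond, act in complex_rules if cond <= recognized}
print(recognized)  # {'boil_water', 'prepare_cup', 'make_tea'}
```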

Keywords: human activity recognition, ubiquitous computing, context-awareness, expert system

Procedia PDF Downloads 118
674 Views from Shores Past: Palaeogeographic Reconstructions as an Aid for Interpreting the Movement of Early Modern Humans on and between the Islands of Wallacea

Authors: S. Kealy, J. Louys, S. O’Connor

Abstract:

The island archipelago that stretches between the continents of Sunda (Southeast Asia) and Sahul (Australia - New Guinea), comprising much of modern-day Indonesia as well as Timor-Leste, represents the biogeographic region of Wallacea. The islands of Wallacea are significant archaeologically as they have never been connected to the mainlands of either Sunda or Sahul; thus the colonization of these islands by early modern humans, and subsequently of Australia and New Guinea, would have necessitated some form of water crossing. Accurate palaeogeographic reconstructions of the Wallacean Archipelago for this period are important not only for modeling likely routes of colonization but also for reconstructing the likely landscapes, and hence resources, available to the first colonists. Here we present five digital reconstructions of the coastal outlines of Wallacea and Sahul (Australia and New Guinea) for the periods 65, 60, 55, 50, and 45,000 years ago, using the latest bathymetric chart and a sea-level model adjusted to account for the average uplift rate known from Wallacea. These data were also used to reconstruct the areal extent as well as the topography of each island for each time period. The reconstructions allowed us to determine the distance from the coast and the relative elevation of the earliest archaeological sites on each island where such records exist. This enabled us to approximate how much effort the exploitation of coastal resources would have taken for early colonists, and how important such resources were. The reconstructions also allowed us to estimate the visibility of each island in the archipelago and to model how intervisible the islands were during the period of likely human colonization. We demonstrate how these models provide archaeologists with an important basis for visualizing this ancient landscape and interpreting how it was originally viewed, traversed, and exploited by its earliest modern human inhabitants.
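
A back-of-the-envelope version of the intervisibility test: two islands are intervisible when the sum of their horizon distances (roughly 3.57 * sqrt(h) km for a viewpoint h metres above sea level, ignoring refraction) reaches their separation; the heights and distance below are invented:

```python
import math

def horizon_km(height_m: float) -> float:
    """Distance to the sea-level horizon from a given elevation, no refraction."""
    return 3.57 * math.sqrt(height_m)

def intervisible(peak_a_m: float, peak_b_m: float, separation_km: float) -> bool:
    return horizon_km(peak_a_m) + horizon_km(peak_b_m) >= separation_km

# Illustrative: a 900 m peak and a 1200 m peak 200 km apart
print(intervisible(900.0, 1200.0, 200.0))  # True: 107.1 + 123.7 > 200
```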

Keywords: Wallacea, palaeogeographic reconstructions, islands, intervisibility

Procedia PDF Downloads 211