Search results for: approximate computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1302

702 Analysis of Histogram Asymmetry for Waste Recognition

Authors: Janusz Bobulski, Kamila Pasternak

Abstract:

Despite many years of effort and research, the problem of waste management remains current. So far, no fully effective waste management system has been developed. Many programs and projects improve the statistics on the percentage of waste recycled every year. In these efforts, it is worth using modern computer vision techniques supported by artificial intelligence. In this article, we present a method of identifying plastic waste based on the asymmetry analysis of the histogram of the image containing the waste. The method is simple but effective (94% accuracy), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such devices can be used both at home and in waste sorting plants.
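As a rough illustration of the core idea (a minimal sketch, not the authors' exact pipeline; the grayscale channel and the 0.5 decision threshold are assumptions):

```python
# Hedged sketch: plastic waste is assumed to yield a skewed gray-level
# histogram, so the histogram's asymmetry (third standardized moment)
# can drive a cheap classifier on low-power hardware.
import cv2
import numpy as np

def histogram_skewness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()                      # normalized histogram
    levels = np.arange(256)
    mean = np.sum(levels * p)
    std = np.sqrt(np.sum((levels - mean) ** 2 * p))
    return np.sum(((levels - mean) / std) ** 3 * p)

skew = histogram_skewness("waste.jpg")         # hypothetical input image
print("plastic" if abs(skew) > 0.5 else "other")  # assumed threshold
```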

Keywords: waste management, environmental protection, image processing, computer vision

Procedia PDF Downloads 104
701 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the subsequent digital footprints that exist, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. It is also increasingly essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. For example, in a child abduction, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, delivering powerful and automated forensic analysis.

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 179
700 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image

Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche

Abstract:

Image denoising has long been a central problem in medical imaging. The main challenge in image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise to outperform all traditional approaches. However, these techniques are limited by the need for large training sets and high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform) using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a lifting-based wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain their real and imaginary parts. This structure permits the transform to achieve approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
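A minimal sketch of what such a hybrid shrinkage rule might look like (illustrative only; the universal threshold, the noise-variance handling, and the combination below are assumptions, and plain coefficients stand in for the LDTCWT subbands):

```python
# Hedged sketch: soft-threshold the coefficients, then apply a
# Wiener-style gain computed from the surviving signal energy.
import numpy as np

def hybrid_threshold(coeffs, sigma_noise):
    thr = sigma_noise * np.sqrt(2 * np.log(coeffs.size))       # universal threshold
    soft = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
    signal_var = np.maximum(soft ** 2 - sigma_noise ** 2, 0.0)
    gain = signal_var / (signal_var + sigma_noise ** 2)        # Wiener shrinkage
    return gain * soft

noisy = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.random.randn(1024)
print(hybrid_threshold(noisy, sigma_noise=0.1)[:5])
```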

Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, wiener filter

Procedia PDF Downloads 150
699 Semirings of Graphs: An Approach Towards the Algebra of Graphs

Authors: Gete Umbrey, Saifur Rahman

Abstract:

Graphs are among the most capable structures in computing, and their abstract structures have been applied in specific computations and algorithms such as phase encoding controllers, processor microcontrollers, and the synthesis of CMOS switching networks. Motivated by these works, we develop an independent approach to study semiring structures and various properties by defining binary operations which, in fact, seem analogous to existing definitions in some sense but with a different approach. This work emphasizes specifically the construction of semigroup and semiring structures on the set of undirected graphs, and their properties are investigated therein. It is expected that the investigation done here may have some interesting applications in theoretical computer science, networking, and decision making, and also in the joining of two network systems.
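One plausible toy formalization of the two operations (the paper's exact definitions may differ):

```python
# Hedged sketch: a graph as (vertices, edges); union as "addition" and
# join (union plus all cross edges) as "multiplication" -- the kind of
# binary operations from which semigroup/semiring structures are built.
def union(g, h):
    (v1, e1), (v2, e2) = g, h
    return (v1 | v2, e1 | e2)

def join(g, h):
    (v1, e1), (v2, e2) = g, h
    cross = {frozenset((a, b)) for a in v1 for b in v2 if a != b}
    return (v1 | v2, e1 | e2 | cross)

g = ({1, 2}, {frozenset((1, 2))})
h = ({3}, set())
print(join(g, h))   # the union plus the cross edges {1,3} and {2,3}
```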

Keywords: graphs, join and union of graphs, semiring, weighted graphs

Procedia PDF Downloads 132
698 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premise or in the cloud, users can leverage the power of R at scale without having to move their data around.

Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R

Procedia PDF Downloads 363
697 Overcoming 4-to-1 Decryption Failure of the Rabin Cryptosystem

Authors: Muhammad Rezal Kamel Ariffin, Muhammad Asyraf Asbullah

Abstract:

The square root modulo problem is a known primitive in designing asymmetric cryptosystems. It was first attempted by Rabin. The decryption failure of the Rabin cryptosystem caused by its 4-to-1 decryption output is overcome efficiently in this work. The proposed scheme to overcome the decryption failure issue (known as the AAβ-cryptosystem) is constructed using a simple mathematical structure; it has low computational requirements and would enable communication devices with low computing power to deploy secure communication procedures efficiently.
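The 4-to-1 ambiguity itself is easy to reproduce with textbook Rabin decryption (a toy sketch with small primes; real deployments use large primes p, q ≡ 3 (mod 4)):

```python
# Hedged sketch of classic Rabin decryption: every ciphertext has four
# square roots mod n = p*q, recovered per prime and glued with the CRT.
# Requires Python 3.8+ for pow(x, -1, m).

def rabin_roots(c, p, q):
    n = p * q
    mp = pow(c, (p + 1) // 4, p)        # sqrt mod p (valid since p = 3 mod 4)
    mq = pow(c, (q + 1) // 4, q)        # sqrt mod q
    yp, yq = pow(p, -1, q), pow(q, -1, p)
    r1 = (yq * q * mp + yp * p * mq) % n
    r3 = (yq * q * mp - yp * p * mq) % n
    return {r1, n - r1, r3, n - r3}

p, q = 7, 11                 # toy primes, both congruent to 3 mod 4
m = 20                       # plaintext
c = pow(m, 2, p * q)         # encryption: c = m^2 mod n
print(rabin_roots(c, p, q))  # {13, 20, 57, 64} -- only one is the message
```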

Keywords: Rabin cryptosystem, 4-to-1 decryption failure, square root modulo problem, integer factorization problem

Procedia PDF Downloads 451
696 Fixed Point Iteration of a Damped and Unforced Duffing's Equation

Authors: Paschal A. Ochang, Emmanuel C. Oji

Abstract:

The Duffing equation is a second-order system that is very important because such systems are fundamental to the behaviour of higher-order systems and have applications in almost all fields of science and engineering. In biology, it is useful in modeling plant stem dependence and natural frequency and in Brain Crash Analysis (BCA); in engineering, it is useful in the study of damping in indoor construction and traffic lights; and to the meteorologist it is useful in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. We therefore aim to find out whether it is possible to obtain an analytical fixed point of the non-linear ordinary differential equation using a fixed point analytical method. We started by exposing the scope of the Duffing equation and other related works on it. With a major focus on the fixed point and fixed point iterative schemes, we tried different iterative schemes on the Duffing equation. We were able to establish that fixed points can be identified for the damped Duffing equation but not for the undamped one, because the cubic nonlinearity term is the determining factor in the Duffing equation. We finally arrived at results identifying the stability of an equation that is damped, forced, and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first- and second-order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different versions of the (damped) Duffing equation we find fixed points; therefore, the order of computations and the running time of applied software in all fields using the Duffing equation will be reduced.
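A quick numerical illustration of the damping observation (a hedged sketch, not the authors' iterative scheme; the parameter values are assumptions):

```python
# Hedged sketch: march x'' + d*x' + a*x + b*x^3 = 0 as the first-order
# system x' = v, v' = -d*v - a*x - b*x^3 with semi-implicit Euler; with
# damping the trajectory settles onto a fixed point, without it it orbits.

def settle(d, a=-1.0, b=1.0, x=0.5, v=0.0, dt=1e-3, steps=200_000):
    for _ in range(steps):
        v += dt * (-d * v - a * x - b * x ** 3)
        x += dt * v
    return x, v

print(settle(d=0.3))  # damped: approaches the fixed point (1, 0) for a=-1, b=1
print(settle(d=0.0))  # undamped: (x, v) never settles
```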

Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis

Procedia PDF Downloads 273
695 Using Groundwater Modeling System to Create a 3-D Groundwater Flow and Solute Transport Model for a Semiarid Region: A Case Study of the Nadhour Saouaf Sisseb El Alem Aquifer, Central Tunisia

Authors: Emna Bahri Hammami, Zammouri Mounira, Tarhouni Jamila

Abstract:

The Nadhour Saouaf Sisseb El Alem (NSSA) system comprises some of the most intensively exploited aquifers in central Tunisia. Since the 1970s, the growth in economic productivity linked to intensive agriculture in this semiarid region has been sustained by increasing pumping rates of the system’s groundwater. Exploitation of these aquifers has increased rapidly, ultimately causing their depletion. With the aim of better understanding the behavior of the aquifer system and predicting its evolution, the paper presents a finite difference model of groundwater flow and solute transport. The model is based on the Groundwater Modeling System (GMS) and was calibrated using data from 1970 to 2010. Groundwater levels observed in 1970 were used for the steady-state calibration; groundwater levels observed from 1971 to 2010 served to calibrate the transient state. The impact of pumping discharge on the evolution of groundwater levels was studied through three hypothetical pumping scenarios. The first two scenarios replicated the approximate drawdown in the aquifer heads (about 17 m in scenario 1 and 23 m in scenario 2 in the center of the NSSA) following increases in pumping rates of 30% and 50% over their current values, respectively. In the third scenario, pumping was stopped, which could increase groundwater reserves by about 7 Mm³/year. NSSA groundwater reserves could be improved considerably if pumping rules were taken seriously.

Keywords: pumping, depletion, groundwater modeling system GMS, Nadhour Saouaf

Procedia PDF Downloads 209
694 Geophysical Exploration of Aquifer Zones by (Ves) Method at Ayma-Kharagpur, District Paschim Midnapore, West Bengal

Authors: Mayank Sharma

Abstract:

Groundwater has been a matter of great concern in recent years due to the depletion of the water table, which has resulted from the over-exploitation of groundwater resources. Sub-surface exploration is a proven way to identify the groundwater potential of an area. Thus, in order to meet the water needs for irrigation in the study area, a tube well had to be installed, and a geophysical investigation was carried out to find the most suitable drilling point for sinking a tube well that encounters an aquifer. An electrical resistivity survey was used to delineate the aquifer zones of the area, and the Vertical Electrical Sounding (VES) method was employed to determine the subsurface geology. Seven vertical electrical soundings using the Schlumberger electrode array, with a maximum AB electrode separation of 700 m, were carried out at selected points in Ayma, Kharagpur-1 block of Paschim Midnapore district, West Bengal. The VES was done using an IGIS DDR3 resistivity meter down to an approximate depth of 160-180 m. The data were interpreted, processed, and analyzed. Based on these interpretations using the direct method, the geology of the area at the sounding points was established: two deeper clay-sand sections exist in the area at depths of 50-70 m (resistivity range 40-60 ohm-m) and 70-160 m (resistivity range 25-35 ohm-m). These aquifers will provide a high yield of water, sufficient for the desired irrigation in the study area.

Keywords: VES method, Schlumberger method, electrical resistivity survey, geophysical exploration

Procedia PDF Downloads 183
693 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant K. Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was done between the scattering coefficients and the soil moisture content to select the most suitable incidence angle for retrieval; the 25° incidence angle was found most suitable. Support vector regression was then used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE), and Nash-Sutcliffe Efficiency (NSE). At HH-polarization, the values of %Bias, RMSE, and NSE were 2.9451, 1.0986, and 0.9214, respectively; at VV-polarization, they were 3.6186, 0.9373, and 0.9428, respectively.
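A hedged sketch of the regression step on synthetic data (the linear trend, noise level, and SVR hyperparameters below are assumptions, not the authors' measurements):

```python
# Hedged sketch: fit SVR mapping scattering coefficient (dB) at one
# incidence angle to soil moisture, then score with RMSE and NSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
sigma0 = rng.uniform(-25.0, -5.0, 200)                       # scattering coefficient, dB
moisture = 0.02 * sigma0 + 0.6 + rng.normal(0.0, 0.02, 200)  # assumed trend + noise

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(sigma0.reshape(-1, 1), moisture)
pred = model.predict(sigma0.reshape(-1, 1))

rmse = np.sqrt(mean_squared_error(moisture, pred))
nse = 1.0 - np.sum((moisture - pred) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(f"RMSE = {rmse:.4f}, NSE = {nse:.4f}")
```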

Keywords: bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE

Procedia PDF Downloads 413
692 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window phase often expires before finishing the entire session, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of RDF data streams, as the most relevant and pertinent information in the data is often not used in due time and is almost impossible to exploit for further analysis. It would be better to keep the most informative part of the data within streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs through three main approaches: (1) an approach for user queries (SPARQL) that extracts their needs and groups them into a more global query, (2) an extension of the closeness centrality measure from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
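A hedged sketch of the centrality-driven part of the idea (plain closeness centrality on a toy graph stands in for the paper's extended measure; the median cutoff is an assumption):

```python
# Hedged sketch: score nodes of an RDF-like graph with closeness
# centrality and keep only triples between sufficiently central nodes.
import networkx as nx

triples = [("alice", "knows", "bob"), ("bob", "knows", "carol"),
           ("carol", "worksAt", "acme"), ("alice", "worksAt", "acme")]
g = nx.Graph()
g.add_edges_from((s, o, {"predicate": p}) for s, p, o in triples)

scores = nx.closeness_centrality(g)              # the paper extends this measure
cutoff = sorted(scores.values())[len(scores) // 2]
keep = {node for node, c in scores.items() if c >= cutoff}
summary = [(s, p, o) for s, p, o in triples if s in keep and o in keep]
print(summary)
```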

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 182
691 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
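A hedged outline of the GEE classification step (the asset ID for the training labels, the coordinates, and the band list are illustrative assumptions, not the author's actual script):

```python
# Hedged sketch: Sentinel-2 median composite over the study window,
# sampled at labeled points and classified with a random forest.
import ee
ee.Initialize()

catchment = ee.Geometry.Point([2.27, 9.2]).buffer(50_000)       # rough area (assumed)
labels = ee.FeatureCollection("users/example/beterou_points")   # hypothetical asset

composite = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterDate("2020-06-01", "2021-03-31")
             .filterBounds(catchment)
             .median()
             .select(["B2", "B3", "B4", "B8", "B11", "B12"]))

training = composite.sampleRegions(collection=labels, properties=["class"], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="class", inputProperties=composite.bandNames())

landcover = composite.classify(classifier)   # forest, savanna, cropland, settlement, water
```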

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 48
690 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling

Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal

Abstract:

In structural engineering, the monitoring of structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, advanced computing tools have enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information can be stored and exchanged via model files that are based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for the semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to the monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model applying the Unified Modeling Language (UML); subsequently, the semantic model is mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed. The TLCD is used to reduce the effects of structural responses in the context of dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, the acceleration response is automatically recorded by the sensor nodes equipped with accelerometers and analyzed using embedded computing. As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.

Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing

Procedia PDF Downloads 135
689 Eight-Week Exercise for Women: Impact on Anomalies in Width Depth and Environmental Dimension

Authors: Yalcin Kaya, Fatma Arslan, Ahmet Selim Kaya

Abstract:

This study aimed to determine undesirable hypertrophic anomalies in the female body and to investigate how they are affected by an 8-week exercise program tailored to individual conditions. The research was carried out on 35 women, with an approximate age of 30 ± 5.0, who had no previous regular sports practice and who had enrolled at the gymnasium because of asymmetric body structure and weight gain. Width, depth, and circumference measurements were taken from the participants' bodies, and the exercise protocol was applied for 8 weeks according to the individual measurements obtained. After 8 weeks, the same measurements were taken again, using a ruler and paper tape. The findings were evaluated and differences were analyzed by a paired-sample t-test. According to the findings, ulnae distal proiecturas width averages were 44.77 ± 3.65 and 43.52 ± 3.47 pre- and post-exercise, respectively. Bithorachanteric width averages were 29.3 ± 3.12 before exercise and 26.67 ± 3.27 after exercise. Average abdominal widths were 18.64 ± 4.14 (before exercise) and 18.01 ± 6.27 (after exercise). The distances between the malleoli were 16.98 ± 1.62 (before exercise) and 16.70 ± 1.64 (after exercise). These results were statistically significant (p < 0.05). The mean externus abdominis circumference was 93.97 ± 8.91 pre-exercise and 90.82 ± 8.24 post-exercise; this result was also statistically significant (p < 0.05). In conclusion, the findings of the study show that inactivity, uncontrolled daily activities, erroneous posture, and malnutrition cause some anomalies in the human body. However, these anomalies were reduced by the eight-week protocol of consciously standardized and regular exercise, in parallel with the loss of excess weight, and may be removed with much longer training; regular exercise is thus proposed as a path to better health and a more attractive appearance.

Keywords: women, body, circumference-width and depth measurements, hypertrophy, exercise

Procedia PDF Downloads 372
688 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir

Authors: Ahmad Fahim Nasiry, Shigeo Honma

Abstract:

We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy’s law are used to study the fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in the sand, using heavy oil, is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the governing partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. In the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, under the finite difference assumption. To better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to calculate the pressure and water saturation equations and to draw the pressure and water distribution contour lines for the reservoir.
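A hedged 1-D illustration of the saturation update used in such simulators (explicit upwinding of the Buckley-Leverett equation with assumed Corey-type curves; the paper's 2-D five-spot code couples this to an ADI pressure solve):

```python
# Hedged sketch: water injected at the left face displaces oil; the
# saturation front is advanced with an explicit upwind flux difference.
import numpy as np

def fractional_flow(sw, mu_w=1.0, mu_o=5.0):
    krw, kro = sw ** 2, (1.0 - sw) ** 2      # assumed relative permeabilities
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

n, dt, u = 200, 2e-3, 1.0                    # cells, time step, total velocity
dx = 1.0 / n
sw = np.zeros(n); sw[0] = 1.0                # injection boundary
for _ in range(2000):
    f = fractional_flow(sw)
    sw[1:] -= dt / dx * u * (f[1:] - f[:-1]) # upwind update
    sw[0] = 1.0
print(sw[:10])                               # the advancing water front
```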

Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding

Procedia PDF Downloads 319
687 Ray Tracing Modified 3D Image Method Simulation of Picocellular Propagation Channel Environment

Authors: Fathi Alwafie

Abstract:

In this paper, we present a simulation of the propagation characteristics of the picocellular propagation channel environment. The first aim has been to find a correct description of the environment for the received wave. The result of the first investigations is that the indoor wave environment changes significantly as the electrical parameters of the construction materials are changed. A modified 3D ray tracing image method tool has been utilized for the coverage prediction. A detailed analysis of the dependence of the indoor wave on the wide-band characteristics of the channel, namely the Root Mean Square (RMS) delay spread and the mean excess delay, is also presented.
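The two wide-band metrics are simple moments of the power delay profile; a hedged sketch on toy values:

```python
# Hedged sketch: mean excess delay is the power-weighted mean of the
# path delays; RMS delay spread is the corresponding standard deviation.
import numpy as np

delays = np.array([0.0, 10.0, 25.0, 60.0]) * 1e-9   # path delays, s (toy values)
powers = np.array([1.0, 0.5, 0.2, 0.05])            # linear path powers (toy values)

mean_excess = np.sum(powers * delays) / np.sum(powers)
rms_spread = np.sqrt(np.sum(powers * delays ** 2) / np.sum(powers) - mean_excess ** 2)
print(mean_excess, rms_spread)
```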

Keywords: propagation, ray tracing, network, mobile computing

Procedia PDF Downloads 390
686 A Novel Combination Method for Computing the Importance Map of Image

Authors: Ahmad Absetan, Mahdi Nooshyar

Abstract:

The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work, we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and the Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality.
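A hedged sketch of the combination step (OpenCV's spectral-residual detector stands in for Harel's GBVS saliency, and the equal weights are an assumption; cv2.saliency requires the opencv-contrib package):

```python
# Hedged sketch: fuse a local edge-density map with a saliency map
# into a single normalized importance map.
import cv2
import numpy as np

img = cv2.imread("input.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                      cv2.Sobel(gray, cv2.CV_32F, 0, 1))
edge_density = cv2.boxFilter(edges, -1, (15, 15))  # local edge density

ok, saliency = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(gray)

norm = lambda m: cv2.normalize(m.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
importance = 0.5 * norm(edge_density) + 0.5 * norm(saliency)  # assumed equal weights
```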

Keywords: content-aware image resizing, visual saliency, edge density, image warping

Procedia PDF Downloads 570
685 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures; on the other hand, its required computational time is its main deficiency. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of a sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (classified in view of the time of peak ground acceleration).
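For reference, the SRSS combination mentioned above is a one-liner over the peak modal responses (toy values assumed):

```python
# Hedged sketch: SRSS estimate of total peak demand from peak modal responses.
import math

peak_modal = [0.12, 0.05, 0.02]                  # assumed peak modal displacements, m
srss = math.sqrt(sum(r * r for r in peak_modal))
print(f"SRSS demand estimate: {srss:.4f} m")     # ~0.1315 m
```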

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 348
684 Chebyshev Wavelets and Applications

Authors: Emanuel Guariglia

Abstract:

In this paper, we deal with Chebyshev wavelets. We analyze their properties by computing their Fourier transform. Moreover, we discuss the differential properties of Chebyshev wavelets due to the connection coefficients. The differential properties of Chebyshev wavelets, expressed by the connection coefficients (also called refinable integrals), are given by finite series in terms of the Kronecker delta. Moreover, we treat the p-order derivative of Chebyshev wavelets and compute its Fourier transform. Finally, we expand the mother wavelet in a Taylor series with applications in both fractional calculus and fractal geometry.
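For orientation, one common definition of the Chebyshev wavelet family in the literature (an assumption; the paper may work with a variant):

```latex
% For k a positive integer, n = 1, ..., 2^{k-1}, and T_m the Chebyshev
% polynomial of degree m:
\psi_{n,m}(t) =
\begin{cases}
  2^{k/2}\,\widetilde{T}_m\!\bigl(2^{k}t - 2n + 1\bigr), &
      \dfrac{n-1}{2^{k-1}} \le t < \dfrac{n}{2^{k-1}},\\[6pt]
  0, & \text{otherwise,}
\end{cases}
\qquad
\widetilde{T}_m =
\begin{cases}
  \tfrac{1}{\sqrt{\pi}}, & m = 0,\\[4pt]
  \sqrt{\tfrac{2}{\pi}}\,T_m, & m > 0.
\end{cases}
```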

Keywords: Chebyshev wavelets, Fourier transform, connection coefficients, Taylor series, local fractional derivative, Cantor set

Procedia PDF Downloads 110
683 A Novel Unconditionally Secure and Lightweight Bipartite Key Agreement Protocol

Authors: Jun Liu

Abstract:

This paper introduces a new bipartite key agreement (2PKA) protocol which is unconditionally secure and lightweight. The unconditional security stems from the known impossibility of distinguishing a particular solution from all possible solutions of an underdetermined system of equations. This indistinguishability prevents an adversary from inferring the common secret key even with access to an unlimited amount of computing capability. The new 2PKA protocol is also lightweight because the calculation of a common secret key makes use only of simple modular arithmetic. This information-theoretic 2PKA scheme provides the desired features of Key Confirmation (KC), Session Key (SK) security, Known-Key (KK) security, protection of individual privacy, and a uniformly distributed value of the common key under a prime modulus.
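A toy of why an underdetermined system hides the secret (not the paper's actual protocol; the coefficients are arbitrary):

```python
# Hedged sketch: one public equation a1*x1 + a2*x2 = b (mod p) has p
# solutions, so seeing (a1, a2, b) leaves the secret pair (x1, x2)
# undetermined regardless of the adversary's computing power.
p = 101
a1, a2, b = 17, 23, 55                       # public values (toy)
solutions = [(x1, x2) for x1 in range(p) for x2 in range(p)
             if (a1 * x1 + a2 * x2) % p == b]
print(len(solutions))                        # p candidates: 101
```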

Keywords: bipartite key agreement, information-theoretic cryptography, perfect security, lightweight

Procedia PDF Downloads 57
682 Development of the Web-Based Multimedia N-Screen Service System for Cross Platform

Authors: S. Bae, J. Shin, S. Lee

Abstract:

With the development of smart devices such as Smart TVs, smartphones, tablet PCs, and laptops, interest in N-Screen services that can be cross-linked with heterogeneous devices is increasing. N-Screen refers to user-centric services that allow multimedia content to be shared and watched continuously anytime and anywhere. However, existing N-Screen systems have the limitation that the application must be implemented for each platform and device to provide multimedia services. To overcome this limitation, a Multimedia N-Screen Service System delivered through the web, and therefore independent of the underlying environment, is proposed. The combination of web and cloud computing technologies in this study results in increased efficiency and reduced costs.

Keywords: N-screen, web, cloud, multimedia

Procedia PDF Downloads 288
681 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster and of the separation between clusters, which renders them potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there was no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
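A hedged sketch of the evaluation step on synthetic feature vectors (not the study's EEG recordings; the cluster count and feature dimensions mirror the setup described above):

```python
# Hedged sketch: cluster 4-dimensional entropy features with k-means
# and score the partition with the mean silhouette coefficient.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# 22 "subjects" x 10 trials, 4 entropy features (SE, FE, AE, PE) per trial
features = np.vstack([rng.normal(loc=i * 0.5, scale=0.3, size=(10, 4))
                      for i in range(22)])

labels = KMeans(n_clusters=22, n_init=10, random_state=0).fit_predict(features)
print(silhouette_score(features, labels))    # closer to 1 = cleaner separation
```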

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 269
680 CMPD: Cancer Mutant Proteome Database

Authors: Po-Jung Huang, Chi-Ching Lee, Bertrand Chin-Ming Tan, Yuan-Ming Yeh, Julie Lichieh Chu, Tin-Wen Chen, Cheng-Yang Lee, Ruei-Chi Gan, Hsuan Liu, Petrus Tang

Abstract:

Whole-exome sequencing, which focuses on the protein-coding regions of disease/cancer-associated genes based on a priori knowledge, is the most cost-effective method to study the association between genetic alterations and disease. Recent advances in high-throughput sequencing technologies and proteomic techniques have provided an opportunity to integrate genomics and proteomics, allowing mutated peptides corresponding to mutated genes to be readily detected. Since sequence database search is the most widely used method for protein identification in mass spectrometry (MS)-based proteomics, a mutant proteome database is required to better approximate the real protein pool and improve the identification of disease-associated mutated proteins. Large-scale whole exome/genome sequencing studies were launched by the National Cancer Institute (NCI), the Broad Institute, and The Cancer Genome Atlas (TCGA), which provide not only comprehensive reports on the analysis of coding variants in diverse samples and cell lines but also an invaluable resource for the extensive research community. However, no existing database collects the mutant protein sequences related to the variants identified in these studies. CMPD is designed to address this issue, serving as a bridge between genomic data and proteomic studies and focusing on protein sequence-altering variations originating from both germline and cancer-associated somatic variations.

Keywords: TCGA, cancer, mutant, proteome

Procedia PDF Downloads 581
679 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust

Authors: Hassan Gholibeigian, Kazem Gholibeigian

Abstract:

In this paper, our goal is the determination of loading versus time in the crust. To this end, we present a computational procedure for a cumulative strain energy time profile which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault, which we believe is more accurate than many of the methods presently in use. In the coming pages, after a short review of the research presently going on in the area of earthquake analysis and prediction, earthquake mechanisms in both the jerk and sequence earthquake directions are discussed. Our computational procedure is then presented using the differential equations of equilibrium, which govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We then employ the model developed by Von Mises for the stress-strain relationship in our calculations, modified with the addition of an extra term to account for thermal effects. For the calculation of the strain energy, the idea of the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis, in brief, states that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. The Denali fault is briefly discussed as a case study, and the cumulative strain energy is graphically represented versus time. At the end, based on some hypothetical earthquake data, the final results are verified.

Keywords: pulsating mantle hypothesis, inner core’s dislocation, outer core’s bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour

Procedia PDF Downloads 266
678 Microwave Synthesis and Molecular Docking Studies of Azetidinone Analogous Bearing Diphenyl Ether Nucleus as a Potent Antimycobacterial and Antiprotozoal Agent

Authors: Vatsal M. Patel, Navin B. Patel

Abstract:

The present study deals with developing a series bearing a diphenyl ether nucleus using a structure-based drug design concept. A new series of diphenyl ether based azetidinones, namely N-(3-chloro-2-oxo-4-(3-phenoxyphenyl)azetidin-1-yl)-2-(substituted amino)acetamides (2a-j), has been synthesized by the condensation of m-phenoxybenzaldehyde with 2-(substituted-phenylamino)acetohydrazide followed by the cyclisation of the resulting Schiff bases (1a-j), by a conventional method as well as a microwave heating approach as part of an environmentally benign synthetic protocol. All the synthesized compounds were characterized by spectral analysis and were screened for in vitro antimicrobial, antitubercular, and antiprotozoal activity. The compound 2f was found to be the most active against M. tuberculosis (MIC 6.25 µM) in the primary screening, and the same derivative showed potency against L. mexicana and T. cruzi with MIC values of 2.09 and 6.69 µM, comparable to the reference drugs miltefosine and nifurtimox. To provide understandable evidence for predicting the binding mode and approximate binding energy of a compound to a target in terms of ligand-protein interaction, all synthesized compounds were docked against an enoyl-[acyl-carrier-protein] reductase of M. tuberculosis (PDB ID: 4u0j). The computational studies revealed that the azetidinone derivatives have a high affinity for the active site of the enzyme, which provides a strong platform for new structure-based design efforts. The Lipinski parameters showed good drug-like properties, and the compounds can be developed as oral drug candidates.

Keywords: antimycobacterial, antiprotozoal, azetidinone, diphenylether, docking, microwave

Procedia PDF Downloads 146
677 Composition and Distribution of Seabed Marine Litter Along Algerian Coast (Western Mediterranean)

Authors: Ahmed Inal, Samir Rouidi, Samir Bachouche

Abstract:

The present study focuses on the distribution and composition of seafloor marine litter associated with trawlable fishing areas along the Algerian coast. The sampling was done with a GOC73 bottom trawl during four (04) demersal resource assessment cruises, in 2016, 2019, 2021, and 2022, carried out on board the BELKACEM GRINE R/V. A total of 254 fishing hauls were sampled for the assessment of marine litter. Hauls were performed between 22 and 600 m of depth, with durations between 30 and 60 min; all sampling was conducted during daylight. After each haul, marine litter was sorted and split from the catch. Then, following the MEDITS protocol, litter was sorted into six categories (plastic, rubber, metal, wood, glass, and natural fiber), and all items were counted and weighed separately to the nearest 0.5 g. The results show that the maximum marine litter densities on the seafloor of the trawlable fishing areas along the Algerian coast were, respectively, 1996 items/km² in 2016, 5164 items/km² in 2019, 2173 items/km² in 2021, and 7319 items/km² in 2022. Plastic was the most abundant litter, representing 46% of marine litter in 2016, 67% in 2019, 69% in 2021, and 74% in 2022. Regarding weight, marine litter per haul varied between 0.00 and 103 kg in 2016, between 0.04 and 81 kg in 2019, between 0.00 and 68 kg in 2021, and between 0.00 and 318 kg in 2022. The maximum share of marine litter in the total catch approximated, respectively, 66% in 2016, 90% in 2019, 65% in 2021, and 91% in 2022, and the average loss in catch was estimated at 7.4% in 2016, 8.4% in 2019, 5.7% in 2021, and 6.4% in 2022. Bathymetric and geographical variability had a significant impact on both the density and the weight of marine litter. A marine litter monitoring program is necessary to support further solution proposals.

Keywords: composition, distribution, seabed, marine litter, algerian coast

Procedia PDF Downloads 53
676 Instructional Design Strategy Based on Stories with Interactive Resources for Learning English in Preschool

Authors: Vicario Marina, Ruiz Elena, Peredo Ruben, Bustos Eduardo

Abstract:

The development group of Educational Computing at the National Polytechnic Institute (IPN) in Mexico has been developing interactive resources at the preschool level in an effort to improve learning in the Child Development Centers (CENDI). This work describes both a didactic architecture and a strategy for teaching English with digital stories, using interactive resources available through a web repository designed for use on mobile platforms. It will initially be accessible to 500 children, and worldwide by the end of 2015.

Keywords: instructional design, interactive resources, digital educational resources, story based English teaching, preschool education

Procedia PDF Downloads 462
675 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and vast computing capacity. This is closely tied to technological developments in the area of artificial intelligence, which has prompted analysis covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held at both the European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years, and it would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology increases the speed and efficiency of the actions taken, but also creates security risks of unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence therefore requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: (1) which principles of data protection should be applied during the processing of personal data in artificial intelligence systems, and (2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and entities processing personal data cannot be excluded, and changes may be required in the provisions regarding the assignment of liability for a breach of personal data protection in artificial intelligence systems. The research process concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how the European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer will include examples to illustrate the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 47
674 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious camera calibration issue, and its use in industrial production carries strict accuracy requirements: precision problems in the visual recognition system directly impact the effectiveness and quality of industrial production, necessitating further study of positioning precision in recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study identifies the positions of target components by detecting information at the boundaries and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The component's current posture is determined from inclination detection on the RGB image and verification against the depth image. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed using the point cloud information.
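A hedged sketch of the bounding-box step on a synthetic cloud (real systems would fit an oriented box and add the RGB-D checks described above):

```python
# Hedged sketch: fit an axis-aligned bounding box to a component's point
# cloud and compute the aspect ratio used to match modularized classes.
import numpy as np

cloud = np.random.rand(5000, 3) * np.array([2.0, 0.4, 0.2])  # synthetic slab-like part
mins, maxs = cloud.min(axis=0), cloud.max(axis=0)
dims = np.sort(maxs - mins)[::-1]            # box edge lengths, longest first
aspect_ratio = dims[0] / dims[1]
print(dims, aspect_ratio)
```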

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 75
673 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of logs data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
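A toy of the second idea (the 5% cutoff and the error model are assumptions; the production heuristic is more involved):

```python
# Hedged sketch: answer a count query from a uniform row sample and mark
# the estimate "accurate" only when its relative standard error is small.
import math
import random

rows = [random.choice("abc") for _ in range(1_000_000)]
rate = 0.01                                   # sampling rate
sample = random.sample(rows, int(len(rows) * rate))

hits = sum(1 for r in sample if r == "a")
estimate = hits / rate                        # scaled-up count
rel_se = 1.0 / math.sqrt(max(hits, 1))        # ~1/sqrt(hits) for counts
print(estimate, "accurate" if rel_se < 0.05 else "approximate")
```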

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 249