Search results for: data harvesting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25494

23964 Relationship between Driving under the Influence and Traffic Safety

Authors: Eun Hak Lee, Young-Hyun Seo, Hosuk Shin, Seung-Young Kho

Abstract:

Among traffic crashes, driving under the influence (DUI) of alcohol is the most dangerous behavior in Seoul, South Korea. In 2016 alone, 40 deaths occurred out of 2,857 DUI cases. Since DUI is one of the major factors increasing the severity of crashes, intensive management of DUI is required to reduce traffic crash deaths and damages. This study aims to investigate the relationship between DUI and traffic safety in order to establish countermeasures for traffic safety improvement. The analysis was conducted on habitual drivers who drove under the influence, whose information was matched to crash data and fine data. Descriptive statistics are presented for the data used in this study, which consist of driver license acquisition, traffic fine, and crash data provided by the Korean National Police Agency. The drivers under the influence are classified by statistically significant criteria, such as driver’s age, license type, driving experience, and crash reasons. Based on the results of the analysis, we propose countermeasures to enhance traffic safety.

Keywords: driving under influence, traffic safety, traffic crash, traffic fine

Procedia PDF Downloads 222
23963 Simplified Measurement of Occupational Energy Expenditure

Authors: J. Wicks

Abstract:

Aim: To develop a simple methodology that allows heart rate (HR) data collected from inexpensive wearable devices to be expressed in a suitable format (METs) to quantitate occupational (and recreational) activity. Introduction: Assessment of occupational activity is commonly done by utilizing questionnaires in combination with prescribed MET levels for a vast range of previously measured activities. However, for any individual, the intensity of performing a specific activity can vary significantly. Ideally, objective measurement of individual activity is preferred. Though there is a wide range of HR recording devices, there is a distinct lack of methodology for processing the collected data to quantitate energy expenditure (EE). The HR index equation expresses METs in relation to relative HR, i.e., the ratio of activity HR to resting HR. The use of this equation provides a simple utility for objective measurement of EE. Methods: During a typical occupational work period of approximately 8 hours, HR data were recorded using a Polar RS 400 wrist monitor. Recorded data were downloaded to a Windows PC, and non-HR data were stripped from the ASCII file using ‘Notepad’. The HR data were exported to a spreadsheet program and sorted by HR range into a histogram format. Three HRs were determined, namely a resting HR (the HR delimiting the lowest 30 minutes of recorded data), a mean HR, and a peak HR (the HR delimiting the highest 30 minutes of recorded data). HR indices were calculated (mean index equals mean HR/rest HR, and peak index equals peak HR/rest HR), with mean and peak indices being converted to METs using the HR index equation. Conclusion: Inexpensive HR recording devices can be utilized to make reasonable estimates of occupational (or recreational) EE suitable for large-scale demographic screening by utilizing the HR index equation. The intrinsic value of the HR index equation is that it is independent of factors that influence absolute HR, namely fitness, smoking, and beta-blockade.
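
As a minimal sketch of this workflow, the following Python fragment sorts a day's recording, takes the lowest and highest 30 minutes as the rest and peak bands, and converts the indices to METs. It assumes the published form of the HR index equation, METs = 6 × (HR/HRrest) − 5, approximates the "delimiting" HRs by band means, and uses synthetic data in place of a real Polar export.

```python
import numpy as np

def hr_to_mets(hr_series, epoch_s=5):
    """Estimate METs from a day's HR recording via the HR index method.

    Assumes the published HR index equation METs = 6 * (HR / HRrest) - 5;
    the rest and peak HRs are approximated here as the means of the lowest
    and highest 30 minutes of recorded epochs.
    """
    hr = np.sort(np.asarray(hr_series, dtype=float))
    n30 = int(30 * 60 / epoch_s)          # number of epochs in 30 minutes
    rest_hr = hr[:n30].mean()             # lowest 30 min of recorded data
    peak_hr = hr[-n30:].mean()            # highest 30 min of recorded data
    mean_hr = hr.mean()
    mean_index, peak_index = mean_hr / rest_hr, peak_hr / rest_hr
    return 6 * mean_index - 5, 6 * peak_index - 5

# Example with a synthetic 8-hour recording at 5-second epochs
rng = np.random.default_rng(0)
hr_day = rng.normal(95, 15, size=8 * 60 * 12).clip(55, 180)
print(hr_to_mets(hr_day))
```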

Keywords: energy expenditure, heart rate histograms, heart rate index, occupational activity

Procedia PDF Downloads 296
23962 Empirical Study of Running Correlations in Exam Marks: Same Statistical Pattern as Chance

Authors: Weisi Guo

Abstract:

It is well established that there may be running correlations in sequential exam marks due to students sitting in the order of course registration patterns. As such, random, non-sequential sampling of exam marks is a standard recommended practice. Here, the paper examines a large amount of exam data stretching several years across different modules to see the degree to which this is true. Using the real mark distribution as a generative process, it was found that randomly simulated data exhibited the same degree of sequential correlation as the real data. That is to say, the running correlations that one often observes are statistically identical to chance. Digging deeper, it was found that some high running correlations involve students who indeed share a common course history and make similar mistakes. However, at the statistical scale of a module question, the combined effect is statistically similar to the random shuffling of papers. As such, there may not be a need to take random samples of marks, but it remains good practice to mark papers in a random sequence to reduce repetitive marking bias and errors.
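
A minimal sketch of the chance-comparison idea: compute the largest running (lag-1) correlation over sliding windows of a mark sequence, then compare it against a null distribution built by shuffling the same marks. The window size, statistics, and synthetic marks are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_running_corr(marks, window=20):
    """Largest lag-1 correlation over sliding windows of sequential marks."""
    best = -1.0
    for i in range(len(marks) - window):
        w = marks[i:i + window]
        best = max(best, np.corrcoef(w[:-1], w[1:])[0, 1])
    return best

marks = rng.normal(62, 12, size=200).clip(0, 100)   # stand-in mark sheet
observed = max_running_corr(marks)

# Null distribution: the same marks, randomly shuffled many times
null = [max_running_corr(rng.permutation(marks)) for _ in range(500)]
p = np.mean([n >= observed for n in null])
print(f"observed max running corr = {observed:.2f}, p vs chance = {p:.2f}")
```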

Keywords: data analysis, empirical study, exams, marking

Procedia PDF Downloads 181
23961 Factors Influencing Soil Organic Carbon Storage Estimation in Agricultural Soils: A Machine Learning Approach Using Remote Sensing Data Integration

Authors: O. Sunantha, S. Zhenfeng, S. Phattraporn, A. Zeeshan

Abstract:

The decline of soil organic carbon (SOC) in global agriculture is a critical issue requiring rapid and accurate estimation for informed policymaking. While it is recognized that SOC predictors vary significantly when derived from remote sensing data and environmental variables, identifying the specific parameters most suitable for accurately estimating SOC in diverse agricultural areas remains a challenge. This study utilizes remote sensing data to estimate SOC and to identify influential factors in diverse agricultural areas, such as paddy, corn, sugarcane, cassava, and perennial crops. Extreme gradient boosting (XGBoost), random forest (RF), and support vector regression (SVR) models are employed to analyze these factors' impact on SOC estimation. The results show that the key factors influencing SOC estimation include slope, vegetation indices (EVI), spectral reflectance indices (red index, red edge2), temperature, land use, and surface soil moisture, as indicated by their averaged importance scores across the XGBoost, RF, and SVR models. Different machine learning algorithms therefore reveal different influential factors from remote sensing data and environmental variables, which underscores the importance of feature selection for accurate SOC estimation.
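
A sketch of the importance-averaging step, under stated assumptions: the data are synthetic, the feature names are taken from the abstract, GradientBoostingRegressor stands in for XGBoost, and permutation importance is used so that the same importance measure applies to all three model types.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

# Synthetic stand-ins for the remote-sensing predictors named in the abstract
features = ["slope", "EVI", "red_index", "red_edge2", "temperature", "soil_moisture"]
X, y = make_regression(n_samples=300, n_features=len(features), noise=5, random_state=0)

models = [GradientBoostingRegressor(random_state=0),  # stand-in for XGBoost
          RandomForestRegressor(random_state=0),
          SVR()]

scores = np.zeros(len(features))
for m in models:
    m.fit(X, y)
    r = permutation_importance(m, X, y, n_repeats=10, random_state=0)
    imp = np.clip(r.importances_mean, 0, None)
    scores += imp / imp.sum()                        # normalise per model

for name, s in sorted(zip(features, scores / len(models)), key=lambda kv: -kv[1]):
    print(f"{name:13s} {s:.3f}")
```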

Keywords: factors influencing SOC estimation, remote sensing data, environmental variables, machine learning

Procedia PDF Downloads 35
23960 Visualization-Based Feature Extraction for Classification in Real-Time Interaction

Authors: Ágoston Nagy

Abstract:

This paper introduces a method of using unsupervised machine learning to visualize the feature space of a dataset in 2D in order to find the most characteristic segments in the set. After dimension reduction, users can select clusters by manual drawing. Selected clusters are recorded into a data model that is used for later predictions based on real-time data. Predictions are made with supervised learning, using the Gesture Recognition Toolkit. The paper introduces two example applications: a semantic audio organizer for analyzing incoming sounds, and a gesture database organizer where gestural data (recorded by a Leap Motion) are visualized for further manipulation.
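
A rough Python sketch of the pipeline's shape, with substitutions clearly flagged: PCA provides the 2D projection, KMeans stands in for the user's manual lasso selection, and a k-nearest-neighbors classifier stands in for the Gesture Recognition Toolkit's real-time prediction stage.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# High-dimensional features (e.g. audio or gesture descriptors), unlabeled
X, _ = make_blobs(n_samples=400, n_features=16, centers=4, random_state=2)

# 1) Unsupervised 2D projection of the feature space
xy = PCA(n_components=2).fit_transform(X)

# 2) Stand-in for manual lasso selection: clusters "drawn" in the 2D view
labels = KMeans(n_clusters=4, n_init=10, random_state=2).fit_predict(xy)

# 3) Supervised model over the selected clusters for real-time prediction
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(clf.predict(X[:3]))  # classify incoming frames against the drawn clusters
```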

Keywords: gesture recognition, machine learning, real-time interaction, visualization

Procedia PDF Downloads 353
23959 Design and Development of Bar Graph Data Visualization in 2D and 3D Space Using Front-End Technologies

Authors: Sourabh Yaduvanshi, Varsha Namdeo, Namrata Yaduvanshi

Abstract:

This study delves into the design and development intricacies of crafting detailed 2D bar charts with d3.js, recognizing its limitations in generating 3D visuals within the Document Object Model (DOM). The study combines three.js with d3.js, facilitating a smooth evolution from 2D to immersive 3D representations. This fusion illustrates the synergy between front-end technologies and expands the horizons of data visualization, both technically and creatively. The paper sets out the methodologies behind this integration and guides practitioners in transcending 2D constraints, propelling data visualization into three-dimensional space.

Keywords: design, development, front-end technologies, visualization

Procedia PDF Downloads 35
23958 Prediction of All-Beta Protein Secondary Structure Using Garnier-Osguthorpe-Robson Method

Authors: K. Tejasri, K. Suvarna Vani, S. Prathyusha, S. Ramya

Abstract:

Proteins are chained sequences of amino acids joined by peptide bonds. Many different formations of the chains are possible due to the multiple combinations of amino acids and rotation in numerous positions along the chain. Protein structure prediction is one of the crucial goals pursued in bioinformatics and theoretical chemistry. Among the four structural levels of proteins, we emphasize mainly the secondary structure, which basically comprises alpha-helices and beta-sheets. Multi-class classification of imbalanced data, in which some classes have very limited training samples compared with the others, is truly a challenge to overcome and has to be addressed for the beta-strands. The secondary structure data are extracted from the protein primary sequence, and the beta-strands are predicted using suitable machine learning algorithms.
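
One common way to address the class imbalance described above is to weight classes inversely to their frequency. The sketch below does this with scikit-learn's class_weight='balanced' on placeholder data; the one-hot window encoding and the random labels are hypothetical stand-ins for real sequence-derived features, so the reported scores only demonstrate the mechanics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)

# Hypothetical dataset: one-hot encoded sequence windows -> {helix, strand, coil}
AMINO, WIN = 20, 13                       # alphabet size, window length
X = rng.integers(0, 2, size=(1500, AMINO * WIN)).astype(float)
y = rng.choice(["H", "E", "C"], size=1500, p=[0.35, 0.15, 0.50])  # 'E' is rare

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=3)

# class_weight='balanced' counters the scarcity of beta-strand samples
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=3).fit(Xtr, ytr)
print(classification_report(yte, clf.predict(Xte)))
```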

Keywords: proteins, secondary structure elements, beta-sheets, beta-strands, alpha-helices, machine learning algorithms

Procedia PDF Downloads 94
23957 Identify Users Behavior from Mobile Web Access Logs Using Automated Log Analyzer

Authors: Bharat P. Modi, Jayesh M. Patel

Abstract:

The mobile Internet is acting as a major source of data. As the number of web pages continues to grow, the mobile web provides data miners with just the right ingredients for extracting information. In order to cater to this growing need, a special term, Mobile Web mining, was coined. Mobile Web mining makes use of data mining techniques and deciphers potentially useful information from web data. Web usage mining deals with understanding the behavior of users by making use of mobile web access logs that are generated on the server while the user is accessing the website. A web access log comprises various entries like the name of the user, his IP address, the number of bytes transferred, a timestamp, etc. A variety of log analyzer tools exist which help in analyzing things like users' navigational patterns and the parts of the website the users are most interested in. The present paper makes use of one such log analyzer tool, Mobile Web Log Expert, for ascertaining the behavior of users who access an astrology website. It also provides a comparative study of a few available log analyzer tools.
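
To make the idea concrete, here is a minimal sketch of what a log analyzer does under the hood: parse Common Log Format entries and aggregate pages and visitors. The regular expression and the sample lines are illustrative assumptions, not output from Mobile Web Log Expert.

```python
import re
from collections import Counter

# Common Log Format; the field layout is an assumption for illustration
LOG_RE = re.compile(r'(?P<ip>\S+) \S+ (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
                    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
                    r'(?P<status>\d{3}) (?P<bytes>\d+|-)')

lines = [
    '10.0.0.7 - alice [21/Apr/2015:10:01:11 +0000] "GET /horoscope/aries HTTP/1.1" 200 5120',
    '10.0.0.9 - - [21/Apr/2015:10:01:15 +0000] "GET /horoscope/leo HTTP/1.1" 200 4096',
    '10.0.0.7 - alice [21/Apr/2015:10:02:30 +0000] "GET /tarot HTTP/1.1" 404 512',
]

pages, visitors = Counter(), Counter()
for line in lines:
    m = LOG_RE.match(line)
    if m:
        pages[m["path"]] += 1
        visitors[m["ip"]] += 1

print("most requested pages:", pages.most_common(2))
print("most active visitors:", visitors.most_common(1))
```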

Keywords: mobile web access logs, web usage mining, web server, log analyzer

Procedia PDF Downloads 361
23956 Modeling Food Popularity Dependencies Using Social Media Data

Authors: Devashish Khulbe, Manu Pathak

Abstract:

The rise in popularity of major social media platforms has enabled people to share photos and textual information about their daily lives. One of the popular topics about which information is shared is food. Since much of the media about food is attributed to particular locations and restaurants, information like the spatio-temporal popularity of various cuisines can be analyzed. Tracking the popularity of food types and retail locations across space and time can also be useful for business owners and restaurant investors. In this work, we present an approach using off-the-shelf machine learning techniques to identify trends and the popularity of cuisine types in an area using geo-tagged data from social media, Google Images, and Yelp. After adjusting for time, we use kernel density estimation to find hot spots across the location and model the dependencies among cuisine popularity using Bayesian networks. We consider the Manhattan borough of New York City as the location for our analyses, but the approach can be used for any area with social media data and information about retail businesses.
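
A minimal sketch of the hot-spot step using SciPy's Gaussian kernel density estimate over hypothetical geo-tagged points; the coordinates, bandwidth rule, and grid are illustrative assumptions, and the Bayesian-network stage is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Hypothetical geo-tagged posts for one cuisine (lon, lat), Manhattan-ish
pts = rng.normal([[-73.98], [40.75]], [[0.02], [0.03]], size=(2, 300))

kde = gaussian_kde(pts)                      # bandwidth via Scott's rule

# Evaluate density on a grid and report the hottest cell
lon = np.linspace(-74.03, -73.93, 50)
lat = np.linspace(40.70, 40.80, 50)
gx, gy = np.meshgrid(lon, lat)
dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
iy, ix = np.unravel_index(dens.argmax(), dens.shape)
print(f"hot spot near lon={lon[ix]:.3f}, lat={lat[iy]:.3f}")
```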

Keywords: web mining, geographic information systems, business popularity, spatial data analyses

Procedia PDF Downloads 116
23955 Area Exclosure as a Government Strategy to Restore Woody Plant Species Diversity: Case Study in Southern Ethiopia

Authors: Tsegaw Abebe, Temesgen Abebe

Abstract:

Land degradation is one of the most serious environmental challenges in Ethiopia and is one of the major underlying causes of declining agricultural productivity. The Ethiopian government realized the significance of environmental restoration, specifically of deforested and degraded land, after the 1973 and 1984/85 major famines that struck the country. Among the various conservation strategies, the establishment of area exclosures has been regarded as an effective response to halt and reverse the problems of land degradation. There are limited studies in Ethiopia dealing with how the conversion of free grazing lands and degraded lands to exclosures increases biomass accumulation, and these studies are not sufficient to draw conclusions about the strength of area exclosures to restore degraded vegetation under diverse agro-ecological conditions. The overall objective of this study was, therefore, to assess and evaluate the usefulness of the area exclosure technique in enhancing the rehabilitation of a degraded ecosystem and thereby increasing the natural capital in the study site (southern Ethiopia). Woody plant species were collected from an area exclosure of eight years and an adjacent degraded land with similar landscape positions using a systematic sampling plot design. Woody species diversity was determined by the Shannon diversity index. The comparative assessment of woody plant species showed that the densities of woody species in the exclosure and the degraded site were 778 and 222 individuals per hectare, respectively. A total of 16 woody species, representing 12 families, were recorded in the study site. All 12 families were recorded in the exclosure, while 5 were recorded in the degraded site. Of the 16 species, 15 were recorded in the exclosure and six in the degraded site. A total of 10 species were recorded in the exclosure that were absent in the degraded site; similarly, one species was recorded in the degraded site that was not present in the exclosure. The results showed that protecting the degraded site from human and animal disturbances promotes woody plant species regeneration and productivity. Apart from increasing woody plant species, the local communities have benefited from the exclosure in the form of both products (grass harvesting) and services (ecological). For this reason, the local communities have positive attitudes and contribute a lot to the success of the exclosures in the study site. The present study clearly showed that area exclosure interventions should be oriented towards managing and improving the productivity of degraded land in such a way that both the need for conservation of biodiversity and environmental sustainability and the demands of the local people for biomass resources can be met.
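
The Shannon diversity index used above is H' = −Σ pᵢ ln pᵢ over species proportions pᵢ. A short Python illustration with hypothetical stems-per-species tallies (the counts below are invented for the example, not the study's data):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical stems-per-species tallies for the two sites
exclosure = [120, 95, 80, 60, 55, 40, 38, 30, 25, 20, 18, 15, 12, 10, 8]
degraded = [90, 60, 30, 22, 12, 8]

print(f"exclosure H' = {shannon_diversity(exclosure):.2f}")
print(f"degraded  H' = {shannon_diversity(degraded):.2f}")
```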

Keywords: degraded land, exclosure, land restoration, woody vegetation

Procedia PDF Downloads 427
23954 Hierarchical Piecewise Linear Representation of Time Series Data

Authors: Vineetha Bettaiah, Heggere S. Ranganath

Abstract:

This paper presents a Hierarchical Piecewise Linear Approximation (HPLA) for the representation of time series data, in which the time series is treated as a curve in the time-amplitude image space. The curve is partitioned into segments by choosing perceptually important points as break points. Each segment between adjacent break points is recursively partitioned into two segments at the best point or midpoint until the error between the approximating line and the original curve becomes less than a pre-specified threshold. The HPLA representation achieves dimensionality reduction while preserving prominent local features and the general shape of the time series. The representation permits coarse-to-fine processing at different levels of detail, allows flexible definition of similarity based on mathematical measures or general time series shape, and supports time series data mining operations including query by content, clustering, and classification based on whole or subsequence similarity.
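
A minimal sketch of the recursive splitting idea: fit the chord between two break points, split at the point of largest deviation, and recurse until the error tolerance is met. The sketch splits only at the maximum-error point and omits the perceptually-important-point preprocessing that the paper uses.

```python
import numpy as np

def hpla(t, y, lo, hi, tol, segments):
    """Recursively split [lo, hi] until the line-fit error is below tol."""
    # Error of the straight line joining the two break points
    line = np.interp(t[lo:hi + 1], [t[lo], t[hi]], [y[lo], y[hi]])
    err = np.abs(y[lo:hi + 1] - line)
    if err.max() <= tol or hi - lo < 2:
        segments.append((lo, hi))
        return
    split = lo + int(err.argmax())            # best point (largest deviation)
    split = min(max(split, lo + 1), hi - 1)   # keep both halves non-empty
    hpla(t, y, lo, split, tol, segments)
    hpla(t, y, split, hi, tol, segments)

t = np.linspace(0, 4 * np.pi, 200)
y = np.sin(t) + 0.05 * np.random.default_rng(5).normal(size=t.size)
segs = []
hpla(t, y, 0, len(t) - 1, tol=0.15, segments=segs)
print(f"{len(segs)} segments approximate {len(t)} points")
```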

Keywords: data mining, dimensionality reduction, piecewise linear representation, time series representation

Procedia PDF Downloads 275
23953 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea

Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama

Abstract:

Sea fisheries have the potential to become one of the nation's assets, contributing substantially to Indonesia's economy. This fishery potential is inseparable from the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods, namely statistical, comparative, and analytical. The data used include MODIS sea surface temperature imagery from the Aqua satellite at 4 km resolution for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite at 4 km resolution for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites at 27 km resolution for 2002-2015. The processing results show that upwelling in the sea south of East Java begins in June, identified by below-normal sea surface temperature anomalies, air masses moving from east to west, and high chlorophyll-a concentrations. In July, the upwelling region expands westward, reaching its peak in August. Chlorophyll-a concentration prediction using multiple linear regression equations shows excellent results for 2002-2015, with a correlation of 0.8 and an RMSE of 0.3. The prediction for 2016 also shows good results despite a decline in correlation: the correlation is 0.6, while the RMSE improves to 0.2.
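
A minimal sketch of the regression-and-verification step with synthetic monthly predictors (SST anomaly and wind stress are stand-ins for the study's satellite-derived inputs), reporting the same two verification statistics, correlation and RMSE:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

# Hypothetical monthly predictors: SST anomaly (degC) and zonal wind stress
n = 168                                   # 14 years of monthly data
sst_anom = rng.normal(0, 1.0, n)
wind = rng.normal(0.05, 0.02, n)
chl = 0.4 - 0.25 * sst_anom + 4.0 * wind + rng.normal(0, 0.1, n)

model = LinearRegression().fit(np.column_stack([sst_anom, wind]), chl)
pred = model.predict(np.column_stack([sst_anom, wind]))

corr = np.corrcoef(chl, pred)[0, 1]
rmse = np.sqrt(np.mean((chl - pred) ** 2))
print(f"correlation = {corr:.2f}, RMSE = {rmse:.2f} mg/m^3")
```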

Keywords: satellite, sea surface temperature, upwelling, wind stress

Procedia PDF Downloads 158
23952 Design an Intelligent Fire Detection System Based on Neural Network and Particle Swarm Optimization

Authors: Majid Arvan, Peyman Beygi, Sina Rokhsati

Abstract:

In-time detection of fire in buildings is of great importance. Employing intelligent methods for data processing in fire detection systems leads to a significant reduction of fire damage at the lowest cost. In this paper, the raw data obtained from the fire detection sensor networks in buildings are processed using intelligent methods based on neural networks, and the likelihood of fire is predicted. In order to enhance the quality of the system, the noise in the sensor data is reduced by wavelet analysis and the SVD technique. Meanwhile, the proposed neural network is trained using particle swarm optimization (PSO). In the simulation work, the data are collected from a sensor network inside the room and applied to the proposed network. The outputs are then compared with those of a conventional MLP network. The simulation results demonstrate the superiority of the proposed method over the conventional one.
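
A compact sketch of the PSO-trained network idea: a global-best particle swarm searches the weight vector of a tiny 2-4-1 network on synthetic [temperature, smoke] readings. The network size, PSO coefficients, and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy fire-detection data: [temperature, smoke] -> fire indicator
X = rng.uniform(0, 1, size=(200, 2))
y = ((X[:, 0] + X[:, 1]) > 1.1).astype(float)       # synthetic ground truth

def forward(w, X):
    """2-4-1 network; w packs all 17 weights and biases."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2).ravel() - b2))

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Plain global-best PSO over the packed weight vector
n_particles, dim = 30, 17
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -10, 10)               # keep weights bounded
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"final MSE = {loss(gbest):.4f}")
```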

Keywords: intelligent fire detection, neural network, particle swarm optimization, fire sensor network

Procedia PDF Downloads 380
23951 Investigation of Maritime Accidents with Exploratory Data Analysis in the Strait of Çanakkale (Dardanelles)

Authors: Gizem Kodak

Abstract:

The Strait of Çanakkale, together with the Strait of Istanbul and the Sea of Marmara, forms the Turkish Straits System. In other words, the Strait of Çanakkale is the southern gate of the system that connects the Black Sea countries with the other countries of the world. Due to the heavy maritime traffic, it is important to scientifically examine the accident characteristics in the region. In particular, the results indicated by descriptive statistics are of critical importance for strengthening the safety of navigation. At this point, exploratory data analysis offers strategic outputs in terms of defining the problem and knowing the strengths and weaknesses against possible accident risk. The study aims to determine the accident characteristics in the Strait of Çanakkale through temporal and spatial analysis of historical data, using Exploratory Data Analysis (EDA) as the research method. The results will reveal the general characteristics of maritime accidents in the region and form the infrastructure for future studies.
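
For illustration, an EDA of this kind typically starts from grouped descriptive statistics over the accident records. The following pandas sketch uses invented records; the fields and values are placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical accident records for the strait (type, month, position)
df = pd.DataFrame({
    "type":  ["collision", "grounding", "collision", "fire", "grounding", "collision"],
    "month": [1, 3, 7, 7, 8, 12],
    "lat":   [40.15, 40.02, 40.21, 40.10, 40.05, 40.19],
    "lon":   [26.40, 26.23, 26.45, 26.35, 26.27, 26.43],
})

# Temporal profile: accidents per month
print(df.groupby("month").size())

# Spatial profile: mean position per accident type
print(df.groupby("type")[["lat", "lon"]].mean())

# Type frequencies, the descriptive backbone of the EDA
print(df["type"].value_counts(normalize=True).round(2))
```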

Keywords: maritime accidents, EDA, Strait of Çanakkale, navigational safety

Procedia PDF Downloads 97
23950 Data Analysis to Uncover Terrorist Attacks Using Data Mining Techniques

Authors: Saima Nazir, Mustansar Ali Ghazanfar, Sanay Muhammad Umar Saeed, Muhammad Awais Azam, Saad Ali Alahmari

Abstract:

Terrorism is an important and challenging concern. The entire world is threatened by only a few sophisticated terrorist groups, and especially in the Gulf region and Pakistan it has become an extremely destructive phenomenon in recent years. Predicting the pattern of attack type, attack group, and target type is an intricate task. This study offers new insight into terrorist groups' attack types and their chosen targets. This research paper proposes a framework for the prediction of terrorist attacks using historical data, making an association between a terrorist group, its attack type, and its target. The analysis shows that the number of attacks per year will keep on increasing, and that Al-Harmayan in Saudi Arabia, Al-Qai’da in the Gulf region, and Tehreek-e-Taliban in Pakistan will remain responsible for many future terrorist attacks. The top targets of each group will be private citizens and property, police, government, and the military sector under constant circumstances.

Keywords: data mining, counter terrorism, machine learning, SVM

Procedia PDF Downloads 409
23949 SA-SPKC: Secure and Efficient Aggregation Scheme for Wireless Sensor Networks Using Stateful Public Key Cryptography

Authors: Merad Boudia Omar Rafik, Feham Mohammed

Abstract:

Data aggregation in wireless sensor networks (WSNs) provides a great reduction in energy consumption. The limited resources of sensor nodes make the choice of an encryption algorithm very important for providing security for data aggregation. Asymmetric cryptography involves large ciphertexts and heavy computations but solves, on the other hand, the key distribution problem of symmetric cryptography, which in turn provides smaller ciphertexts and faster computations. Also, recent research has shown that achieving end-to-end confidentiality and end-to-end integrity at the same time is a challenging task. In this paper, we propose SA-SPKC, a novel security protocol which addresses both security services for WSNs, and where only the base station can verify the individual data and identify the malicious node. Our scheme is based on stateful public key encryption (StPKE), which combines the best features of both kinds of encryption along with state in order to reduce the computation overhead. Our analysis

Keywords: secure data aggregation, wireless sensor networks, elliptic curve cryptography, homomorphic encryption

Procedia PDF Downloads 297
23948 Solar Seawater Desalination Still with Seawater Preheater Using Efficient Heat Transfer Oil: Numerical Investigation and Data Verification

Authors: Ahmed N. Shmroukh, Gamal Tag Abdel-Jaber, Rashed D. Aldughpassi

Abstract:

The feasibility of improving the performance of the proposed solar still unit, which operates in a very hot climate, is investigated numerically and verified with experimental data. This solar desalination unit, with the proposed auxiliary seawater preheating system using petrol-based textherm oil, was used to produce pure fresh water from seawater. The effective evaporation area of the basin is about 1 m2. The unit was tested in two main operation modes, namely normal and with the seawater preheating system. The results showed good agreement between the theoretical and experimental data; this means that the numerical model can be reliably used for predicting the proposed solar still's performance and design parameters. The results also showed that the freshwater productivity of the solar still in the preheating case is higher than in the normal case, leading to an increase in productivity of 42%.

Keywords: improving productivity, seawater desalination, solar stills, theoretical model

Procedia PDF Downloads 136
23947 The Parallelization of Algorithm Based on Partition Principle for Association Rules Discovery

Authors: Khadidja Belbachir, Hafida Belbachir

Abstract:

Following the expansion of physical storage media and the ceaseless need to accumulate data, sequential algorithms for association rule mining have proved ineffective, and the introduction of new parallel versions is imperative. We propose in this paper a parallel version of the sequential algorithm Partition. The latter is fundamentally different from the other sequential algorithms, because it scans the database only twice to generate the significant association rules; as a consequence, the parallel approach does not require much communication between the sites. The proposed approach was implemented in an experimental study. The obtained results show a great reduction in execution time compared with the sequential version and the Count Distributed algorithm.
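
A toy sketch of Partition's two-scan structure: each partition (one per processor in the parallel version) produces local frequent itemsets in a first scan, and the merged candidates are counted globally in a second scan. Itemsets here are limited to singletons and pairs for brevity; the transactions and support threshold are invented.

```python
from itertools import combinations
from collections import Counter

def local_frequent(partition, minsup):
    """Frequent itemsets (up to pairs here) within one partition."""
    counts = Counter()
    for tx in partition:
        for k in (1, 2):
            for iset in combinations(sorted(tx), k):
                counts[iset] += 1
    return {i for i, c in counts.items() if c >= minsup * len(partition)}

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"},
                {"a", "b", "c"}, {"a", "b"}, {"c"}, {"a", "b"}]
minsup = 0.5

# Scan 1: each partition yields its locally frequent candidates
parts = [transactions[:4], transactions[4:]]
candidates = set().union(*(local_frequent(p, minsup) for p in parts))

# Scan 2: one global count of the merged candidates
global_counts = Counter(i for tx in transactions
                        for i in candidates if set(i) <= tx)
frequent = {i: c for i, c in global_counts.items()
            if c >= minsup * len(transactions)}
print(frequent)
```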

Keywords: association rules, distributed data mining, partition, parallel algorithms

Procedia PDF Downloads 416
23946 A Less Complexity Deep Learning Method for Drones Detection

Authors: Mohamad Kassab, Amal El Fallah Seghrouchni, Frederic Barbaresco, Raed Abu Zitar

Abstract:

Detecting objects such as drones is a challenging task, as their relative size and maneuvering capabilities deceive machine learning models and cause them to misclassify drones as birds or other objects. In this work, we investigate applying several deep learning techniques to benchmark real data sets of flying drones. A deep learning paradigm is proposed for the purpose of mitigating the complexity of those systems. The proposed paradigm is a hybrid of the AdderNet deep learning paradigm and the Single Shot Detector (SSD) paradigm. The goal was to minimize the number of multiplication operations in the filtering layers within the proposed system and, hence, reduce complexity. Some standard machine learning techniques, such as SVM, are also tested and compared with the other deep learning systems. The data sets used for training and testing were either complete or filtered to remove the images with small objects. The data were either RGB or IR. Comparisons were made between all these types, and conclusions are presented.
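
The multiplication-free idea behind AdderNet can be shown in a few lines: filter responses are computed as negative L1 distances between the filter and each input patch, instead of the usual multiply-accumulate. This 1D toy sketch illustrates only that filtering step, not the paper's SSD hybrid.

```python
import numpy as np

def adder_conv1d(x, filters):
    """AdderNet-style 1D filtering: the response is the negative L1 distance
    between the filter and each input patch, so the filtering step needs
    only additions and absolute values, no multiplications."""
    k = filters.shape[1]
    n_out = x.size - k + 1
    out = np.empty((filters.shape[0], n_out))
    for f, w in enumerate(filters):
        for i in range(n_out):
            out[f, i] = -np.abs(x[i:i + k] - w).sum()
    return out

rng = np.random.default_rng(8)
signal = rng.normal(size=32)
banks = rng.normal(size=(4, 5))             # 4 filters of width 5
print(adder_conv1d(signal, banks).shape)    # (4, 28)
```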

Keywords: drones detection, deep learning, birds versus drones, precision of detection, AdderNet

Procedia PDF Downloads 182
23945 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan, were taken as a test case in order to assess their quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach’s alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. It is established that the study area is plain, tectonically least affected, and rich in oil and gas reserves; however, subsurface 3D modeling and contouring using the acquired database revealed high degrees of structural complexity and intense folding. The NRMSE showed a high percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing likewise indicated the bias and erratic nature of the acquired database, and a low estimated value of alpha (α) in Cronbach’s alpha test confirmed its poor reliability. A database of very low quality needs excessive static correction or, in some cases, reacquisition of data, which most of the time is not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further be utilized as a guideline for establishing database quality assessment models to make much more informed decisions in hydrocarbon exploration.
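
Two of the named statistics are easy to state precisely. A short sketch with invented horizon-depth picks standing in for the survey database: NRMSE normalises the RMSE by the observed range, and Cronbach's alpha is α = k/(k−1)·(1 − Σσᵢ²/σ_total²) over k repeated measurements.

```python
import numpy as np

def nrmse(observed, predicted):
    """RMSE normalised by the observed range, as a percentage."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100 * rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """items: (n_observations, n_items); alpha measures internal consistency."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(9)
depth_obs = rng.normal(1500, 80, 40)             # picked horizon depths (m)
depth_pred = depth_obs + rng.normal(0, 60, 40)   # model predictions
print(f"NRMSE = {nrmse(depth_obs, depth_pred):.1f}%")

# Four repeated picks of the same horizon, as the "items"
lines = np.column_stack([depth_obs + rng.normal(0, 30, 40) for _ in range(4)])
print(f"Cronbach's alpha = {cronbach_alpha(lines):.2f}")
```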

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 164
23944 A Review of Encryption Algorithms Used in Cloud Computing

Authors: Derick M. Rakgoale, Topside E. Mathonsi, Vusumuzi Malele

Abstract:

Cloud computing offers distributed, online, and on-demand computational services from anywhere in the world. Cloud computing services have grown immensely over the past years, especially in the past year due to the Coronavirus pandemic. Cloud computing has changed the working environment and introduced the work-from-home phenomenon, which enabled the adoption of technologies to fulfill the new ways of working, including cloud service offerings. The increased adoption of cloud computing has come with new challenges regarding data privacy and integrity in the cloud environment. Previously proposed encryption algorithms have failed to reduce the memory space required for cloud computing performance, thus increasing the computational cost. This paper reviews the existing encryption algorithms used in cloud computing. In future work, an artificial neural network (ANN) algorithm design will be presented as a security solution to ensure data integrity, confidentiality, privacy, and availability of user data in cloud computing. Moreover, MATLAB will be used to evaluate the proposed solution, and simulation results will be presented.

Keywords: cloud computing, data integrity, confidentiality, privacy, availability

Procedia PDF Downloads 133
23943 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit

Authors: Ahmed Elrewainy

Abstract:

Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials (“endmembers”) present in the scene share the spectra of the pixels in different amounts called “abundances”. Unmixing the data cube is an important task for identifying the endmembers present in the cube for the analysis of these images. Unsupervised unmixing is done with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. The optimization problem is solved using the proximal method “iterative thresholding”. The l1-norm basis pursuit optimization problem was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
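
A minimal sketch of the iterative (soft-)thresholding solver for the l1-penalized form of basis pursuit: a gradient step on the data-fit term followed by a soft-threshold. The random dictionary and the 3-sparse abundance vector are synthetic stand-ins for real endmember spectra.

```python
import numpy as np

def ista(D, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for min 0.5*||D a - y||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(iters):
        grad = D.T @ (D @ a - y)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(10)
D = rng.normal(size=(50, 120))                    # dictionary of candidate spectra
a_true = np.zeros(120)
a_true[[7, 42, 99]] = [0.5, 0.3, 0.2]             # 3 active endmembers (abundances)
y = D @ a_true + 0.01 * rng.normal(size=50)       # observed mixed pixel

a_hat = ista(D, y)
print("recovered support:", np.flatnonzero(np.abs(a_hat) > 0.05))
```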

Keywords: basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets

Procedia PDF Downloads 195
23942 Survivable IP over WDM Network Design Based on 1 ⊕ 1 Network Coding

Authors: Nihed Bahria El Asghar, Imen Jouili, Mounir Frikha

Abstract:

Inter-datacenter transport networks are very bandwidth- and delay-demanding. The data transferred over such networks are also highly QoS-exigent, mostly because a huge volume of data should be transported transparently with regard to the application user. To avoid data transfer failure, a backup path should be reserved, and no re-routing delay should be observed. A dedicated 1+1 protection is, however, not applicable in inter-datacenter transport networks because of the huge spare capacity it requires. In this context, we propose a survivable virtual network with minimal backup based on network coding (1 ⊕ 1) and solve it using a modified Dijkstra-based heuristic.
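
As a toy illustration of why coded protection saves spare capacity: two flows can share a single XOR-coded backup instead of each reserving a dedicated 1+1 copy, and either flow can still be recovered after a single path failure. This shows only the XOR principle, not the paper's exact 1 ⊕ 1 scheme.

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.integers(0, 256, 8, dtype=np.uint8)   # flow A payload
B = rng.integers(0, 256, 8, dtype=np.uint8)   # flow B payload
parity = A ^ B                                # single coded backup: A xor B

# Suppose the path carrying A fails: the receiver still has B and the parity
recovered_A = parity ^ B
assert np.array_equal(recovered_A, A)
print("flow A recovered from the coded backup after a path failure")
```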

Keywords: network coding, dedicated protection, spare capacity, inter-datacenter transport network

Procedia PDF Downloads 447
23941 Development of Enhanced Data Encryption Standard

Authors: Benjamin Okike

Abstract:

There is a need to hide information along the superhighway. Today, information relating to the survival of individuals, organizations, or government agencies is transmitted from one point to another. Adversaries are always on the watch along the superhighway to intercept any information that would enable them to inflict psychological ‘injuries’ on their victims. But with information encryption, this can be prevented completely or at worst reduced to the barest minimum. There is no doubt that many encryption techniques have been proposed, and some of them are already being implemented. However, adversaries always discover loopholes in them to perpetrate their plans. In this work, we propose the enhanced data encryption standard (EDES), which deploys randomly generated numbers as an encryption method. Each time encryption is to be carried out, a new set of random numbers is generated, thereby making it almost impossible for cryptanalysts to decrypt any information encrypted with this newly proposed method.

Keywords: encryption, enhanced data encryption, encryption techniques, information security

Procedia PDF Downloads 150
23940 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 60
23939 Implementing Fault Tolerance with Proxy Signature on the Improvement of RSA System

Authors: H. El-Kamchouchi, Heba Gaber, Fatma Ahmed, Dalia H. El-Kamchouchi

Abstract:

Fault tolerance and data security are two important issues in modern communication systems. During the transmission of data between the sender and receiver, errors may occur frequently. Therefore, the sender must re-transmit the data to the receiver in order to correct these errors, which makes the system very feeble. To improve the scalability of the scheme, we present a proxy signature scheme with fault tolerance over an efficient and secure authenticated key agreement protocol based on the improved RSA system. Authenticated key agreement protocols have an important role in building a secure communications network between the two parties.

Keywords: fault tolerance, improved RSA, key agreement, proxy signature

Procedia PDF Downloads 425
23938 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects

Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour

Abstract:

One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for the provision of engineering geological data in a predefined format. In particular, this is more evident in highway and railroad tunnel projects, in which there are a number of tunnels and different professional teams involved. In this regard, comprehensive software needs to be designed using accepted methods in order to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. Regarding this necessity, applied software has been designed using macro capabilities and the Visual Basic for Applications (VBA) programming language in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, the penetration rate, and so forth, can be calculated and reported in a standard format.

Keywords: engineering geology, rock mass classification, rock mechanic, tunnel

Procedia PDF Downloads 81
23937 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of rainfall data to be used in the hydrological practice is available in aggregated form within constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, that is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining if climate change is producing effects on extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are here investigated. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error up to 50%, while the average underestimation error for a series with at least 15-20 Hd values, is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed in many stations of Central Italy, may overcome this issue; 5) these equations should allow to improve the Hd estimates and the associated depth-duration-frequency curves at least in areas with similar climatic conditions.
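
The worst case described in point 1 (ta = d) is easy to demonstrate numerically: the true Hd is the maximum over a freely sliding d-minute window, while coarsely aggregated data only allow windows aligned to ta-minute clock intervals. A sketch on synthetic 5-minute rainfall (invented intensities, not the Central Italy records):

```python
import numpy as np

rng = np.random.default_rng(12)

# One year of synthetic 5-minute rainfall depths (mm per 5 min)
step_min = 5
r = rng.gamma(0.01, 1.0, size=365 * 24 * 60 // step_min)

def annual_max_depth(rain, d_min, ta_min, step_min=5):
    """Max d-minute depth: freely sliding vs. aligned to ta-minute clocks."""
    w = d_min // step_min
    sliding = np.convolve(rain, np.ones(w), mode="valid").max()   # true Hd
    block = ta_min // step_min
    n = (rain.size // block) * block
    clocked = rain[:n].reshape(-1, block).sum(axis=1)             # aggregated
    aggregated = np.convolve(clocked, np.ones(w // block), mode="valid").max()
    return sliding, aggregated

true_hd, coarse_hd = annual_max_depth(r, d_min=60, ta_min=60)
print(f"true H60 = {true_hd:.1f} mm, clock-hour H60 = {coarse_hd:.1f} mm, "
      f"underestimate = {100 * (1 - coarse_hd / true_hd):.1f}%")
```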

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 191
23936 Objective Evaluation on Medical Image Compression Using Wavelet Transformation

Authors: Amhimmid Mohammed Saffour, Mustafa Mohamed Abdullah

Abstract:

The use of computers for handling image data in healthcare is growing. However, the amount of data produced by modern image-generating techniques is vast. These data might be a problem from a storage point of view or when sent over a network. This paper uses the wavelet transform technique for medical image compression. MATLAB programs were designed to evaluate the medical image storage and transmission time problem at Sebha Medical Center, Libya. Three different computed tomography images, of the abdomen, brain, and chest, were selected and compressed using the wavelet transform. An objective evaluation was performed to measure the quality of the compressed images. The results show that the peak signal-to-noise ratio (PSNR), which indicates the quality of the compressed image, ranges from 25.89 dB to 34.35 dB for abdomen images, 23.26 dB to 33.3 dB for brain images, and 25.5 dB to 36.11 dB for chest images. These values show that a compression ratio of nearly 30:1 is acceptable.
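
The paper's workflow is in MATLAB; as a hedged Python analogue using PyWavelets, the sketch below decomposes an image, keeps only the largest coefficients (hard thresholding, roughly a 33:1 coefficient ratio), reconstructs, and reports PSNR. The wavelet choice, level, and synthetic phantom image are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def compress_psnr(img, wavelet="bior4.4", level=3, keep=0.03):
    """Wavelet compression sketch: keep the largest `keep` fraction of
    coefficients, reconstruct, and report PSNR against the original."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)        # magnitude cutoff
    arr = pywt.threshold(arr, thresh, mode="hard")
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                        wavelet)[:img.shape[0], :img.shape[1]]
    mse = np.mean((img.astype(float) - rec) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Smooth synthetic 8-bit phantom standing in for a CT slice
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
img = 255 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
print(f"PSNR = {compress_psnr(img):.2f} dB at ~{1 / 0.03:.0f}:1 coefficient ratio")
```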

Keywords: medical image, Matlab, image compression, wavelet's, objective evaluation

Procedia PDF Downloads 285
23935 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

The quality of a product being usable has become a basic requirement from the consumer's perspective, and failing this requirement stops the customer from using the product. Identifying usability issues by analyzing quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies and research regarding analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. Analyzing qualitative text data has become possible with the rapid development of data analysis fields such as natural language processing, for understanding human language by computer, and machine learning, for providing predictive models and clustering tools. Therefore, this research aims to study the applicability of text processing algorithms in the analysis of qualitative text data collected from usability activities. This research utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data, and product physical data. The analysis procedure, integrated with the text processing algorithm, includes training comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the result of comment-vector clustering. The result shows 'volume and music control button' as the usability feature that matches best with the cluster of comment vectors, where the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, participants experienced less confusion, and thus the comments mentioned only the buttons' positions. In the situation where the volume and music control buttons were designed as a single button, participants experienced interface issues regarding the buttons, such as the operating methods of functions and confusion between function buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms in analyzing qualitative text data from usability testing and evaluations.
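
A minimal sketch of the comment-vector clustering stage: TF-IDF embeds the comments in a vector space, KMeans clusters them, and the terms nearest each centroid characterize the cluster (as the abstract does with button-position versus button-interface comments). The comments below are invented placeholders, not the LG study's data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical survey comments about the headset's buttons
comments = [
    "volume button is too close to the music control button",
    "hard to find the volume button position while wearing it",
    "single button for volume and music is confusing to operate",
    "button position on the left side feels natural",
    "pressing the combined button triggers the wrong function",
    "the control button placement near the neckband edge is convenient",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(comments)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Inspect each cluster through the terms closest to its centroid
terms = np.array(vec.get_feature_names_out())
for c, centroid in enumerate(km.cluster_centers_):
    top = terms[np.argsort(centroid)[::-1][:4]]
    print(f"cluster {c}: {', '.join(top)}")
```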

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 285