Search results for: named data networking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25784


24584 A New Approach for Improving Accuracy of Multi Label Stream Data

Authors: Kunal Shah, Swati Patel

Abstract:

Many real-world problems involve data that can be considered multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios; however, learning in evolving streaming scenarios is more challenging, as the learner must adapt to change using limited time and memory. Classification is used to predict the class of an unseen instance as accurately as possible. Multi-label classification is a variant of single-label classification in which a set of labels is associated with a single instance; it is used by modern applications such as text classification, functional genomics, image classification, and music categorization. This paper introduces the task of multi-label classification, methods for multi-label classification, and evaluation measures for multi-label classification. A comparative analysis of multi-label classification methods was also carried out, first on the basis of theoretical study and then by simulation on various data sets.
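
Binary relevance, mentioned in the keywords, is the simplest of the transformation methods discussed: it trains one independent binary classifier per label. A minimal sketch of the idea (the nearest-centroid base learner and the toy data are illustrative assumptions, not taken from the paper):

```python
# Binary relevance: decompose a multi-label problem into one binary
# problem per label. The base learner here is a trivial nearest-centroid
# classifier; any binary classifier could be substituted.

class NearestCentroid:
    def fit(self, X, y):
        pos = [x for x, t in zip(X, y) if t == 1]
        neg = [x for x, t in zip(X, y) if t == 0]
        self.c1 = [sum(col) / len(pos) for col in zip(*pos)]
        self.c0 = [sum(col) / len(neg) for col in zip(*neg)]
        return self

    def predict(self, x):
        d1 = sum((a - b) ** 2 for a, b in zip(x, self.c1))
        d0 = sum((a - b) ** 2 for a, b in zip(x, self.c0))
        return 1 if d1 < d0 else 0

def binary_relevance_fit(X, Y):
    """Y holds one label vector per instance; train one model per label."""
    n_labels = len(Y[0])
    return [NearestCentroid().fit(X, [row[j] for row in Y])
            for j in range(n_labels)]

def binary_relevance_predict(models, x):
    return [m.predict(x) for m in models]

# Toy data: two features, two labels.
X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
models = binary_relevance_fit(X, Y)
print(binary_relevance_predict(models, [0.05, 0.1]))  # → [1, 0]
```

The drawback the stream-mining literature notes is that binary relevance ignores label correlations, which methods such as MLSC try to exploit.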

Keywords: binary relevance, concept drift, data stream mining, MLSC, multiple window with buffer

Procedia PDF Downloads 584
24583 Secure Cryptographic Operations on SIM Card for Mobile Financial Services

Authors: Kerem Ok, Serafettin Senturk, Serdar Aktas, Cem Cevikbas

Abstract:

Mobile technology is very popular nowadays, and it provides a digital world where users can experience many value-added services. Service providers are also eager to offer diverse value-added services such as digital identity and mobile financial services. In this context, the security of data storage on smartphones and the security of communication between the smartphone and the service provider are critical for the success of these services. In order to provide the required security functions, the SIM card is one acceptable alternative. Since SIM cards include a Secure Element, they are able to store sensitive data, create cryptographically secure keys, and encrypt and decrypt data. In this paper, we design and implement a SIM card and smartphone framework that uses the SIM card for secure key generation, key storage, data encryption, data decryption, and digital signing for mobile financial services. Our framework shows that the SIM card can be used as a controlled Secure Element to provide the required security functions for popular e-services such as mobile financial services.

Keywords: SIM card, mobile financial services, cryptography, secure data storage

Procedia PDF Downloads 312
24582 Synthetic Data-Driven Prediction Using GANs and LSTMs for Smart Traffic Management

Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad

Abstract:

Smart cities and intelligent transportation systems rely heavily on effective traffic management and infrastructure planning. This research tackles the data scarcity challenge by generating realistic synthetic traffic data from the PeMS-Bay dataset, enhancing predictive modeling accuracy and reliability. Advanced techniques such as TimeGAN and GaussianCopula are utilized to create synthetic data that mimic the statistical and structural characteristics of real-world traffic. The future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is anticipated to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the models that most accurately replicate traffic patterns. Long Short-Term Memory (LSTM) networks are employed to model and predict complex temporal dependencies within traffic patterns. This holistic approach aims to identify areas with low vehicle counts, reveal underlying traffic issues, and guide targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study facilitates data-driven decision-making that improves urban mobility, safety, and the overall efficiency of city planning initiatives.

Keywords: GAN, long short-term memory (LSTM), synthetic data generation, traffic management

Procedia PDF Downloads 14
24581 Passion Songs in Sri Lanka with Special Reference to Village Wahakotte

Authors: Niroshi Senevirathne

Abstract:

The history of Pasan Gee (Passion Songs) dates back to the Portuguese colonial period (1505-1658) in Sri Lanka. These are chants on the Passion of Christ sung during Lent, the 40-day period of repentance for Christians. Among the villages of Sri Lanka, Wahakotte, situated in Matale district, Central Province, is famous for its traditional Pasan melodies. It is a village where both Christians and Buddhists live. King Rajasinghe II of Kandy, who fought against the Portuguese, allowed the captives to settle down in Wahakotte, and these people, fairer in complexion, have assimilated with the locals. Pasan singing in Wahakotte is a significant event, influenced by traditional folk music melodies such as "Nelum Gee" (harvesting songs) sung by the farmers of Matale, "Welapum Gee" (lamentation songs) sung at funerals in Sri Lanka, and Buddhist Pirith chanting melodies. The prose of the Pasan verses is included in the book "Deshana Namaye Pasan Potha" (Nine Sermon Passion Book), written by Fr. Jacome Gonsalvez. The verses are composed in Sinhala with some Tamil words and have been transmitted from generation to generation in an oral tradition. Today, the chanting of Pasan is no longer heard in many Catholic areas during the Lent season, although some performances have been recorded on cassette. This research aims to protect these traditional Passion songs, unique to the village of Wahakotte, without changing their character and original melodies.

Keywords: influence of folk melodies, passion songs, preserving traditional passion songs, traditional passion melodies

Procedia PDF Downloads 289
24580 Success Factors for Innovations in SME Networks

Authors: J. Gochermann

Abstract:

Due to complex markets and products and an increasing need to innovate, cooperation between small and medium-sized enterprises (SMEs) has arisen during the last decades that is not primarily driven by process optimization or sales enhancement. SMEs in particular collaborate increasingly in innovation and knowledge networks to enhance their knowledge and innovation potential and to find strategic partners for product and market development. These networks are characterized by dual objectives: the superordinate goal of the network as a whole and the specific objectives of the network members, which can cause target conflicts. Moreover, most SMEs do not have structured innovation processes and are not accustomed to collaborating on complex innovation projects in an open network structure. On the other hand, SMEs have characteristics that make them promising network partners: they are flexible and spontaneous, they have flat hierarchies, and the acting people are not anonymous. These characteristics distinguish them from larger corporations. German SME networks were investigated to identify success factors for SME innovation networks. The fundamental network principles, donation-return and confidence, could be confirmed and identified as basic success factors. Further factors are voluntariness, an adequate number of network members, quality of communication, neutrality and competence of the network management, as well as reliability and obligingness of the network services. Innovation and knowledge networks with an appreciable number of members from science and technology institutions also need active sense-making to bring different disciplines into successful collaboration. It was also investigated whether and how involvement in an innovation network impacts the innovation structure and culture inside the member companies: the degree of impact grows with the duration and intensity of commitment.

Keywords: innovation and knowledge networks, SME, success factors, innovation structure and culture

Procedia PDF Downloads 283
24579 Salmon Diseases Connectivity between Fish Farm Management Areas in Chile

Authors: Pablo Reche

Abstract:

Since the 1980s, aquaculture has become the biggest economic activity in southern Chile, with Salmo salar and Oncorhynchus mykiss the main finfish species. High fish density makes both species prone to contracting diseases, which drives the industry to large losses and greatly affects the local economy. The three most concerning infective agents are the infectious salmon anemia virus (ISAv), the bacterium Piscirickettsia salmonis, and the copepod Caligus rogercresseyi. To regulate the industry, the government arranged the salmon farms within management areas named barrios, which coordinate the fallowing periods and antibiotic treatments of their salmon farms. In turn, barrios are gathered into larger management areas, named macrozonas, whose purpose is to minimize the risk of disease transmission between them and to enclose outbreaks within their boundaries. However, disease outbreaks still happen, and transmission to neighboring sites enlarges the initial event. Salmon disease agents are mostly transported passively by local currents; thus, to understand how transmission occurs, the physical environment must first be studied. In Chile, salmon farming takes place in the inner seas of the southernmost regions of western Patagonia, between 41.5ºS and 55ºS. This coastal marine system is characterised by westerly winds, latitudinally modulated by the position of the South-East Pacific high-pressure centre, high precipitation rates, and freshwater inflows from numerous glaciers (including the largest ice caps outside Antarctica and Greenland). All of these forcings meet in a complex bathymetry and coastline system - deep fjords, shallow sills, narrow straits, channels, archipelagos, inlets, and isolated inner seas - driving an estuarine circulation (fast outflows westwards at the surface and slower, deeper inflows eastwards).
Such a complex system is modelled with the numerical model MIKE3, upon whose 3D current fields decoupled particle-track biological models (one for each infective agent) are run. Each agent's biology is parameterized by functions for maturation and mortality (reproduction is not included). These parameterizations depend upon environmental factors such as temperature and salinity, so the lifespan of the virtual agents depends upon the environmental conditions they encounter while passively transported. CLIC (Connectivity-Lagrangian-IFOP-Chile) is a service platform that supports the graphical visualization of the connectivity matrices calculated from the particle trajectory files produced by the particle-track biological models. On CLIC, users can select, from a high-resolution grid (~1 km), the areas between which connectivity will be calculated; these areas can be barrios or macrozonas. Users can also select which nodes of these areas are allowed to release and scatter particles, the depth and frequency of the initial particle release, the climatic scenario (winter/summer), and the type of particle (ISAv, Piscirickettsia salmonis, or Caligus rogercresseyi, plus an option for lifeless particles). Results include probabilities downstream (where the particles go) and upstream (where the particles come from), particle age, and vertical distribution, all aiming to understand how connectivity currently works, in order eventually to propose a minimum-risk zonation for aquaculture purposes. Preliminary results in the Chiloe inner sea show that the risk depends not only upon dynamic conditions but also upon the location of barrios with respect to their neighbors.

Keywords: aquaculture zonation, Caligus rogercresseyi, Chilean Patagonia, coastal oceanography, connectivity, infectious salmon anemia virus, Piscirickettsia salmonis

Procedia PDF Downloads 155
24578 Multifunctional Polydopamine-Silver-Polydopamine Nanofilm With Applications in Digital Microfluidics and SERS

Authors: Yilei Xue, Yat-Hing Ham, Wenting Qiu, Wan Chan, Stefan Nagl

Abstract:

Polydopamine (PDA) is a popular material in biological and medical applications due to its excellent biocompatibility, outstanding physicochemical properties, and facile fabrication. In this project, a new sandwich-structured PDA and silver (Ag) hybrid material named PDA-Ag-PDA, in which silver nanoparticles (Ag NPs) are wrapped in PDA coatings, was synthesized and characterized layer by layer using SEM, AFM, 3D surface metrology, and a contact angle meter. The silver loading capacity is proportional to the roughness of the initial PDA film. The designed film was subsequently integrated into a digital microfluidic (DMF) platform coupled with an oxygen sensor layer for an on-chip antibacterial assay, in which the concentration of E. coli was quantified by real-time monitoring of oxygen consumption during E. coli growth. The PDA-Ag-PDA coating shows a 99.9% reduction in the E. coli population under non-nutritive conditions with a 1-hour treatment and strongly inhibits E. coli growth in nutrient LB broth as well. Furthermore, the PDA-Ag-PDA film maintains low cytotoxicity to human cells: after treatment with the film for 24 hours, 82% of HEK 293 and 86% of HeLa cells were viable. The SERS enhancement factor of PDA-Ag-PDA is estimated to be 1.9 × 10⁴ using Rhodamine 6G (R6G). The multifunctional PDA-Ag-PDA coating provides an alternative platform for conjugating biomolecules and performing biological applications on DMF, in particular for adhesive protein and cell studies.
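
An enhancement factor of this kind is conventionally computed as EF = (I_SERS / N_SERS) / (I_ref / N_ref), the signal per molecule on the substrate over the signal per molecule in the reference. A worked sketch with hypothetical intensities and molecule counts (chosen only to land near the reported order of magnitude; not the authors' measurements):

```python
def sers_enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Standard SERS enhancement factor: signal per molecule on the
    substrate divided by signal per molecule in the reference."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical counts; only the formula, not the data, is from the abstract.
ef = sers_enhancement_factor(i_sers=9.5e5, n_sers=1e6,
                             i_ref=5.0e4, n_ref=1e9)
print(f"{ef:.2e}")  # → 1.90e+04
```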

Keywords: polydopamine, silver nanoparticles, digital microfluidic, optical sensor, antimicrobial assay, SERS

Procedia PDF Downloads 93
24577 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem affecting data is class imbalance: an imbalanced distribution of instances among the classes. This problem is present in many real-world applications, such as fraud detection, network intrusion detection, and medical diagnostics. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty, since it is designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR), a way to deal with class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered behavioral noise, that overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
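
The two OSBNR steps can be sketched as follows; the clustering method (a small k-means), the overlap radius, and the toy data are assumptions for illustration, not the authors' exact implementation:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Very small k-means for the minority-class clustering step."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: dist2(p, centroids[i]))
            groups[j].append(p)
        centroids = [
            [sum(c) / len(g) for c in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

def osbnr(majority, minority, k=2, radius=0.5):
    """OSBNR sketch: (1) cluster the minority class into behavior
    clusters; (2) drop majority instances that overlap with a cluster,
    i.e. fall within `radius` of one of its centroids."""
    centroids = kmeans(minority, k)
    kept = [p for p in majority
            if all(dist2(p, c) > radius ** 2 for c in centroids)]
    return kept, centroids

minority = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]]
majority = [[0.05, 0.05], [5.0, 5.1], [10.0, 10.0], [12.0, 9.0]]
kept, _ = osbnr(majority, minority, k=2, radius=0.5)
print(kept)  # → [[10.0, 10.0], [12.0, 9.0]]
```

The two majority instances sitting inside minority behavior clusters are eliminated as behavioral noise; the distant ones are kept.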

Keywords: machine learning, imbalanced data, data mining, big data

Procedia PDF Downloads 130
24576 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used for this assessment in the clustering phase. While most existing clustering algorithms require assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. The proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step uses clustering to form groups of waypoints for signalized traffic and highway congestion, and a binary classifier, based on the length of the cluster, is applied to distinguish highway congestion from signalized stop points. The proposed framework identifies stop positions and congestion points with high accuracy, in around 99.2% of trials, showing that it is possible to distinguish the two categories accurately using limited GPS data.
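
The stoppage-identification step can be illustrated with a consecutive-distance check over waypoints; the haversine helper, the 5 m threshold, and the track below are illustrative assumptions, and the paper's Delaunay-based clustering stage is not reproduced here:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def stop_points(track, max_move_m=5.0):
    """Flag waypoints where the vehicle moved less than `max_move_m`
    since the previous fix, i.e. candidate stop locations."""
    stops = []
    for prev, cur in zip(track, track[1:]):
        if haversine_m(prev[0], prev[1], cur[0], cur[1]) < max_move_m:
            stops.append(cur)
    return stops

# Illustrative track: the vehicle dwells near (42.33, -83.05), then moves on.
track = [(42.3300, -83.0500), (42.33001, -83.05001),
         (42.33001, -83.05002), (42.3400, -83.0600)]
print(len(stop_points(track)))  # → 2
```

In the full framework, the flagged waypoints would then be clustered (via the Delaunay triangulation) and the cluster length used to separate congestion from signalized stops.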

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 275
24575 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, creating the need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. Compared with other applications, accuracy and fast processing are of higher importance for health care applications, as they are directly related to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction of health care data, different techniques and frameworks prove effective for different applications, depending largely on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with traditional frameworks. Unlike other works that focus on a single technique, our work compares six different machine learning techniques with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to (i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and (ii) discuss the results of experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions.
The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending largely on the dataset characteristics.
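
The core idea of gradient boosted trees is that each new tree is fitted to the residual errors of the current ensemble. A minimal pure-Python sketch with depth-1 regression stumps under squared loss (a stand-in for the Spark MLlib implementation the paper uses; the data and hyperparameters are illustrative):

```python
def fit_stump(x, residual):
    """Best single-split regression stump on 1-D data under squared loss."""
    best = None
    for s in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= s]
        right = [r for xi, r in zip(x, residual) if xi > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, s, lm, rm)
    _, s, lm, rm = best
    return lambda xi: lm if xi <= s else rm

def gradient_boost(x, y, n_rounds=200, lr=0.3):
    """Each round fits a stump to the current residuals (the negative
    gradient of squared loss) and adds it with a shrinkage factor."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residual)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
model = gradient_boost(x, y)
mse = sum((model(xi) - yi) ** 2 for xi, yi in zip(x, y)) / len(x)
print(mse < 0.1)  # → True
```

In the paper's setting the stumps are replaced by full decision trees and the fitting is distributed across a Spark cluster, but the residual-fitting loop is the same.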

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 240
24574 Need Assessments of Midwives in Public Health Centers (Puskesmas) in Sukabumi Municipality, Province of Jawa Barat, Indonesia

Authors: Al Asyary, Meita Veruswati, Dian Ayyubi

Abstract:

Sukabumi municipality has the highest maternal mortality ranking in Indonesia, with 102 deaths per 100,000 live births, and almost 80% of births are not attended by a skilled birth attendant (SBA). Although universal health coverage has been implemented, the availability and sufficiency of SBAs, such as midwives, in this developing country are a problematic agenda for the quality of public healthcare as well as for decreasing the maternal mortality rate. This study aims to describe the distribution of midwives in Sukabumi municipality in support of the government's Millennium Development Goals (MDGs) program to reduce the maternal mortality rate in Indonesia. We conducted an observational study with a Workload Indicator of Staffing Need (WISN) analysis to present the dispersion of midwives by their activities and workloads in 37 Puskesmas. We also conducted in-depth interviews with several executive chiefs of health sections, including chiefs of health offices in Sukabumi municipality. The analysis showed that, for several midwifery activities, the existing conditions differed significantly from the ideally needed conditions (p = 0.002 < 0.05). Meanwhile, decisions on midwives' recruitment and placement were made through unsystematic procedures, such as basing placement on where a midwife was staying or on neighborhood priorities. The absence of formal regulation in the local government is a serious problem that indicates poor political commitment, while access to SBAs requires careful focus.
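
The WISN method itself is a simple ratio: the available working time divided by the time one unit of a service activity takes gives the standard workload a worker can deliver per year, and the annual service volume divided by that gives the staff requirement. A worked sketch with hypothetical figures (the numbers are illustrative, not the study's data):

```python
def wisn_required_staff(annual_workload, available_hours, minutes_per_activity):
    """WISN core calculation: standard workload = service units one
    worker can deliver per year; required staff = demand / standard."""
    standard_workload = available_hours * 60 / minutes_per_activity
    return annual_workload / standard_workload

# Hypothetical Puskesmas figures: 9000 antenatal visits a year,
# 1200 available working hours per midwife, 30 minutes per visit.
required = wisn_required_staff(9000, 1200, 30)
current = 3
print(required, required / current)  # → 3.75 1.25
```

A WISN ratio of required over current staff above 1 indicates a shortage at that facility.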

Keywords: developing country, health professional resources, health policy, need assessment

Procedia PDF Downloads 168
24573 Performance and Voyage Analysis of Marine Gas Turbine Engine, Installed to Power and Propel an Ocean-Going Cruise Ship from Lagos to Jeddah

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

An aero-derivative marine gas turbine engine model is simulated as the main propulsion prime mover powering a cruise ship designed and routed to transport Muslim pilgrims for the annual Hajj pilgrimage from Nigeria to the Islamic port city of Jeddah in Saudi Arabia. A performance assessment of the gas turbine engine has been conducted by examining the effect of the varying aerodynamic and hydrodynamic conditions encountered at various geographical locations along the scheduled transit route. The investigation focuses on the overall behavior of the gas turbine engine as it operates under the ideal and adverse conditions of calm and rough weather in the different seasons of the year in which the voyage may be undertaken. The variation of engine performance under varying operating conditions is an important economic issue, determining the time and speed with which the journey is completed as well as the quantity of fuel required for the voyage. The assessment also considers the increased resistance caused by fouling of the submerged portion of the ship hull and its effect on the power output of the engine and the overall performance of the propulsion system. Daily ambient temperature levels were obtained from UK Meteorological Office data, and the varying degrees of turbulence along the transit route, according to the Beaufort scale, were obtained as major input variables of the investigation.
Assuming the ship navigates the Atlantic Ocean and the Mediterranean Sea during the winter, spring, and summer seasons, the performance modeling and simulation were accomplished with an integrated gas turbine performance simulation code known as 'Turbomach', along with a MATLAB-generated code named 'Poseidon', both developed at the Power and Propulsion Department of Cranfield University. As a case study, the results under the various assumptions further reveal that the marine gas turbine is a reliable and available alternative to the conventional marine propulsion prime movers that have dominated the maritime industry. The techno-economic and environmental assessment of this type of propulsion prime mover has enabled the determination of the effect of changes in weather and sea conditions on the ship speed, the trip time, and the quantity of fuel burned throughout the voyage.

Keywords: ambient temperature, hull fouling, marine gas turbine, performance, propulsion, voyage

Procedia PDF Downloads 186
24572 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter

Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai

Abstract:

A sediment map is quite important in the marine environment: the sediment itself contains a wealth of information that can be used in other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment types around the coral reef using bathymetry and backscatter data. Sediment in the study area was collected as ground-truthing data to verify the seabed classification; a dry sieving method with a sieve shaker was used to analyze the sediment samples. PDS 2000 software was used for data acquisition, Qimera QPS version 2.4.5 was used to process the bathymetry data, and FMGT QPS version 7.10 processed the backscatter data. The backscatter data were then analyzed with the maximum likelihood classification tool in ArcGIS version 10.8. The results identified three types of sediment around the coral reef: very coarse sand, coarse sand, and medium sand.
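
Maximum likelihood classification assigns each backscatter pixel to the sediment class whose Gaussian model gives it the highest likelihood. A one-dimensional sketch (the class statistics and decibel values are illustrative, not calibrated to this survey):

```python
import math

def gaussian_log_likelihood(x, mean, std):
    """Log-likelihood of x under a 1-D Gaussian class model."""
    return (-0.5 * math.log(2 * math.pi * std ** 2)
            - (x - mean) ** 2 / (2 * std ** 2))

def classify_pixel(backscatter_db, classes):
    """Assign the pixel to the class with maximum likelihood."""
    return max(classes, key=lambda c: gaussian_log_likelihood(
        backscatter_db, classes[c]["mean"], classes[c]["std"]))

# Illustrative class statistics (mean backscatter in dB) as would be
# derived from ground-truth samples; coarser sediment scatters more.
classes = {
    "very coarse sand": {"mean": -18.0, "std": 2.0},
    "coarse sand":      {"mean": -24.0, "std": 2.0},
    "medium sand":      {"mean": -30.0, "std": 2.0},
}
print(classify_pixel(-23.0, classes))  # → coarse sand
```

The ArcGIS tool does the same per-class likelihood comparison, but on multi-band statistics estimated from training polygons.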

Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS

Procedia PDF Downloads 86
24571 Stem Covers of Leibniz n-Algebras

Authors: Natália Maria Rego

Abstract:

A Leibniz n-algebra G is a K-vector space endowed with an n-linear bracket operation [-, …, -] : G × ⋯ × G → G satisfying the fundamental identity, which can be expressed by saying that the right multiplication map R_{y₂, …, yₙ} : Gⁿ → G, R_{y₂, …, yₙ}(x₁, …, xₙ) = [[x₁, …, xₙ], y₂, …, yₙ], is a derivation. This structure, together with its skew-symmetric version, known as a Lie n-algebra or Filippov algebra, arose in the setting of Nambu mechanics, an n-ary generalization of Hamiltonian mechanics. The first goal of this work is to provide a characterization of various classes of central extensions of Leibniz n-algebras in terms of homological properties, namely commutator extensions, quasi-commutator extensions, stem extensions, and stem covers. These kinds of central extensions are characterized by means of the map ₙHL₁(G) → M provided by the five-term exact sequence in homology with trivial coefficients of Leibniz n-algebras associated to an extension E : 0 → M → K → G → 0. For a free presentation 0 → R → F → G → 0 of a Leibniz n-algebra G, the term M(G) = (R ∩ [F, …ⁿ…, F]) / [R, F, …ⁿ⁻¹…, F] is called the Schur multiplier of G; it is a Baer invariant, i.e., it does not depend on the chosen free presentation, and it is isomorphic to the first Leibniz n-algebra homology with trivial coefficients of G. A central extension of Leibniz n-algebras is a short exact sequence E : 0 → M → K → G → 0 such that [M, K, …ⁿ⁻¹…, K] = 0. It is said to be a stem extension if M ⊆ [G, …ⁿ…, G]. Additionally, if the induced map M(K) → M(G) is the zero map, then the stem extension E is said to be a stem cover. The second aim of this work is to analyze the interplay between stem covers of Leibniz n-algebras and the Schur multiplier. Concretely, in the case of finite-dimensional Leibniz n-algebras, we show the existence of coverings, and we prove that all stem covers with finite-dimensional Schur multiplier are isoclinic. Additionally, we characterize stem covers of perfect Leibniz n-algebras.
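
For reference, the fundamental identity stating that right multiplications are derivations of the bracket can be written explicitly as:

```latex
\bigl[[x_1,\dots,x_n],\,y_2,\dots,y_n\bigr]
  \;=\; \sum_{i=1}^{n}
  \bigl[x_1,\dots,x_{i-1},\,[x_i,y_2,\dots,y_n],\,x_{i+1},\dots,x_n\bigr]
```

For n = 2 this reduces to the usual (right) Leibniz identity [[x, y], z] = [[x, z], y] + [x, [y, z]].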

Keywords: leibniz n-algebras, central extensions, Schur multiplier, stem cover

Procedia PDF Downloads 157
24570 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism

Authors: Kun Xu, Yuan Xu, Jia Qiao

Abstract:

Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, and deep neural networks have complex structures and large numbers of parameters that cannot be readily deployed on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an Encoder-Decoder structure, adopts depthwise separable convolution to greatly reduce the network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information along the spatial and channel dimensions and adaptively adjusts the size of the receptive field to enhance the feature expression ability of the model. To address the large imbalance between the numbers of corner and non-corner pixels, a Weighted Binary Cross-Entropy Loss (WBCE Loss) is proposed, defining corner detection as a classification problem to make the training process more efficient. To make up for the lack of datasets for document corner detection, a dataset containing 6620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method obtains fast, stable, and accurate detection results on DCDD.
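
The weighted binary cross-entropy idea can be sketched in a few lines; the inverse-class-frequency weighting shown here is one common choice, assumed for illustration rather than taken from the paper:

```python
import math

def weighted_bce(y_true, y_pred, w_pos, w_neg, eps=1e-7):
    """Weighted binary cross-entropy: positive (corner) pixels get a
    larger weight to offset their scarcity relative to background."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(w_pos * y * math.log(p)
                   + w_neg * (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Toy map: 2 corner pixels among 10; weight classes inversely to frequency.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0.6, 0.7, 0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1]
w_pos = len(y_true) / (2 * sum(y_true))                   # 2.5
w_neg = len(y_true) / (2 * (len(y_true) - sum(y_true)))   # 0.625
loss = weighted_bce(y_true, y_pred, w_pos, w_neg)
print(loss < weighted_bce(y_true, [0.5] * 10, w_pos, w_neg))  # → True
```

Without the weights, a network predicting "non-corner" everywhere would already achieve a low unweighted loss on such skewed pixel maps.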

Keywords: document detection, corner detection, attention mechanism, lightweight

Procedia PDF Downloads 354
24569 Scrutiny and Solving Analytically Nonlinear Differential at Engineering Field of Fluids, Heat, Mass and Wave by New Method AGM

Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili

Abstract:

As experts know, most engineering systems behave nonlinearly in practice (especially in heat, fluid, and mass transfer), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; fluid and gas wave problems, for example, cannot always be solved numerically because boundary conditions may be unavailable. Accordingly, in this work we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and simple solutions. The achieved solutions are compared with numerical results (fourth-order Runge-Kutta), with other methods such as HPM and ADM, and with exact solutions. AGM could prove a substantial advance for researchers, professors, and students in engineering and the basic sciences, because with the AGM coding system complicated linear and nonlinear differential equations can be solved analytically, so that there is no difficulty in solving nonlinear ODEs and PDEs. In this paper, we investigate and solve four types of nonlinear differential equations with the AGM method: (1) heat and fluid equations, (2) unsteady nonlinear partial differential equations, (3) coupled nonlinear partial differential equations in wave form, and (4) nonlinear integro-differential equations.

Keywords: new method AGM, sets of coupled nonlinear equations at engineering field, waves equations, integro-differential, fluid and thermal

Procedia PDF Downloads 546
24568 Location Privacy Preservation of Vehicle Data In Internet of Vehicles

Authors: Ying Ying Liu, Austin Cooke, Parimala Thulasiraman

Abstract:

The Internet of Things (IoT) has recently sparked research on the Internet of Vehicles (IoV). In this paper, we focus on one research area in IoV: preserving the location privacy of vehicle data. We discuss existing location privacy preserving techniques and provide a scheme for evaluating these techniques under IoV traffic conditions. We propose a different strategy for applying Differential Privacy, using a k-d tree data structure to preserve location privacy, and experiment on the real-world Gowalla dataset. We show that our strategy produces differentially private data while preserving utility well, achieving regression accuracy on an LSTM (Long Short-Term Memory) neural network traffic predictor similar to that obtained on the original dataset.
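The perturbation step underlying such a scheme can be illustrated with the plain Laplace mechanism; the paper's k-d tree strategy partitions space before adding noise, so the single global sensitivity value below is a simplifying assumption:

```python
import numpy as np

def perturb_locations(coords, epsilon, sensitivity=0.01):
    """Apply the Laplace mechanism to an (N, 2) array of (lat, lon) rows.

    `sensitivity` is the assumed maximum effect of one record on the
    output, in degrees; a k-d tree based scheme would instead calibrate
    the noise per spatial cell rather than with one global value.
    """
    scale = sensitivity / epsilon  # smaller epsilon -> stronger privacy, more noise
    noise = np.random.laplace(loc=0.0, scale=scale, size=coords.shape)
    return coords + noise
```

A downstream model, such as the LSTM traffic predictor, is then trained on the perturbed coordinates rather than the raw check-ins.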

Keywords: differential privacy, internet of things, internet of vehicles, location privacy, privacy preservation scheme

Procedia PDF Downloads 179
24567 Isolation and Molecular Identification of Polyethylene Degrading Bacteria from Soil and Degradation Detection by FTIR Analysis

Authors: Morteza Haghi, Cigdem Yilmazbas, Ayse Zeynep Uysal, Melisa Tepedelen, Gozde Turkoz Bakirci

Abstract:

Today, the accumulation of plastic waste is an inescapable driver of environmental pollution, and the disposal of these wastes has become a significant problem. Various methods have been utilized; however, biodegradation is the most environmentally friendly and lowest-cost one. Accordingly, the present study aimed to isolate bacteria capable of biodegrading plastics. To do so, we applied a liquid carbon-free basal medium (LCFBM), prepared with deionized water, to isolate bacterial species from soil samples taken from the Izmir Menemen region. Isolates forming biofilms on plastic were selected, named (PLB3, PLF1, PLB1B), and subjected to a degradation test. FTIR analysis, 16S rDNA amplification, sequencing, and identification of the isolates were performed. At the end of the process, a mass loss of 16.6% was observed in the PLB3 isolate and 25% in the PLF1 isolate, while no mass loss was detected in the PLB1B isolate. Only PLF1 and PLB1B created transparent zones on the plastic surface. Considering the FTIR results, PLB3 changed the plastic structure by 13.6% and PLF1 by 17%, while PLB1B did not change it. According to the 16S rDNA sequence analysis, the PLF1, PLB1B, and PLB3 isolates were identified as Streptomyces albogriseolus, Enterobacter cloacae, and Klebsiella pneumoniae, respectively.

Keywords: polyethylene, biodegradation, bacteria, 16S rDNA, FTIR

Procedia PDF Downloads 202
24566 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price

Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin

Abstract:

Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Beforehand, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of three data normalization techniques, Min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The swarm intelligence algorithms employed are the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot prices of crude oil and gasoline. Results showed that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with Min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
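For reference, the three normalization techniques compared in the study can be sketched as follows (straightforward textbook definitions, not the authors' code):

```python
import numpy as np

def min_max(x):
    """Scale values linearly into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Center to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Divide by 10^j, with j the smallest power such that all
    scaled magnitudes fall below 1."""
    j = int(np.floor(np.log10(np.abs(x).max()))) + 1
    return x / (10 ** j)
```

Each commodity price series would be normalized with one of these before being fed to the GWO- or ABC-based forecasting model, and de-normalized afterwards to read off the predicted spot price.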

Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer

Procedia PDF Downloads 475
24565 Review of Influential Factors on the Personnel Interview for Employment from Point of View of Human Resources Management

Authors: Abbas Ghahremani

Abstract:

One of the most fundamental management issues in organizations and companies is recruiting efficient staff and compiling exact criteria for testing applicants, a task guided and practiced by the organization's human resources manager. Obviously, each unit of the organization seeks special features and abilities in applicants beyond the features common to all staff; these are called principal duties and abilities, and we study them in more detail. This article examines how to identify the most efficient candidates among employment applicants by using testing methods appropriate to the needs of the organization's different units, and thus recruit efficient staff. An acceptable recruiting method is to closely identify candidates' characters from various aspects, such as communication ability, flexibility, stress management, risk acceptance, tolerance, vision of the future, familiarity with the arts, and degree of creativity and divergent thinking, and, by raising proper questions related to these features in a questionnaire, to evaluate candidates from multiple angles and reach a sound result. From the above, it can be concluded which abilities and characteristics of a person must be evaluated in order to reduce recruitment mistakes, approach an ideal outcome, and ultimately build a system organized according to standards, avoiding wasted energy on unprofessional personnel, an issue that remains marginalized in organizations.

Keywords: human resources management, staff recruiting, employment factors, efficient staff

Procedia PDF Downloads 461
24564 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. 
In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.

Keywords: mathematical sciences, data analytics, advances, unveiling

Procedia PDF Downloads 93
24563 A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics

Authors: Douglas A. Menezes, Isabel D. Nunes, Ulrich Schiel

Abstract:

Most Virtual Learning Environments do not provide support mechanisms for the integrated planning, construction, and follow-up of Instructional Design supported by Learning Analytics results. The present work presents an authoring tool that constructs the structure of an Instructional Design (ID) without the data being altered during the execution of the course. The visual interface presents the critical situations found in this ID, serving as a support tool for course follow-up and for possible improvements, which can be made during execution or while planning a new edition of the course. The ID model is based on High-Level Petri Nets, and the visualization forms are determined by the specific kind of data generated by an e-course: a population of students generating sequentially dependent data.

Keywords: educational data visualization, high-level petri nets, instructional design, learning analytics

Procedia PDF Downloads 243
24562 Analysis of Users’ Behavior on Book Loan Log Based on Association Rule Mining

Authors: Kanyarat Bussaban, Kunyanuth Kularbphettong

Abstract:

This research aims to create a model for analyzing student behavior in using library resources, based on data mining techniques, in the case of Suan Sunandha Rajabhat University. The model was created using association rules with the Apriori algorithm. Fourteen rules were found; tested against a test data set, the model classified data with 79.24 percent accuracy and an MSE of 22.91. The results show that a user-behavior model built with association rule techniques can be used to manage library resources.
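The Apriori procedure used here can be sketched in pure Python; the transaction format and thresholds below are illustrative assumptions, not the study's actual loan-log data:

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Return frequent itemsets (as frozensets) mapped to their support.

    transactions: list of sets, e.g. the items borrowed in one library visit.
    """
    n = len(transactions)
    current = {frozenset([i]) for t in transactions for i in t}  # 1-itemsets
    freq = {}
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: k / n for c, k in counts.items() if k / n >= min_support}
        freq.update(level)
        # join frequent k-itemsets into (k+1)-itemset candidates
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return freq

def rules(freq, min_conf=0.7):
    """Derive association rules X -> Y with confidence >= min_conf."""
    out = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                conf = sup / freq[lhs]  # support(X u Y) / support(X)
                if conf >= min_conf:
                    out.append((set(lhs), set(itemset - lhs), conf))
    return out
```

Running `apriori` over transactions such as the sets of book categories borrowed per visit, then `rules(...)`, yields statements of the form "users who borrow X also borrow Y", which is how the study's 14 rules would read.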

Keywords: behavior, data mining technique, apriori algorithm, knowledge discovery

Procedia PDF Downloads 404
24561 Exploration of RFID in Healthcare: A Data Mining Approach

Authors: Shilpa Balan

Abstract:

Radio Frequency Identification, popularly known as RFID, is used to automatically identify and track tags attached to items. This study focuses on the application of RFID in healthcare, where its adoption is a crucial technology for patient safety and inventory management. Data from RFID tags are used to identify the locations of patients and inventory in real time. Medical errors are thought to be a prominent cause of loss of life and injury, and a major advantage of RFID application in the healthcare industry is the reduction of medical errors. The healthcare industry has generated huge amounts of data, and by discovering patterns and trends within the data, big data analytics can help improve patient care and lower healthcare costs. The increasing number of research publications leading to innovations in RFID applications shows the importance of this technology. This study explores the current state of RFID research in healthcare using a text mining approach; no prior study has examined it with a data mining approach. Related articles on RFID were collected from healthcare journals and news articles published between 2000 and 2015. Significant keywords on the topic of focus were identified and analyzed using open-source data analytics software such as RapidMiner; such analytical tools help extract pertinent information from massive volumes of data. The main benefits of adopting RFID technology in healthcare are seen to include tracking medicines and equipment, upholding patient safety, and improving security. The real-time tracking features of RFID allow for enhanced supply chain management. By using big data productively, healthcare organizations can gain significant benefits; big data analytics in healthcare enables improved decisions by extracting insights from large volumes of data.
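A minimal stand-in for the keyword-extraction step can be written in a few lines (the study used RapidMiner; this Python sketch with an assumed stopword list only illustrates the idea):

```python
import re
from collections import Counter

# A tiny assumed stopword list; a real workflow would use a fuller one.
STOPWORDS = {"the", "of", "and", "in", "to", "a", "for", "is", "on", "with"}

def top_keywords(documents, n=5):
    """Rank content words across a corpus of article texts by frequency.

    Tokenizes on alphabetic runs, lowercases, drops stopwords and very
    short tokens, then counts occurrences over the whole corpus.
    """
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(n)
```

Applied to the 2000-2015 article corpus, the highest-ranked terms would surface themes such as tracking, patient safety, and security, matching the benefits the study reports.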

Keywords: RFID, data mining, data analysis, healthcare

Procedia PDF Downloads 233
24560 The Importance of Knowledge Innovation for External Audit on Anti-Corruption

Authors: Adel M. Qatawneh

Abstract:

This paper aimed to determine the importance of knowledge innovation for external audit of anti-corruption in the Jordanian banking companies listed on the Amman Stock Exchange (ASE). The study's importance arises from the need to recognize knowledge innovation for external audit and anti-corruption amid developments in the business world. The variables affected by external audit innovation are: reliability of financial data, relevance of financial data, consistency of financial data, full disclosure of financial data, and protection of investors' rights. To achieve the objectives of the study, a questionnaire was designed and distributed to the Jordanian banks listed on the Amman Stock Exchange. The data analysis found that banks in Jordan attach positive importance to knowledge innovation for external audit of anti-corruption and agree on its benefits. The statistical analysis showed that knowledge innovation for external audit had a positive impact on anti-corruption, and that external audit has a statistically significant relationship with anti-corruption, reliability of financial data, consistency of financial data, full disclosure of financial data, and protection of investors' rights.

Keywords: knowledge innovation, external audit, anti-corruption, Amman Stock Exchange

Procedia PDF Downloads 464
24559 Automated End-to-End Pipeline Processing Solution for Autonomous Driving

Authors: Ashish Kumar, Munesh Raghuraj Varma, Nisarg Joshi, Gujjula Vishwa Teja, Srikanth Sambi, Arpit Awasthi

Abstract:

Autonomous driving vehicles are revolutionizing the transportation system of the 21st century. This has been possible due to intensive research into making robust, reliable, and intelligent programs that can perceive and understand their environment and make decisions based on that understanding. It is a very data-intensive task, with data coming from multiple sensors, and the amount of data directly affects the performance of the system. Researchers have to design a preprocessing pipeline for each dataset, with its particular sensor orientations and alignments, before the dataset can be fed to the model. This paper proposes a solution that unifies data from different sources into a uniform format, using the intrinsic and extrinsic parameters of the sensors that captured the data, allowing the same pipeline to use data from multiple sources at a time. This also means easy adoption of new datasets or in-house generated datasets. The solution also automates the complete deep learning pipeline, from preprocessing to post-processing, for various tasks, allowing researchers to design multiple custom end-to-end pipelines. Thus, the solution takes care of input and output data handling, saving the time and effort spent on it and allowing more time for model improvement.
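The unification the authors describe, mapping each sensor's data into a common frame via intrinsic and extrinsic parameters, can be illustrated with a standard lidar-to-camera projection; the matrix conventions here are generic assumptions, not the paper's actual format:

```python
import numpy as np

def project_lidar_to_image(points, extrinsic, intrinsic):
    """Project an (N, 3) array of lidar points into pixel coordinates.

    extrinsic: 4x4 rigid transform from the lidar frame to the camera frame
    intrinsic: 3x3 camera matrix K
    Returns an (M, 2) array of pixels for the points in front of the camera.
    """
    homog = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    cam = (extrinsic @ homog.T)[:3]        # (3, N), now in the camera frame
    cam = cam[:, cam[2] > 0]               # keep points ahead of the camera
    px = intrinsic @ cam                   # (3, M) homogeneous pixel coords
    return (px[:2] / px[2]).T              # perspective divide -> (M, 2)
```

Once every dataset is reduced to `(points, extrinsic, intrinsic)` triples of this kind, the same downstream pipeline can consume public or in-house captures alike, which is the unification the abstract argues for.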

Keywords: augmentation, autonomous driving, camera, custom end-to-end pipeline, data unification, lidar, post-processing, preprocessing

Procedia PDF Downloads 123
24558 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues

Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid

Abstract:

New approaches for analyzing and visualizing data streams in real time are important in enabling decision makers to make prompt decisions. Financial market trading and surveillance, large-scale emergency response, and crowd control are example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required to process the streaming data. Today, a range of tools implementing some of these functionalities is available. In this paper, we present a chronological evaluation of the evolution of technologies for supporting real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered general information, main techniques, challenges, and open issues. The techniques for streaming text visualization are identified in chronological order based on the Text Visualization Browser. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges of each identified tool.

Keywords: information visualization, visual analytics, text mining, visual text analytics tools, big data visualization

Procedia PDF Downloads 399
24557 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise predictions of customer churn probabilities to increase the effectiveness of their customer relations services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, customer expenditure data are analyzed using an artificial neural network (ANN). The ANN model is applied to data from customers with different billing durations. The proposed model predicts churn probabilities with 83% accuracy from only three months of expenditure data, and the prediction accuracy increases to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in bill amounts.
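The kind of model described can be sketched as a small feedforward network trained on monthly expenditure features; the architecture, learning rate, and synthetic data below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyChurnNet:
    """One-hidden-layer network trained with full-batch gradient descent.

    A didactic stand-in for the paper's ANN; the hidden size, learning
    rate, and input features (monthly bill amounts) are assumptions.
    """
    def __init__(self, n_in, n_hidden=8, lr=1.0):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2).ravel()

    def train_step(self, X, y):
        p = self.forward(X)
        # gradient of mean binary cross-entropy w.r.t. pre-sigmoid output
        d2 = (p - y)[:, None] / len(y)
        d1 = (d2 @ self.W2.T) * (1 - self.h ** 2)   # backprop through tanh
        self.W2 -= self.lr * self.h.T @ d2
        self.b2 -= self.lr * d2.sum(0)
        self.W1 -= self.lr * X.T @ d1
        self.b1 -= self.lr * d1.sum(0)
        return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
```

In the study's setting, each row of X would hold a customer's monthly expenditures (three or nine values, optionally extended with bill-change features), and y would mark whether that customer churned.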

Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks

Procedia PDF Downloads 144
24556 The Face Sync-Smart Attendance

Authors: Bekkem Chakradhar Reddy, Y. Soni Priya, Mathivanan G., L. K. Joshila Grace, N. Srinivasan, Asha P.

Abstract:

Currently, there are many problems related to marking attendance in schools, offices, and other places, and organizations tasked with collecting daily attendance data have numerous concerns. There are different ways to mark attendance; the most commonly used method is collecting data manually by calling each student, which is slow and problematic. Many new technologies now help to mark attendance automatically, reducing work and recording the data. We propose implementing attendance marking using the latest of these technologies: a system based on face identification and analysis. The project was developed by gathering faces and analyzing the data, using deep learning algorithms to recognize faces effectively. The attendance record is then forwarded to the host by e-mail. The project was implemented in Python; the libraries used are CV2, face_recognition, and smtplib.
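The mail-forwarding step can be sketched with the standard library; the addresses and subject format below are illustrative assumptions:

```python
from email.message import EmailMessage

def build_attendance_report(present, absent, host_addr):
    """Compose the daily attendance e-mail sent to the host.

    present/absent: lists of names; host_addr: recipient address.
    Sender address and subject wording are hypothetical placeholders.
    """
    msg = EmailMessage()
    msg["Subject"] = f"Attendance report: {len(present)} present, {len(absent)} absent"
    msg["From"] = "facesync@example.edu"
    msg["To"] = host_addr
    body = ("Present:\n" + "\n".join(present)
            + "\n\nAbsent:\n" + "\n".join(absent))
    msg.set_content(body)
    return msg
```

The composed message would then be sent with `smtplib.SMTP(server).send_message(msg)`, with the server and credentials depending on the deployment.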

Keywords: python, deep learning, face recognition, CV2, smtplib, dlib

Procedia PDF Downloads 57
24555 The Impact of Artificial Intelligence on Digital Crime

Authors: Á. L. Bendes

Abstract:

By the end of the second decade of the 21st century, artificial intelligence (AI) has become an unavoidable part of everyday life and has necessarily aroused the interest of researchers in almost every field of science. This is no different in the case of jurisprudence, whose task is not only to create its own theoretical paradigm related to AI but also, with similar importance, to create legal frameworks suitable for the future application of the law. Perhaps AI's biggest impact is on digital crime. The prognosis that AI can reshape the practical application of law, and ultimately the entire legal life, is also of considerable importance. Criminal law was originally created to sanction the criminal acts of persons, so applying its concepts, with their original content, to AI-related violations is not expected to be sufficient in the future. Taking this into account, it is necessary to rethink the basic elements of criminal law, such as the act and factuality; in connection with barriers to criminality and criminal sanctions, several new aspects have also appeared that challenge both criminal law researchers and legislators. Technological change in this field should be monitored continuously, since both the legal and scientific frameworks will need to be re-created in order to correctly assess events that may require a criminal law response. Artificial intelligence has thoroughly reshaped the world of digital crime: new crimes have appeared that the legal systems of many countries do not regulate, or do not regulate adequately. Investigating and sanctioning these digital crimes is considered important. The primary goal is prevention, for which a comprehensive picture of the intertwining of artificial intelligence and digital crime is needed.
The goal of this paper is to explore these problems, present them, and create comprehensive proposals that support legal certainty.

Keywords: artificial intelligence, chat forums, defamation, international criminal cooperation, social networking, virtual sites

Procedia PDF Downloads 89