Search results for: minimum data set
25296 Starting Characteristic Analysis of LSPM for Pumping System Considering Demagnetization
Authors: Subrato Saha, Yun-Hyun Cho
Abstract:
This paper presents the design process of a high-performance 3-phase, 3.7 kW, 2-pole line start permanent magnet synchronous motor for a pumping system. A method was proposed to study the starting torque characteristics considering line start with a high inertia load. A d-q model including the cage was built to study the synchronization capability. Time-stepping finite element method analysis was utilized to accurately predict the dynamic and transient performance, efficiency, starting current, speed curve, etc. Considering the load torque of pumps during the starting stage, the rotor bar was designed for minimum demagnetization of the permanent magnet caused by the large starting current.
Keywords: LSPM, starting analysis, demagnetization, FEA, pumping system
Procedia PDF Downloads 471
25295 Pathogenic Bacteria Isolated from Diseased Giant Freshwater Prawn in Shrimp Culture Ponds
Authors: Kusumawadee Thancharoen, Rungrat Nontawong, Thanawat Junsom
Abstract:
Pathogenic bacterial flora was isolated from giant freshwater prawns, Macrobrachium rosenbergii. Infected shrimp samples were collected from BuaBan Aquafarm in Kalasin Province, Thailand, between June and September 2018. Bacterial species were isolated by serial dilution and plated on Thiosulfate Citrate Bile Salt Sucrose (TCBS) agar medium. A total of 89 colonies were isolated and identified using the API 20E biochemical tests. Results showed the presence of the genera Aeromonas, Citrobacter, Chromobacterium, Providencia, Pseudomonas, Stenotrophomonas and Vibrio. The maximum number of isolates was recorded in Pseudomonas (50.57%), with the minimum observed in Chromobacterium and Providencia (1.12%).
Keywords: biochemical test, giant freshwater prawn, isolation, salt tolerance, shrimp diseases
Procedia PDF Downloads 238
25294 Touch Interaction through Tagging Context
Authors: Gabriel Chavira, Jorge Orozco, Salvador Nava, Eduardo Álvarez, Julio Rolón, Roberto Pichardo
Abstract:
Ambient Intelligence promotes a shift in computing that involves fitting out environments with devices to support context-aware applications. One of its main objectives is to reduce the user's interactive effort to a minimum; the diversity and quantity of devices that surround people in existing environments increase the difficulty of achieving this goal. The mobile phone, with its remarkable global penetration, is an excellent device for delivering new services to the user without requiring a learning effort. The environment will have to be able to perceive all of the interaction techniques. In this paper, we present the PICTAC model (Perceiving touch Interaction through TAgging Context), which delivers services to members of a research group in this way.
Keywords: ambient intelligence, tagging context, touch interaction, touching services
Procedia PDF Downloads 384
25293 Robust Speed Sensorless Control to Estimated Error for PMa-SynRM
Authors: Kyoung-Jin Joo, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee
Abstract:
Recently, the permanent magnet-assisted synchronous reluctance motor (PMa-SynRM), which can be substituted for the induction motor, has been studied because of the need to develop premium high-efficiency motors for the minimum energy performance standard (MEPS). PMa-SynRM requires speed and position information for motor speed and torque control. However, applying sensors brings problems such as a shortage of sensor mounting space and additional cost. Therefore, in this paper, speed-sensorless control based on a model reference adaptive system (MRAS) is introduced to eliminate the sensor. The sensorless method is constructed with a reference model as the standard and an adaptive model as the state observer. The proposed algorithm is verified by simulation.
Keywords: PMa-SynRM, sensorless control, robust estimation, MRAS method
Procedia PDF Downloads 404
25292 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems
Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber
Abstract:
Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient, even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if it exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement
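As an illustration of the path-counting step described above, the following sketch computes the number of node-disjoint paths linking a set of input nodes to a set of output nodes via node splitting and maximum flow; the networkx library, the random test graph and the node labels are assumptions for illustration, not the authors' implementation.

```python
import networkx as nx

def disjoint_path_count(graph, inputs, outputs):
    """Count node-disjoint paths from the input set to the output set."""
    flow_net = nx.DiGraph()
    for v in graph.nodes:
        # Split every node into (v, in) -> (v, out) with unit capacity so
        # that each node can be used by at most one path.
        flow_net.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in graph.edges:
        flow_net.add_edge((u, "out"), (v, "in"), capacity=1)
    # A super source feeds all inputs; a super sink collects all outputs.
    for s in inputs:
        flow_net.add_edge("SOURCE", (s, "in"), capacity=1)
    for t in outputs:
        flow_net.add_edge((t, "out"), "SINK", capacity=1)
    value, _ = nx.maximum_flow(flow_net, "SOURCE", "SINK")
    return value

# In this structural framework, invertibility requires at least as many
# disjoint paths as there are unknown inputs.
G = nx.gnp_random_graph(50, 0.05, directed=True, seed=1)
inputs, outputs = [0, 1, 2], [47, 48, 49]
print(disjoint_path_count(G, inputs, outputs) >= len(inputs))
```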
Procedia PDF Downloads 150
25291 Variance-Aware Routing and Authentication Scheme for Harvesting Data in Cloud-Centric Wireless Sensor Networks
Authors: Olakanmi Oladayo Olufemi, Bamifewe Olusegun James, Badmus Yaya Opeyemi, Adegoke Kayode
Abstract:
The wireless sensor network (WSN) has made a significant contribution to the emergence of various intelligent services and cloud-based applications. Most of the time, these data are stored on a cloud platform for efficient management and sharing among different services or users. However, the sensitivity of the data makes them prone to various confidentiality and performance-related attacks during and after harvesting. Various security schemes have been developed to ensure the integrity and confidentiality of the WSNs' data. However, their specificity towards particular attacks and the resource constraints and heterogeneity of WSNs make most of these schemes imperfect. In this paper, we propose a secure variance-aware routing and authentication scheme with two-tier verification to collect, share, and manage WSN data. The scheme is capable of classifying a WSN into different subnets, detecting any attempt at a wormhole or black hole attack during harvesting, and enforcing access control on the harvested data stored in the cloud. The results of the analysis showed that the proposed scheme has more security functionalities than other related schemes, solves most WSN and cloud security issues, prevents wormhole and black hole attacks, identifies the attackers during data harvesting, and enforces access control on the harvested data stored in the cloud at low computational, storage, and communication overheads.
Keywords: data block, heterogeneous IoT network, data harvesting, wormhole attack, black hole attack, access control
Procedia PDF Downloads 84
25290 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and fewer with respect to sex data, because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple’s index, Myers’ blended index, and the age-sex accuracy index. Results: The recorded Whipple’s index for both sexes was 154.43; males had the lowest index of about 152.65 while females had the highest index of about 156.07. For Myers’ blended index, the preferences were at digits ‘0’ and ‘5’ while avoidances were at digits ‘1’ and ‘3’ for both sexes. Finally, the age-sex index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques showed the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index
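For reference, the sketch below shows how a Whipple's index of the kind reported above can be computed from single-year age counts; the counts used here are synthetic stand-ins, not the census data.

```python
import numpy as np

def whipples_index(age_counts):
    """age_counts[a] = population reported at single-year age a.

    Whipple's index over the conventional range 23-62: the population
    reporting ages ending in 0 or 5, divided by one fifth of the total,
    times 100. A value of 100 means no heaping; 500 means all ages heaped.
    """
    ages = range(23, 63)
    total = sum(age_counts[a] for a in ages)
    heaped = sum(age_counts[a] for a in ages if a % 5 == 0)
    return 100.0 * heaped / (0.2 * total)

# Synthetic age distribution with artificial heaping on digits 0 and 5.
rng = np.random.default_rng(0)
counts = {a: 1000 + (500 if a % 5 == 0 else 0) + int(rng.integers(0, 50))
          for a in range(0, 100)}
print(round(whipples_index(counts), 2))
```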
Procedia PDF Downloads 476
25289 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19
Authors: M. Bilal Ishfaq, Adnan N. Qureshi
Abstract:
COVID-19 is among the most infectious diseases of recent times; it was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the whole world. On 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths. It has brought the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging. We present the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets, including Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig data. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the COVID-19 CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. Our proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results of the proposed model are quite plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.
Keywords: COVID-19, feature engineering, artificial neural networks, radiology images
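The MRMR selection step mentioned above can be sketched as a greedy search that trades relevance against redundancy; the scikit-learn mutual information estimators and the synthetic feature matrix below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedy MRMR: maximise relevance to y minus mean redundancy to chosen features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    first = int(np.argmax(relevance))
    selected, remaining = [first], set(range(X.shape[1])) - {first}
    while len(selected) < k:
        scores = {}
        for j in remaining:
            redundancy = np.mean([mutual_info_regression(X[:, [s]], X[:, j], random_state=0)[0]
                                  for s in selected])
            scores[j] = relevance[j] - redundancy  # the MRMR "difference" criterion
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic stand-in for the hybrid (Haralick + convolutional) feature matrix.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
print(mrmr_select(X, y, k=5))
```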
Procedia PDF Downloads 75
25288 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance data containing information on customer decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as closely as possible. The basic data of the insurance pay package were collected for data mining, thus reducing the scattering of information. The data were then classified in order to obtain a decision model, or decision tree, using algorithm C4.5 (J-48). In the classification, WEKA tools are used to form the model, and testing datasets are used to test the decision tree for accurate decisions. The validation of this model in classification showed that the accurate prediction rate was 68.43% while 31.25% were errors. The same set of data was then tested with other models, i.e. Naive Bayes and Zero R. The results showed that the J-48 method could predict more accurately. The researcher therefore applied the decision tree in writing the program used to introduce the product to new customers and to support customers' decision making in purchasing the insurance package that meets their needs as closely as possible.
Keywords: decision tree, data mining, customers, life insurance pay package
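A minimal sketch of an entropy-based decision tree workflow of this kind is shown below. Note that the paper uses WEKA's J-48 (an implementation of C4.5), whereas scikit-learn provides a CART tree with an entropy criterion, so this is only an approximation; the customer columns are made-up placeholders, not the study's insurance fields.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical customer attributes; the real study uses its own insurance data.
data = pd.DataFrame({
    "age":        [25, 40, 35, 52, 29, 61, 47, 33, 58, 24],
    "income":     [20, 55, 40, 80, 25, 90, 60, 35, 70, 18],
    "dependents": [0, 2, 1, 3, 0, 2, 2, 1, 3, 0],
    "buys_pay_package": [0, 1, 1, 1, 0, 1, 1, 0, 1, 0],
})
X, y = data.drop(columns="buys_pay_package"), data["buys_pay_package"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Entropy criterion = information-gain style splits, in the spirit of C4.5.
tree = DecisionTreeClassifier(criterion="entropy", random_state=1)
tree.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```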
Procedia PDF Downloads 428
25287 The Study on Mechanical Properties of Graphene Using Molecular Mechanics
Authors: I-Ling Chang, Jer-An Chen
Abstract:
The elastic properties and fracture of two-dimensional graphene were calculated purely from the atomic bonding (stretching and bending) based on the molecular mechanics method. Considering the representative unit cell of graphene under various loading conditions, the deformations of carbon bonds and the variations of the interlayer distance could be computed numerically under the geometry constraints and the minimum energy assumption. In the elastic region, it was found that graphene is in-plane isotropic. Meanwhile, the in-plane deformation of the representative unit cell is not uniform along the armchair direction due to the discrete and non-uniform distribution of the atoms. The fracture of graphene could be predicted using fracture criteria based on the critical bond length, over which the bond would break. It was noticed that the fracture behavior was direction dependent, which is consistent with molecular dynamics simulation results.
Keywords: energy minimization, fracture, graphene, molecular mechanics
Procedia PDF Downloads 402
25286 Stress Corrosion Crack Identification with Direct Assessment Method in Pipeline Downstream from a Compressor Station
Authors: H. Gholami, M. Jalali Azizpour
Abstract:
Stress corrosion cracking (SCC) in pipelines is a type of environmentally assisted cracking (EAC). Since its discovery in 1965 as a possible cause of pipeline failure, SCC has caused, on average, one or two failures per year in the U.S. According to the NACE SCC DA standard, a pipeline segment is considered susceptible to SCC if all of the following factors are met: the operating stress exceeds 60% of the specified minimum yield strength (SMYS), the operating temperature exceeds 38°C, the segment is less than 32 km downstream from a compressor station, the age of the pipeline is greater than 10 years, and the coating type is other than fusion bonded epoxy (FBE). In this paper, as a practical experience in NISOC, the Direct Assessment (DA) method is used to identify SCC defects in an unpiggable pipeline located downstream of a compressor station.
Keywords: stress corrosion crack, direct assessment, disbondment, transgranular SCC, compressor station
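The susceptibility screening described above reduces to a simple conjunction of the five factors; a minimal sketch follows, with the thresholds taken directly from the abstract and the field names chosen only for illustration.

```python
def scc_susceptible(stress_pct_smys, temp_c, dist_downstream_km, age_years, coating):
    """Return True if all SCC DA screening factors listed above are met."""
    return (stress_pct_smys > 60            # operating stress > 60% of SMYS
            and temp_c > 38                 # operating temperature > 38 degrees C
            and dist_downstream_km < 32     # < 32 km downstream of a compressor station
            and age_years > 10              # pipeline older than 10 years
            and coating.upper() != "FBE")   # coating other than fusion bonded epoxy

print(scc_susceptible(65, 45, 12, 25, "coal tar enamel"))  # True: all factors met
print(scc_susceptible(65, 45, 12, 25, "FBE"))              # False: FBE coating
```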
Procedia PDF Downloads 386
25285 On the Equalization of Nonminimum Phase Electroacoustic Systems Using Digital Inverse Filters
Authors: Avelino Marques, Diamantino Freitas
Abstract:
Some important electroacoustic systems, such as loudspeaker systems, exhibit nonminimum phase behavior that demands considerable effort when applying advanced digital signal processing techniques, such as linear equalization. In this paper, the position and the number of zeros and poles of the inverse filter, of FIR or IIR type, designed using time domain techniques, are studied, compared and related to the nonminimum phase zeros of the system to be equalized. Conclusions about the impact of the position of the system's nonminimum phase zeros on the length/order of the inverse filter and on the delay of the equalized system are outlined, as a guide for deciding in advance which type of filter will be more adequate.
Keywords: loudspeaker systems, nonminimum phase system, FIR and IIR filter, delay
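One common time-domain design of the kind compared here is the least-squares FIR inverse with a modelling delay; the sketch below uses a toy nonminimum phase system and arbitrary filter length and delay as assumptions for illustration, not the paper's design procedure.

```python
import numpy as np

def ls_inverse_fir(g, n_taps, delay):
    """Find h (length n_taps) minimising ||conv(g, h) - delayed unit impulse||_2."""
    n_out = len(g) + n_taps - 1
    # Convolution matrix: column k is g shifted down by k samples.
    C = np.zeros((n_out, n_taps))
    for k in range(n_taps):
        C[k:k + len(g), k] = g
    d = np.zeros(n_out)
    d[delay] = 1.0                       # target: delayed unit impulse
    h, *_ = np.linalg.lstsq(C, d, rcond=None)
    return h

g = np.array([1.0, -2.0, 0.5])           # toy system with one zero outside the unit circle
h = ls_inverse_fir(g, n_taps=64, delay=32)
residual = np.convolve(g, h)
residual[32] -= 1.0
print("equalisation error (2-norm):", np.linalg.norm(residual))
```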
Procedia PDF Downloads 77
25284 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
Procedia PDF Downloads 65
25283 Assessing Supply Chain Performance through Data Mining Techniques: A Case of Automotive Industry
Authors: Emin Gundogar, Burak Erkayman, Nusret Sazak
Abstract:
Providing effective performance management across the whole supply chain is a critical issue and is hard to implement. Proper evaluation of integrated data can yield accurate information. Analysing supply chain data through OLAP (On-Line Analytical Processing) technologies may provide a multi-angle view of the work and support consolidation. In this study, association rules and classification techniques are applied to measure the supply chain performance metrics of an automotive manufacturer in Turkey. The main criteria and important rules are determined. A comparison of the results of the algorithms is presented.
Keywords: supply chain performance, performance measurement, data mining, automotive
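The association-rule step can be sketched with an off-the-shelf Apriori implementation; the mlxtend library and the one-hot table of supply chain events below are assumptions used only for illustration, not the study's data or tooling.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot encoded supply chain events, one row per order.
events = pd.DataFrame({
    "late_supplier_delivery": [1, 1, 0, 1, 0, 1, 0, 1],
    "line_stoppage":          [1, 1, 0, 1, 0, 0, 0, 1],
    "expedited_freight":      [1, 0, 0, 1, 0, 1, 0, 1],
    "stockout":               [0, 1, 0, 1, 0, 0, 0, 1],
}).astype(bool)

# Mine frequent itemsets, then derive rules filtered by confidence.
frequent = apriori(events, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```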
Procedia PDF Downloads 513
25282 Multimodal Data Fusion Techniques in Audiovisual Speech Recognition
Authors: Hadeer M. Sayed, Hesham E. El Deeb, Shereen A. Taie
Abstract:
In the big data era, we are facing a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities in a joint representation with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. It provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each of them. Furthermore, the audiovisual speech recognition task is presented as a case study of multimodal data fusion approaches, and the open issues arising from the limitations of current studies are discussed. This paper can be considered a powerful guide for researchers interested in the field of multimodal data fusion, and in audiovisual speech recognition in particular.
Keywords: multimodal data, data fusion, audio-visual speech recognition, neural networks
Procedia PDF Downloads 112
25281 Compact Finite Difference Schemes for Fourth Order Parabolic Partial Differential Equations
Authors: Sufyan Muhammad
Abstract:
Recently, achieving highly efficient yet highly accurate solutions has become a major target of the numerical analysis community. The concept is termed compact schemes and has gained great popularity; consequently, we construct compact schemes for fourth-order parabolic partial differential equations used to study vibrations in structures. To demonstrate the superiority of the newly constructed schemes, we consider a range of examples. We have achieved the following: (a) the numerical scheme utilizes the minimum number of stencil points (which means the new scheme is compact); (b) the numerical scheme is highly accurate (which means the new scheme is reliable); and (c) the numerical scheme is highly efficient (which means the new scheme is fast).
Keywords: central finite differences, compact schemes, Bernoulli's equations, finite differences
Procedia PDF Downloads 288
25280 Application the Queuing Theory in the Warehouse Optimization
Authors: Jaroslav Masek, Juraj Camaj, Eva Nedeliakova
Abstract:
The aim of optimizing store management is not only the design of the store itself, including its equipment, technology and operation. In optimizing store management we also need to consider the synchronization of technological, transport, storage and service operations throughout the whole logistic chain, in such a way that a natural flow of material from provider to consumer is achieved by the shortest possible route, in the shortest possible time, in the requested quality and quantity, and with minimum costs. The paper deals with the application of queuing theory to the optimization of warehouse processes. The first part gives general information about the problems of warehousing and the use of mathematical methods for optimizing logistics chains. The second part describes the preparation of a warehouse model within queuing theory. The conclusion of the paper includes two examples of using queuing theory in practice.
Keywords: queuing theory, logistics system, mathematical methods, warehouse optimization
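As a reminder of the queuing-theory building blocks such a warehouse model rests on, the sketch below evaluates the standard steady-state measures of a single-server M/M/1 station; the arrival and service rates are illustrative, not taken from the paper's examples.

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (lam = arrival rate, mu = service rate)."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                  # server utilisation
    L = rho / (1 - rho)             # mean number in the system
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in the queue
    W = 1 / (mu - lam)              # mean time in the system
    Wq = rho / (mu - lam)           # mean waiting time in the queue
    return {"utilisation": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: 12 pallets arrive per hour, one forklift serves 15 pallets per hour.
print(mm1_metrics(lam=12, mu=15))
```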
Procedia PDF Downloads 593
25279 The Study of Dengue Fever Outbreak in Thailand Using Geospatial Techniques, Satellite Remote Sensing Data and Big Data
Authors: Tanapat Chongkamunkong
Abstract:
The objective of this paper is to present a practical use of Geographic Information Systems (GIS) for public health, based on the spatial correlation between multiple factors and dengue fever outbreaks. Meteorological factors, demographic factors and environmental factors are compiled using GIS techniques along with global satellite mapping remote sensing (RS) data. We use monthly dengue fever cases, population density, precipitation, and Digital Elevation Model (DEM) data. The scope covers the study area under the climate change of the El Niño–Southern Oscillation (ENSO), indicated by sea surface temperature (SST), and a study area in 12 provinces of Thailand, with remote sensing (RS) data from January 2007 to December 2014.
Keywords: dengue fever, sea surface temperature, Geographic Information System (GIS), remote sensing
Procedia PDF Downloads 198
25278 Surface Deformation Studies in South of Johor Using the Integration of InSAR and Resistivity Methods
Authors: Sirajo Abubakar, Ismail Ahmad Abir, Muhammad Sabiu Bala, Muhammad Mustapha Adejo, Aravind Shanmugaveloo
Abstract:
Over the years, land subsidence has been a serious threat, mostly to urban areas. Land subsidence is the sudden sinking or gradual downward settling of the ground's surface with little or no horizontal motion. In most areas, land subsidence is a slow process that covers a large area; therefore, it sometimes goes unnoticed. The south of Johor is the area of interest for this project because it is going through rapid urbanization. The objective of this research is to evaluate and identify potential deformations in the south of Johor using integrated remote sensing and 2D resistivity methods. Synthetic aperture radar interferometry (InSAR), a remote sensing technique, has the potential to map coherent displacements at centimeter to millimeter resolutions. The persistent scatterer interferometry (PSI) stacking technique was applied to Sentinel-1 data to detect earth deformation in the study area. Dipole-dipole configuration resistivity profiling was conducted in three areas to determine the subsurface features there. The interpreted subsurface features were then correlated with the remote sensing results to predict the possible causes of subsidence and uplift in the south of Johor. Based on the results obtained, West Johor Bahru (0.63 mm/year) and Ulu Tiram (1.61 mm/year) are undergoing uplift due to possible geological uplift. On the other hand, East Johor Bahru (-0.26 mm/year) and Senai (-1.16 mm/year) undergo subsidence due to possible fractures and loading from granitic boulders. Land subsidence must be taken seriously as it can cause serious damage to infrastructure and human life. Monitoring land subsidence and taking preventive action must be done to prevent disasters.
Keywords: interferometric synthetic aperture radar, persistent scatter, minimum spanning tree, resistivity, subsidence
Procedia PDF Downloads 147
25277 Model of Optimal Centroids Approach for Multivariate Data Classification
Authors: Pham Van Nha, Le Cam Binh
Abstract:
Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm. PSO was inspired by the natural behavior of birds and fish in migration and foraging for food. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO's ideas are simple and easy to understand, but PSO has mostly been applied to simple model problems. We think that in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, the Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on some benchmark data sets to demonstrate the effectiveness of MOC compared with several proposed schemes.
Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization
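A minimal sketch of a PSO search for class centroids, in the spirit of an optimal-centroids model, is given below: each particle encodes one candidate set of centroids and the fitness is the total distance of the data to their nearest centroid. The swarm parameters and synthetic data are assumptions for illustration, not the paper's MPSO or MOC formulation.

```python
import numpy as np
from sklearn.datasets import make_blobs

def fitness(centroids, X):
    # Total distance of every point to its nearest candidate centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso_centroids(X, k, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = X[rng.integers(0, len(X), size=(n_particles, k))]      # init on data points
    vel = rng.normal(0.0, 0.1, size=pos.shape)
    pbest, pbest_val = pos.copy(), np.array([fitness(p, X) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, 1, 1))
        r2 = rng.random((n_particles, 1, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p, X) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
print(pso_centroids(X, k=3))
```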
Procedia PDF Downloads 208
25276 Optimization of Process Parameters by Using Taguchi Method for Bainitic Steel Machining
Authors: Vinay Patil, Swapnil Kekade, Ashish Supare, Vinayak Pawar, Shital Jadhav, Rajkumar Singh
Abstract:
In recent years, bainitic steel has been used in automotive and non-automotive sectors due to its high strength. Bainitic steel is difficult to machine because of its high hardness; hence, in this paper the machinability of bainitic steel is studied using the Taguchi design of experiments (DOE) approach. Conventional turning experiments were carried out using an L16 orthogonal array for three input parameters, viz. cutting speed, depth of cut and feed. The Taguchi method is applied to study the performance characteristics of the machining parameters with respect to surface roughness (Ra), cutting force and tool wear rate. Using Taguchi analysis, optimized process parameters for the best surface finish and minimum cutting forces were identified.
Keywords: conventional turning, Taguchi method, S/N ratio, bainitic steel machining
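The smaller-the-better signal-to-noise ratio used in this kind of Taguchi analysis of surface roughness can be computed as sketched below; the Ra readings are made-up values, not the experimental results of the paper.

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 * log10(mean(y^2)); a larger S/N means a better (smaller) response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicated Ra measurements (micrometres) for three of the L16 runs.
runs = {"run 1": [1.82, 1.76], "run 2": [1.31, 1.40], "run 3": [0.95, 1.02]}
for name, ra in runs.items():
    print(name, round(sn_smaller_the_better(ra), 2), "dB")
```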
Procedia PDF Downloads 331
25275 Study of Inhibition of the End Effect Based on AR Model Predict of Combined Data Extension and Window Function
Authors: Pan Hongxia, Wang Zhenhua
Abstract:
In this paper, the end effect that arises during empirical mode decomposition (EMD) is effectively suppressed by combining data extension based on AR model prediction with a window function method. Simulation on a synthetic signal shows that the method achieves the desired effect; the method was then applied to gearbox test data, where it also performed well and improved the calculation accuracy of the subsequent data processing. This lays a good foundation for gearbox fault diagnosis under various working conditions.
Keywords: gearbox, fault diagnosis, AR model, end effect
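The AR-based extension can be sketched as forecasting extra samples beyond each end of the series, with the left end handled via time reversal; the statsmodels AutoReg estimator, the model order and the toy signal below are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_extend(x, n_ext, order=20):
    """Return x extended by n_ext AR-predicted samples at each end."""
    def forecast(series):
        res = AutoReg(series, lags=order).fit()
        # Out-of-sample prediction continues the series beyond its last sample.
        return res.predict(start=len(series), end=len(series) + n_ext - 1)
    right = forecast(x)
    left = forecast(x[::-1])[::-1]        # predict "backwards" on the reversed series
    return np.concatenate([left, x, right])

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 37 * t)
extended = ar_extend(x, n_ext=50)
print(len(x), "->", len(extended))
```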
Procedia PDF Downloads 366
25274 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act
Authors: Maria Jędrzejczak, Patryk Pieniążek
Abstract:
The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, which is driven by massive data resources linked to powerful algorithms and powerful computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which has prompted an analysis covering both the legal environment and the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held, at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks for the processed data of an unprecedented magnitude. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine what the mutual relationship between these regulations is and which areas of personal data protection regulation are particularly important for processing personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: 1) which principles of data protection should be applied, in particular, when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches in such systems should be regulated. The need to change the regulations regarding the rights and obligations of data subjects and entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data protection processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation of artificial intelligence. The main question that the authors want to answer is how the European Union's regulation of data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples to illustrate the practical implications of these legal regulations.
Keywords: data protection law, personal data, AI law, personal data breach
Procedia PDF Downloads 65
25273 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking
Authors: Handie Pramana Putra, Ani Dijah Rahajoe
Abstract:
The proliferation of smart devices and advancements in mobile communication technologies, together with the widespread influence of e-commerce, have permeated various facets of life. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in the exploration of process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identifying abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, from user login to the e-commerce platform through to the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.
Keywords: database, data analysis, DPNE, extended data flow, e-commerce
Procedia PDF Downloads 56
25272 Advanced Analytical Competency Is Necessary for Strategic Leadership to Achieve High-Quality Decision-Making
Authors: Amal Mohammed Alqahatni
Abstract:
This paper is a non-empirical analysis of the existing literature on digital leadership competency, data-driven organizations, and dealing with AI technology (big data). It provides insights into the importance of developing a leader's analytical skills and style to be more effective for high-quality decision-making in a data-driven organization and to achieve creativity during the organization's digital transformation. Despite the enormous potential that big data has, there are not enough experts in the field. Many organizations face issues with leadership style, which is considered an obstacle to organizational improvement. The paper investigates the obstacles to leadership style in this context and the challenges leaders face in coaching and development. Leaders' lack of analytical skill with AI technology, such as big data tools, was noted, as was a lack of understanding of the value of that data, resulting in poor communication with others, especially in meetings when decisions should be made. By acknowledging the different dynamics of work competency and organizational structure and culture, organizations can make the necessary adjustments to best support their leaders. This paper reviews prior research studies and applies what is known to assist with current obstacles. It addresses how analytical leadership will assist in overcoming challenges in a data-driven organization's work environment.
Keywords: digital leadership, big data, leadership style, digital leadership challenge
Procedia PDF Downloads 69
25271 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain oriented (CGO) and high permeability grain oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of the magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to have repeatable and comparable measurements. The low noise NI 4461 card, with 24-bit resolution, a 204.8 kHz sampling rate and 92 kHz bandwidth, was chosen to minimize the influence of thermal noise on the measurements. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is because of the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications.
Keywords: flux density, electrical steel, LabVIEW, magnetization
Procedia PDF Downloads 291
25270 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions
Authors: Chaitanya Varma, Arpan Mehar
Abstract:
The present study demonstrates a procedure for analysing speed data collected on various four-lane divided sections in India. Field data for the study were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to speed distributions were estimated. Different statistical distributions were analysed for vehicle-type speed data and for mixed traffic speed data. It was found that vehicle-type speed data follow either the normal or the log-normal distribution, whereas the mixed traffic speed data follow more than one type of statistical distribution. The most common fits observed for mixed traffic speed data were the beta distribution and the Weibull distribution. Separate operating speed models based on traffic and roadway geometric parameters are proposed in the present study. Operating speed models with traffic parameters and curve geometry parameters were established. Two different operating speed models were proposed with the variables 1/R and Ln(R) and were found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
Keywords: highway, mixed traffic flow, modeling, operating speed
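The distribution-fitting step can be sketched by fitting each candidate distribution named above and ranking the fits with a Kolmogorov-Smirnov test; the scipy routines and the synthetic spot-speed sample below are assumptions for illustration, not the Indian field data.

```python
import numpy as np
from scipy import stats

# Synthetic spot speeds in km/h, standing in for the field observations.
speeds = stats.norm.rvs(loc=72, scale=9, size=400, random_state=1)

candidates = {
    "normal": stats.norm,
    "log-normal": stats.lognorm,
    "beta": stats.beta,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(speeds)                           # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(speeds, dist.cdf, args=params)
    print(f"{name:>10}: KS = {ks_stat:.3f}, p = {p_value:.3f}")
```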
Procedia PDF Downloads 460
25269 Optimized Cropping Calendar and Land Suitability for Maize through GIS and Crop Modelling
Authors: Marilyn S. Painagan, Willie Jones B. Saliling
Abstract:
This paper reports an optimized cropping calendar and land suitability assessment for maize in North Cotabato, derived from modeling crop productivity over time and space. Using Quantum GIS, shapefiles of eight representative soil types and 0.3° x 0.3° climate grids were intersected to form thirty-two pedoclimatic zones within the boundaries of the province. Surveys were done to ascertain crop performance and phenological properties in the field. Based on these surveys, crop parameters were calibrated for a specific variety of maize. Soil properties and climatic data (daily precipitation, maximum and minimum temperatures) from the pedoclimatic zones were loaded into the FAO AquaCrop water productivity model along with the crop properties from the field surveys to simulate yield from 1980 to 2010. The average yield per month was computed to determine the planting months with the highest and lowest probable yield in a year, assuming that all lands were planted with maize. The yield attributes were visualized in the Quantum GIS environment. The study revealed that optimal cropping patterns varied across North Cotabato. The highest probable yield (8000 kg/ha) can be obtained when maize is planted in May and September (sandy clay-loam soils) in the northern part of the province, while the lowest probable yield (1000 kg/ha) can be obtained when maize is planted in January, February and March (clay loam soils) in the northern part of the province. Yields are simulated on the basis of varieties currently planted by farmers of North Cotabato. The resulting maps suggest where and when maize is most suitable to achieve high yields. There is a need to ground-truth and validate the cropping calendar in the field.
Keywords: AquaCrop, Quantum GIS, maize, cropping calendar, water productivity
Procedia PDF Downloads 255
25268 Accurate HLA Typing at High-Digit Resolution from NGS Data
Authors: Yazhi Huang, Jing Yang, Dingge Ying, Yan Zhang, Vorasuk Shotelersuk, Nattiya Hirankarn, Pak Chung Sham, Yu Lung Lau, Wanling Yang
Abstract:
Human leukocyte antigen (HLA) typing from next generation sequencing (NGS) data has potential applications in clinical laboratories and population genetic studies. Here we introduce a novel technique for HLA typing from NGS data based on read mapping using a comprehensive reference panel containing all known HLA alleles, together with de novo assembly of the gene-specific short reads. Accurate HLA typing at high-digit resolution was achieved when the method was tested on publicly available NGS data, outperforming other newly developed tools such as HLAminer and PHLAT.
Keywords: human leukocyte antigens, next generation sequencing, whole exome sequencing, HLA typing
Procedia PDF Downloads 663
25267 Early Childhood Education: Teachers' Ability to Assess
Authors: Ade Dwi Utami
Abstract:
Pedagogic competence is the basic competence of teachers to perform their tasks as educators. The ability to assess has become one of the demands within teachers' pedagogic competence. Teachers' ability to assess is related to curriculum instructions and applications. This research is aimed at obtaining data concerning teachers' ability to assess, which comprises understanding assessment, determining assessment type, tools and procedure, conducting the assessment process, and using assessment result information. It uses a mixed-method explanatory technique in which qualitative data is used to verify the quantitative data obtained through a survey. The technique of quantitative data collection is a test, whereas the qualitative data collection is by observation, interview and documentation. The analyzed data are then processed through a proportion study technique to be categorized into high, medium and low. The result of the research shows that teachers' ability to assess can be grouped into three categories, namely 2% high, 4% medium and 94% low. The data show that teachers' ability to assess is still relatively low. Teachers lack knowledge and comprehension of assessment application. This is verified by the qualitative data showing that teachers did not state which aspect was assessed in learning, did not record children's behavior, and did not use the resulting data as a consideration in designing a program. Teachers have assessment documents, yet these only serve as a means of completing teachers' administration for the certification program. Thus, assessment documents were not used on the basis of acquired knowledge. This condition should become a consideration for teacher education institutions and the government in improving teachers' pedagogic competence, including the ability to assess.
Keywords: assessment, early childhood education, pedagogic competence, teachers
Procedia PDF Downloads 246