Search results for: micro data
25511 Experimental Investigation of Performance Anode Side of PEM Fuel Cell with Spin Method Coated with YSZ+SDC
Authors: Gürol Önal, Kevser Dinçer, Salih Yayla
Abstract:
In this study, the performance of a proton exchange membrane (PEM) fuel cell was experimentally investigated. The anode side of the PEM fuel cell was coated with YSZ+SDC by the spin method. A solution containing 0.1 g Yttria-Stabilized Zirconia (YSZ), 0.1 g Samarium-Doped Ceria (SDC) and 10 mL methanol was prepared, drawn into a micro-pipette, and then applied to the anode side of the PEM fuel cell by the spin method. In the experimental study, the current, voltage and power performances before and after coating were recorded and compared. It was found that the efficiency of the PEM fuel cell increases after coating with YSZ+SDC.
Keywords: fuel cell, Polymer Electrolyte Membrane (PEM), membrane, spin method
Procedia PDF Downloads 562
25510 A Compact Wearable Slot Antenna for LTE and WLAN Applications
Authors: Haider K. Raad
Abstract:
In this paper, a compact, wide-band, ultra-thin and flexible slot antenna intended for wearable applications is presented. The antenna is designed to provide Wireless Local Area Network (WLAN) and Long Term Evolution (LTE) connectivity. The design exhibits a relatively wide bandwidth (1600-3500 MHz under the -6 dB impedance bandwidth criterion). The antenna is positioned on a 33 mm x 30 mm flexible substrate with a thickness of 50 µm. Antenna properties, such as the far-field radiation patterns and the scattering parameter S11, are provided. The compact, thin and flexible design, along with excellent radiation characteristics, is deemed suitable for integration into flexible and wearable devices.
Keywords: wearable electronics, slot antenna, LTE, WLAN
Procedia PDF Downloads 234
25509 BigCrypt: A Probable Approach of Big Data Encryption to Protect Personal and Business Privacy
Authors: Abdullah Al Mamun, Talal Alkharobi
Abstract:
As data sizes grow, people have become more accustomed to storing large amounts of secret information in cloud storage, and companies routinely need to transfer massive business files from one end to another. Privacy is lost if such data are transmitted as they are, and repeating this scenario without securing the communication mechanism, i.e., without proper encryption, compounds the risk. Although asymmetric-key encryption solves the main problem of symmetric-key encryption, it can only encrypt a limited amount of data, which makes it inapplicable to large data encryption. In this paper, we propose a probable Pretty Good Privacy approach for encrypting big data using both symmetric and asymmetric keys. Our goal is to encrypt huge collections of information and transmit them through a secure communication channel while preserving business and personal privacy. To justify our method, an experimental dataset from three different platforms is provided. We show that our approach works efficiently and reliably for massive volumes of various data.
Keywords: big data, cloud computing, cryptography, hadoop, public key
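The PGP-style pattern the abstract describes, encrypting the bulk data with a symmetric key and wrapping that key with an asymmetric one, can be sketched as follows. This is a minimal illustration using the third-party cryptography package, not the BigCrypt implementation itself; the function names and sample payload are assumptions.

```python
# Minimal hybrid (PGP-style) encryption sketch: a symmetric key encrypts the
# bulk data, and an RSA public key encrypts that symmetric key.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def hybrid_encrypt(data: bytes, rsa_public_key):
    session_key = Fernet.generate_key()           # symmetric key for the payload
    ciphertext = Fernet(session_key).encrypt(data)
    wrapped_key = rsa_public_key.encrypt(         # asymmetric wrap of the small key
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key: bytes, ciphertext: bytes, rsa_private_key):
    session_key = rsa_private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return Fernet(session_key).decrypt(ciphertext)

if __name__ == "__main__":
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped, ct = hybrid_encrypt(b"large business file contents", key.public_key())
    assert hybrid_decrypt(wrapped, ct, key) == b"large business file contents"
```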
Procedia PDF Downloads 320
25508 Application of Pattern Recognition Technique to the Quality Characterization of Superficial Microstructures in Steel Coatings
Authors: H. Gonzalez-Rivera, J. L. Palmeros-Torres
Abstract:
This paper describes the application of traditional computer vision techniques as a procedure for the automatic measurement of the secondary dendrite arm spacing (SDAS) from microscopic images. The algorithm is capable of finding the linear or curve-shaped secondary column of the main microstructure, measuring its length in micrometres and counting the number of spaces between dendrites. The automatic characterization was compared with a set of 1728 manually characterized images, leading to an accuracy of −0.27 µm for the length determination and a precision of ±2.78 counts for dendrite spacing counting, while reducing the characterization time from 7 hours to 2 minutes.
Keywords: dendrite arm spacing, microstructure inspection, pattern recognition, polynomial regression
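A much-simplified version of such a measurement, not the authors' algorithm, can be sketched by counting dark dendrite arms as intensity minima along a line profile; a synthetic profile stands in for a real image row and the µm-per-pixel calibration is an assumed value.

```python
# Simplified illustration (not the authors' algorithm): estimate secondary
# dendrite arm spacing (SDAS) by counting intensity minima along a line profile.
import numpy as np
from scipy.signal import find_peaks

UM_PER_PIXEL = 0.5                              # hypothetical calibration
x = np.arange(400)
profile = 128 + 60 * np.cos(2 * np.pi * x / 40) + np.random.default_rng(0).normal(0, 5, x.size)
# in practice: profile = grayscale_image[row, :]

valleys, _ = find_peaks(-profile, distance=10)  # dark dendrite arms appear as minima
n_spaces = len(valleys) - 1
length_um = (valleys[-1] - valleys[0]) * UM_PER_PIXEL
print(f"{n_spaces} spacings over {length_um:.1f} um -> SDAS = {length_um / n_spaces:.2f} um")
```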
Procedia PDF Downloads 46
25507 Implementation of Big Data Concepts Led by the Business Pressures
Authors: Snezana Savoska, Blagoj Ristevski, Violeta Manevska, Zlatko Savoski, Ilija Jolevski
Abstract:
Big data is widely accepted by pharmaceutical companies as a result of business demands created through legal pressure. Pharmaceutical companies face many legal demands as well as standards-related demands and have to adapt their procedures to the legislation. To cope with these demands, they have to standardize the usage of their current information technology and use the latest software tools. This paper highlights some important aspects of the experience with big data project implementation in a Macedonian pharmaceutical company. These projects improved its business processes with the help of new software tools selected to comply with legal and business demands. The company uses IT as a strategic tool to obtain competitive advantage on the market and to reengineer its processes towards the new Internet economy and quality demands. The company is required to manage vast amounts of structured as well as unstructured data. For these reasons, it implements projects for emerging and appropriate software tools which have to deal with the big data concepts accepted in the company.
Keywords: big data, unstructured data, SAP ERP, documentum
Procedia PDF Downloads 271
25506 Saving Energy at a Wastewater Treatment Plant through Electrical and Production Data Analysis
Authors: Adriano Araujo Carvalho, Arturo Alatrista Corrales
Abstract:
This paper shows how the analysis of electrical energy consumption and production data was used to find opportunities to save energy at the Taboada wastewater treatment plant in Callao, Peru. To access the data, independent data networks were used for both electrical and process instruments, and the data were analysed under an ISO 50001 energy audit, which thus considered Energy Performance Indexes for each process and followed a step-by-step guide presented in this text. Thanks to the aforementioned methodology and to data mining techniques applied to information gathered through electronic multimeters (conveniently placed on substation switchboards connected to a cloud network), it was possible to identify the performance of each process thoroughly and thus to evidence saving opportunities which were previously hidden. The data analysis brought both cost and energy reductions, allowing the plant to save significant resources and to be certified under ISO 50001.
Keywords: energy and production data analysis, energy management, ISO 50001, wastewater treatment plant energy analysis
Procedia PDF Downloads 194
25505 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network
Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar
Abstract:
A wireless sensor network is one of the most promising communication networks for monitoring remote environmental areas. In this network, all the sensor nodes communicate with each other via radio signals. The sensor nodes have the capability of sensing, data storage and processing. The sensor nodes collect information and pass it through neighbouring nodes to a particular node. Data collection and processing are done by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented by means of a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes. Information is aggregated at the cluster head nodes from the non-cluster head nodes and then transferred to the base station (or sink nodes). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network. Clustered data are selected for transfer to the base station instead of the whole of the information aggregated at the cluster head nodes. This reduces the battery consumption involved in managing the huge amount of data, and the network lifetime is extended to a great extent.
Keywords: artificial neural network, data clustering, self organization feature map, wireless sensor network
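A minimal sketch of the clustering idea, not the authors' protocol, is given below: hypothetical node readings are mapped onto a small grid of units with a winner-take-all simplification of SOFM training (no neighbourhood update), and the node closest to each unit is treated as that cluster's head.

```python
# Minimal self-organizing-map-style sketch (simplified, hypothetical data):
# cluster sensor readings onto a small map so each unit can nominate a cluster head.
import numpy as np

rng = np.random.default_rng(0)
readings = rng.random((100, 3))          # 100 nodes x 3 sensed quantities (assumed)

grid_units, dim = 4, readings.shape[1]
weights = rng.random((grid_units, dim))

lr, n_iter = 0.5, 2000
for t in range(n_iter):
    x = readings[rng.integers(len(readings))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))            # best-matching unit
    weights[bmu] += lr * (1 - t / n_iter) * (x - weights[bmu])   # pull BMU toward x

# assign each node to a map unit; the node closest to a unit acts as cluster head
assignments = np.argmin(((readings[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
for unit in range(grid_units):
    members = np.where(assignments == unit)[0]
    if members.size:
        head = members[np.argmin(((readings[members] - weights[unit]) ** 2).sum(-1))]
        print(f"unit {unit}: {members.size} nodes, cluster head = node {head}")
```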
Procedia PDF Downloads 517
25504 Review and Comparison of Associative Classification Data Mining Approaches
Authors: Suzan Wedyan
Abstract:
Data mining is one of the main phases in Knowledge Discovery in Databases (KDD), which is responsible for finding hidden and useful knowledge in databases. There are many different tasks in data mining, including regression, pattern recognition, clustering, classification, and association rule mining. In recent years, a promising data mining approach called associative classification (AC) has been proposed; AC integrates classification and association rule discovery to build classification models (classifiers). This paper surveys and critically compares several AC algorithms with reference to the different procedures used in each algorithm, such as rule learning, rule sorting, rule pruning, classifier building, and class allocation for test cases.
Keywords: associative classification, classification, data mining, learning, rule ranking, rule pruning, prediction
Procedia PDF Downloads 537
25503 Hierarchical Checkpoint Protocol in Data Grids
Authors: Rahma Souli-Jbali, Minyar Sassi Hidri, Rahma Ben Ayed
Abstract:
A grid of computing nodes has emerged as a representative means of connecting distributed computers or resources scattered all over the world for the purposes of computing and distributed storage. Since fault tolerance becomes complex due to the variable availability of resources in a decentralized grid environment, it can be used in connection with replication in data grids. The objective of our work is to present fault tolerance in data grids with a data replication-driven model based on clustering. The performance of the protocol is evaluated with the Omnet++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.
Keywords: data grids, fault tolerance, clustering, Chandy-Lamport
Procedia PDF Downloads 341
25502 An Observation of the Information Technology Research and Development Based on Article Data Mining: A Survey Study on Science Direct
Authors: Muhammet Dursun Kaya, Hasan Asil
Abstract:
One of the most important factors in research and development is a deep insight into the evolution of scientific development. State-of-the-art tools and instruments can considerably assist researchers, and many organizations around the world have become aware of the advantages of data mining for acquiring the knowledge hidden in unstructured data. This paper is an attempt to review the articles on information technology published in the past five years with the aid of data mining. A clustering approach was used to study these articles, and the research results revealed that three topics, namely health, innovation, and information systems, have captured the special attention of researchers.
Keywords: information technology, data mining, scientific development, clustering
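Clustering a collection of article texts by topic can be sketched in a few lines; the snippet below is an illustration with hypothetical abstracts and an off-the-shelf TF-IDF plus k-means pipeline, not the authors' actual processing of the Science Direct corpus.

```python
# Illustrative sketch (assumed details): cluster a handful of hypothetical
# abstracts by topic using TF-IDF features and k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "electronic health records improve patient outcomes",
    "open innovation drives product development in firms",
    "enterprise information systems support decision making",
    "telemedicine and mobile health adoption in hospitals",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for text, label in zip(abstracts, labels):
    print(label, text)
```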
Procedia PDF Downloads 278
25501 Security in Resource Constraints: Network Energy Efficient Encryption
Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy
Abstract:
Wireless nodes in a sensor network gather and process critical information and are designed to process and communicate it. The information flooding through such a network is critical for decision making and data processing, and preserving the integrity of that data, without compromising the processing and transmission capability of the network, is one of the most critical factors in wireless security. This paper presents a mechanism to securely transmit data over a chain of sensor nodes, without compromising the throughput of the network, utilizing the battery resources available at the sensor nodes.
Keywords: hybrid protocol, data integrity, lightweight encryption, neighbor based key sharing, sensor node data processing, Z-MAC
Procedia PDF Downloads 145
25500 Data Mining Techniques for Anti-Money Laundering
Authors: M. Sai Veerendra
Abstract:
Today, money laundering (ML) poses a serious threat not only to financial institutions but also to nations. This criminal activity is becoming more and more sophisticated and seems to have moved from the cliché of drug trafficking to financing terrorism, and surely not forgetting personal gain. Most financial institutions internationally have been implementing anti-money laundering (AML) solutions to fight investment fraud activities. However, traditional investigative techniques consume numerous man-hours. Recently, data mining approaches have been developed and are considered well-suited techniques for detecting ML activities. Within the scope of a collaboration project on developing a new data mining solution for the AML units of an international investment bank in Ireland, we survey recent data mining approaches for AML. In this paper, we present not only these approaches but also give an overview of the important factors in building data mining solutions for AML activities.
Keywords: data mining, clustering, money laundering, anti-money laundering solutions
Procedia PDF Downloads 537
25499 Acoustic Performance and Application of Three Personalized Sound-Absorbing Materials
Authors: Fangying Wang, Zhang Sanming, Ni Qian
Abstract:
In recent years, more and more personalized sound-absorbing materials have entered the Chinese room acoustical decoration market. The acoustic performance of three kinds of personalized sound-absorbing materials, flame-retardant flax fiber sound-absorbing cotton, eco-friendly sand acoustic panel and transparent micro-perforated panel (film), is tested by the reverberation room method. The sound absorption characteristic curves show that their performance matches or even exceeds that of traditional sound-absorbing materials. Through application in actual projects, these personalized sound-absorbing materials have also proved their sound absorption ability and unique decorative effect.
Keywords: acoustic performance, application prospect, personalized sound-absorbing materials
Procedia PDF Downloads 190
25498 Glycemic Control in Rice Consumption among Households with Diabetes Patients: The Role of Food Security
Authors: Chandanee Wasana Kalansooriya
Abstract:
Dietary behaviour is a crucial factor affecting diabetes control. With increasing rates of diabetes prevalence in Asian countries, examining their dietary patterns, which are largely based on rice, is a timely requirement. It has been identified that higher consumption of some rice varieties is associated with an increased risk of type 2 diabetes. Although diabetes patients are advised to consume healthier rice varieties, which contain low glycemic levels, several conditions, one of which is food insecurity, make it difficult for them to follow those healthy dietary guidelines. Hence this study investigates how food security affects the decisions on rice consumption within diabetes-affected households, using a sample from Sri Lanka, a country where rice is considered the staple food and which records the highest diabetes prevalence rate in South Asia. The study uses data from the Household Income and Expenditure Survey 2016, a nationally representative survey conducted by the Department of Census and Statistics, Sri Lanka. The survey used a two-stage stratified sampling method to cover different sectors and districts of the country and collected micro-data on demographics, health, income and expenditures of different categories. The study uses data from 2547 households which include one or more diabetes patients, based on the self-recorded health status. The Household Dietary Diversity Score (HDDS), constructed from twelve food groups, is used to measure the level of food security. Rice is categorized into three groups according to its Glycemic Index (GI), high GI, medium GI and low GI, and the likelihood of and impact made by food security on each rice consumption category are estimated using a two-part model. The share of each rice category in total rice consumption is taken as the dependent variable to exclude the endogeneity issue between rice consumption and the HDDS. The results indicate that the consumption of medium GI rice is likely to increase with increasing household food security, but that of low GI varieties is not. Households in rural and estate sectors are less likely, and the Tamil ethnic group is more likely, to consume low GI rice varieties. Further, an increase in food security significantly decreases the consumption share of low GI rice, while it increases the share of medium GI varieties. The consumption share of low GI rice is largely affected by ethnic variability. The effects of food security on the likelihood of consuming high GI rice varieties and on changing their shares are statistically insignificant. Accordingly, the study concludes that a higher level of food security does not ensure that diabetes patients consume healthy rice varieties or reduce the consumption of unhealthy varieties. Hence, policy attention must be directed towards educating people to make healthy dietary choices. Further, the study provides room for further studies, as it reveals considerable ethnic and sectoral differences in making healthy dietary decisions.
Keywords: diabetes, food security, glycemic index, rice consumption
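The two-part model mentioned in the abstract, one equation for whether a household consumes a rice category at all and a second for the consumption share among consuming households, can be sketched as follows on synthetic data; the variable names and coefficients are assumptions, not the survey results.

```python
# Two-part model sketch on synthetic data (assumed variables, not the survey):
# part 1 is a logit for any low-GI rice consumption, part 2 an OLS on the share.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"hdds": rng.integers(3, 13, n),        # dietary diversity score
                   "rural": rng.integers(0, 2, n)})
prob = 1 / (1 + np.exp(-(-2 + 0.2 * df.hdds - 0.5 * df.rural)))
consumes = rng.random(n) < prob
share = np.where(consumes, np.clip(0.1 + 0.02 * df.hdds + rng.normal(0, 0.05, n), 0, 1), 0)
df["consumes_low_gi"], df["low_gi_share"] = consumes.astype(int), share

X = sm.add_constant(df[["hdds", "rural"]])
part1 = sm.Logit(df.consumes_low_gi, X).fit(disp=0)          # any consumption?
pos = df.consumes_low_gi == 1
part2 = sm.OLS(df.loc[pos, "low_gi_share"], X[pos]).fit()    # share if consumed
print(part1.params, part2.params, sep="\n")
```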
Procedia PDF Downloads 102
25497 Development of New Technology Evaluation Model by Using Patent Information and Customers' Review Data
Authors: Kisik Song, Kyuwoong Kim, Sungjoo Lee
Abstract:
Many global firms and corporations derive new technologies and opportunities by identifying vacant technology through patent analysis. However, previous studies have failed to focus on technologies that promise continuous growth in industrial fields. Most studies that derive new technology opportunities do not test practical effectiveness. Since previous studies depended on expert judgment, evaluating new technologies based on patent analysis became costly and time-consuming. Therefore, this research suggests a quantitative and systematic approach to technology evaluation indicators that uses patent data together with data from customer communities. The first step involves collecting these two types of data. The data are used to construct evaluation indicators, which are then applied to the evaluation of new technologies. This type of data mining allows a new method of technology evaluation and a better prediction of how new technologies are adopted.
Keywords: data mining, evaluating new technology, technology opportunity, patent analysis
Procedia PDF Downloads 377
25496 Anomaly Detection Based on System Log Data
Authors: M. Kamel, A. Hoayek, M. Batton-Hubert
Abstract:
With the increase in network virtualization and the disparity of vendors, the continuous monitoring and detection of anomalies cannot rely on static rules. An advanced analytical methodology is needed to discriminate between ordinary events and unusual anomalies. In this paper, we focus on log data (textual data), which is a crucial source of information on network performance. We then introduce an algorithm used as a pipeline to help with the pretreatment of such data, grouping it into patterns and dynamically labelling each pattern as an anomaly or not. Such tools will provide users and experts with a continuous, real-time log monitoring capability to detect anomalies and failures in the underlying system that can affect performance. An application to real-world data illustrates the algorithm.
Keywords: logs, anomaly detection, ML, scoring, NLP
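The grouping-into-patterns step can be illustrated with a toy example, not the authors' pipeline: variable fields such as IP addresses and numbers are masked so that log lines collapse into templates, and rare templates are flagged as candidate anomalies. The example log lines and the rarity cut-off are assumptions.

```python
# Simplified illustration: mask variable fields in log lines to obtain patterns,
# then flag rare patterns as candidate anomalies.
import re
from collections import Counter

logs = [
    "2023-01-02 connect from 10.0.0.5 port 443",
    "2023-01-02 connect from 10.0.0.7 port 443",
    "2023-01-02 disk failure on /dev/sda",
    "2023-01-03 connect from 10.0.0.9 port 443",
]

def to_pattern(line: str) -> str:
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)   # mask IP addresses
    line = re.sub(r"\d+", "<NUM>", line)                 # mask remaining numbers
    return line

counts = Counter(to_pattern(l) for l in logs)
threshold = 0.3 * len(logs)                              # assumed rarity cut-off
for pattern, c in counts.items():
    print("ANOMALY?" if c <= threshold else "normal  ", c, pattern)
```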
Procedia PDF Downloads 94
25495 EnumTree: An Enumerative Biclustering Algorithm for DNA Microarray Data
Authors: Haifa Ben Saber, Mourad Elloumi
Abstract:
In a number of domains, as in DNA microarray data analysis, we need to cluster the rows (genes) and columns (conditions) of a data matrix simultaneously to identify groups of constant rows associated with a group of columns. This kind of clustering is called biclustering. Biclustering algorithms are extensively used in DNA microarray data analysis, and more effective biclustering algorithms are highly desirable and needed. We introduce a new algorithm, called Enumerative tree (EnumTree), for the biclustering of binary microarray data. EnumTree is an algorithm adopting the approach of enumerating biclusters, and it extracts all biclusters of consistently good quality. The main idea of EnumTree is the construction of a new tree structure to adequately represent the different biclusters discovered during the enumeration process. The algorithm adopts the strategy of handling all biclusters at a time. The performance of the proposed algorithm is assessed using both synthetic and real DNA microarray data; our algorithm outperforms other biclustering algorithms for binary microarray data and finds biclusters with different numbers of rows. Moreover, we test the biological significance using a gene annotation web tool to show that our proposed method is able to produce biologically relevant biclusters.
Keywords: DNA microarray, biclustering, gene expression data, tree, data mining
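Biclustering itself, grouping rows and columns at the same time, can be demonstrated with a generic baseline; the sketch below uses scikit-learn's spectral co-clustering on a synthetic binary matrix with planted blocks and is not the EnumTree algorithm.

```python
# Generic biclustering baseline (not EnumTree): recover planted row/column
# blocks in a synthetic binary gene-by-condition matrix.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
data = (rng.random((30, 20)) < 0.05).astype(int)   # sparse binary background
data[:10, :5] = 1                                   # planted bicluster 1
data[10:20, 5:12] = 1                               # planted bicluster 2
data[20:, 12:] = 1                                  # planted bicluster 3

model = SpectralCoclustering(n_clusters=3, random_state=0).fit(data)
for k in range(3):
    rows = np.where(model.rows_[k])[0]
    cols = np.where(model.columns_[k])[0]
    print(f"bicluster {k}: {len(rows)} genes x {len(cols)} conditions")
```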
Procedia PDF Downloads 372
25494 The Impact of Financial Reporting on Sustainability
Authors: Lynn Ruggieri
Abstract:
The worldwide pandemic has only increased sustainability awareness. The public is demanding that businesses be held accountable for their impact on the environment. While financial data enjoy uniformity in reporting requirements, there are no uniform reporting requirements for non-financial data. Europe is leading the way, with some standards being implemented for reporting non-financial sustainability data; however, there is no uniformity globally, and without uniformity there is no clear understanding of what information to include and how to disclose it. Sustainability reporting will provide important information to stakeholders and will enable businesses to understand their impact on the environment. Therefore, there is a crucial need for these data. This paper looks at the history of sustainability reporting in the countries of the European Union and throughout the world and makes a case for worldwide reporting requirements for sustainability.
Keywords: financial reporting, non-financial data, sustainability, global financial reporting
Procedia PDF Downloads 178
25493 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies
Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk
Abstract:
The application of AI-powered algorithms in healthcare continues to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and their privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant focus on the security risks to, and the protection of, the security and privacy of healthcare data, leading to escalated analyses and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, this project proposes AI-powered safeguards and policies/laws to protect the privacy of healthcare data. The project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods like federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
Keywords: data privacy, artificial intelligence (AI), healthcare AI, data sharing, healthcare organizations (HCOs)
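One of the methods listed, differential privacy, can be illustrated with the basic Laplace mechanism: a counting query over patient records is released with calibrated noise so that any single record has a bounded influence on the output. The records and the privacy budget epsilon below are purely illustrative.

```python
# Laplace mechanism sketch: a differentially private count of patients with a
# condition. A counting query has sensitivity 1, so noise scale is 1/epsilon.
import numpy as np

def dp_count(values, epsilon: float) -> float:
    true_count = float(np.sum(values))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

has_condition = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # hypothetical records
print("true:", has_condition.sum(),
      "private:", round(dp_count(has_condition, epsilon=0.5), 1))
```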
Procedia PDF Downloads 93
25492 Mapping Tunnelling Parameters for Global Optimization in Big Data via Dye Laser Simulation
Authors: Sahil Imtiyaz
Abstract:
One of the biggest challenges has emerged from the ever-expanding, dynamic, and instantaneously changing space of Big Data; finding a data point in this space and extracting wisdom from it is a hard task. In this paper, we reduce the space of big data to a Hamiltonian formalism that is in concordance with the Ising model. For this formulation, we simulate the system using a dye laser in FORTRAN and analyse the dynamics of the data point in the energy well of a rhodium atom. After mapping the photon intensity and pulse width to energy and potential, we concluded that as we increase the energy there is also an increase in the probability of tunnelling up to some point; it then starts decreasing and shows randomizing behaviour. This is due to decoherence with the environment and hence a loss of 'quantumness'. It indicates the efficiency parameter and the extent of quantum evolution. The results are strongly encouraging in favour of the use of the 'topological property' as a source of information instead of the qubit.
Keywords: big data, optimization, quantum evolution, Hamiltonian, dye laser, fermionic computations
Procedia PDF Downloads 194
25491 Development of Anti-Fouling Surface Features Bioinspired by the Patterned Micro-Textures of the Scophthalmus rhombus (Brill)
Authors: Ivan Maguire, Alan Barrett, Alex Forte, Sandra Kwiatkowska, Rohit Mishra, Jens Ducrèe, Fiona Regan
Abstract:
Biofouling is the gradual accumulation of organisms on submerged surfaces, and biomimetics refers to the use and imitation of principles copied from nature. Biomimetics has found interest across many commercial disciplines. Among many biological objects and their functions, aquatic animals deserve special attention due to their antimicrobial capabilities resulting from chemical composition, surface topography or other behavioural defences, which can be used as an inspiration for antifouling technology. Marine biofouling has detrimental effects on seagoing vessels, both commercial and leisure, as well as on oceanographic sensors, offshore drilling rigs, and aquaculture installations. Sensor optics, membranes, housings and platforms can become fouled, leading to problems with sensor performance and data integrity. While many anti-fouling solutions are currently being investigated as a cost-cutting measure, biofouling settlement may also be prevented by creating a surface that does not satisfy the settlement conditions. Brill (Scophthalmus rhombus) is a small flatfish occurring in the marine waters of the Mediterranean as well as Norway and Iceland. It inhabits sandy and muddy coastal waters from 5 to 80 meters. Its skin colour changes depending on the environment, but generally it is brownish with light and dark freckles and a creamy underside. Brill is oval in shape and its flesh is white. The aim of this study is to translate the unique micro-topography of the brill scale into the design of a marine-inspired biomimetic surface coating and to test it against a typical fouling organism. Following an extensive SEM study of the scale topography of the brill (Scophthalmus rhombus) and the settlement behaviour of the diatom species Psammodictyon sp., two state-of-the-art antifouling surface solutions were designed and investigated: a brill-scale bioinspired surface pattern platform (BFD), and a generic, uniformly arrayed, circular micropillar platform (MPD) with offsets based on diatom settlement behaviour. The BFD approach consists of different ~5 μm by ~90 μm brill-replica patterns, grown to a height of 5 μm, in a linear array pattern. The MPD approach utilises hexagonally packed cylindrical pillars 10.6 μm in diameter, grown to a height of 5 μm, with a vertical offset of 15 μm and a horizontal offset of 26.6 μm. Photolithography was employed for microstructure growth, with a polydimethylsiloxane (PDMS) chip used as a testbed for diatom adhesion on both platforms. Settlement and adhesion tests were performed on this PDMS microfluidic chip by subjecting it to centrifugal force via an in-house developed 'spin-stand', which features a motor in combination with a high-resolution camera for observing diatom release from the PDMS material in real time. Diatom adhesion strength can therefore be determined from the centrifugal force generated at varying rotational speeds. It is hoped that both the replica and bio-inspired solutions will give comparable anti-fouling results to these synthetic surfaces, while also helping to determine whether anti-fouling solutions should predominantly pursue fully bioreplica-based or bioinspired, synthetically based designs.
Keywords: anti-fouling applications, bio-inspired microstructures, centrifugal microfluidics, surface modification
Procedia PDF Downloads 317
25490 Applying Different Steganography Techniques in Cloud Computing Technology to Improve Cloud Data Privacy and Security Issues
Authors: Muhammad Muhammad Suleiman
Abstract:
Cloud computing is a versatile concept that refers to a service that allows users to outsource their data without having to worry about local storage issues. However, the most pressing issue to be addressed is maintaining a secure and reliable data repository rather than relying on untrustworthy service providers. In this study, we look at how steganography approaches, in collaboration with digital watermarking, can greatly improve the system's effectiveness and data security when used for cloud computing. The main requirement of such frameworks, where data is transferred or exchanged between servers and users, is safe data management in cloud environments. Steganography in the cloud is among the most effective methods for safe communication. Steganography is a method of writing coded messages in such a way that only the sender and recipient can safely interpret and display the information hidden in the communication channel. This study presents a new text steganography method for hiding a loaded hidden English text file in a cover English text file to ensure data protection in cloud computing. Data protection, data hiding capability, and time were all improved using the proposed technique.
Keywords: cloud computing, steganography, information hiding, cloud storage, security
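Text steganography in general, hiding one text inside another without visibly changing the cover, can be illustrated with a simple zero-width-character scheme; this is a generic example, not the method proposed in the paper.

```python
# Generic text-steganography illustration: hide the bits of a secret message
# inside a cover text using zero-width characters, invisible in the rendered text.
ZERO, ONE = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    payload = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return cover + payload                       # invisible suffix

def reveal(stego: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in stego if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego_text = hide("Quarterly report attached.", "key=42")
assert reveal(stego_text) == "key=42"
```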
Procedia PDF Downloads 192
25489 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics
Authors: Farhad Asadi, Mohammad Javad Mollakazemi
Abstract:
In this paper, Bayesian online inference in models of data series is constructed by a change-point algorithm, which separates the observed time series into independent segments and studies the change and variation of the regime of the data together with the related statistical characteristics. Variations in the statistical characteristics of time series data often represent distinct phenomena in some dynamical system, like a change of brain state reflected in EEG signal measurements or a change in an important regime of the data in many dynamical systems. In this paper, a prediction algorithm for studying change-point locations in some time series data is simulated. It is verified that the pattern of the proposed data distribution is an important factor for simpler and smoother fluctuation of the hazard rate parameter and also for better identification of change-point locations. Finally, the conditions under which the time series distribution affects the factors in this approach are explained and validated with different time series databases for some dynamical systems.
Keywords: time series, fluctuation in statistical characteristics, optimal learning, change-point algorithm
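What a change-point algorithm is asked to do can be shown on a synthetic signal with known regime changes. The sketch below uses the off-line PELT detector from the third-party ruptures package rather than the Bayesian online scheme studied in the paper; the penalty value is an assumed tuning choice.

```python
# Change-point detection on a synthetic signal with a mean shift and a
# variance change, using the off-line PELT detector from "ruptures".
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 1.0, 200),    # regime 1
                         rng.normal(3.0, 1.0, 150),    # regime 2: mean shift
                         rng.normal(0.5, 2.0, 150)])   # regime 3: variance change

algo = rpt.Pelt(model="rbf").fit(signal)
breakpoints = algo.predict(pen=10)      # penalty is an assumed tuning choice
print("estimated change points:", breakpoints)   # last index is the series length
```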
Procedia PDF Downloads 427
25488 2D Point Clouds Features from Radar for Helicopter Classification
Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres
Abstract:
This paper analyzes the ability of 2D point cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length, the number of blades of the helicopters, or the period of their micro-Doppler signatures. It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point cloud features are very useful for classifying helicopters from radar signals.
Keywords: helicopter classification, point clouds features, radar, supervised classifiers
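The general recipe, summarising each 2D point cloud by a small feature vector and passing it to a supervised classifier, can be sketched as follows; the geometric features, the synthetic clouds and the random-forest classifier are assumptions for illustration, not the authors' feature set or classifiers.

```python
# Illustrative sketch (assumed features, synthetic data): summarise each 2-D
# point cloud by simple geometric statistics and classify with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cloud_features(points: np.ndarray) -> np.ndarray:
    """points: (n, 2) array -> bounding box, spread and elongation statistics."""
    span = points.max(axis=0) - points.min(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    return np.array([span[0], span[1], points.std(), eigvals[0] / (eigvals[1] + 1e-9)])

rng = np.random.default_rng(0)
clouds, labels = [], []
for model_id, scale in enumerate([1.0, 2.0, 3.0]):    # three hypothetical helicopter types
    for _ in range(30):
        clouds.append(rng.normal(0, scale, (200, 2)) * np.array([1.0, 0.3 * scale]))
        labels.append(model_id)

X, y = np.array([cloud_features(c) for c in clouds]), np.array(labels)
clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])   # train on half
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```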
Procedia PDF Downloads 227
25487 Determination of the Risks of Heart Attack at the First Stage as Well as Their Control and Resource Planning with the Method of Data Mining
Authors: İbrahim Kara, Seher Arslankaya
Abstract:
Frequently preferred in the field of engineering in particular, data mining has now begun to be used in the field of health as well, since the data in the health sector have reached great dimensions. With data mining, the aim is to reveal models from large amounts of raw data that agree with the purpose and to search for the rules and relationships which will enable one to make predictions about the future from a large data set. It helps the decision-maker to find the relationships among the data that emerge at the stage of decision-making. In this study, the aim is to determine the risk of heart attack at the first stage, to control it, and to carry out its resource planning with the method of data mining. Through the early and correct diagnosis of heart attacks, the aim is to reveal the factors which affect the disease, to protect health and choose the right treatment methods, to reduce the costs of health expenditures, and to shorten the duration of patients' stays in hospital. In this way, the diagnosis and treatment costs of a heart attack will be scrutinized, which will be useful for determining the risk of the disease at the first stage, controlling it, and planning its resources.
Keywords: data mining, decision support systems, heart attack, health sector
Procedia PDF Downloads 356
25486 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of the current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
Keywords: count data, meta-analytic prior, negative binomial, Poisson
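The basic power-prior mechanics behind such borrowing can be sketched in conjugate form: with a Gamma prior and Poisson counts, historical events and exposure simply enter the posterior down-weighted by a borrowing weight a0. The sketch below uses a fixed a0 and made-up numbers, so it is the plain power prior rather than the modified power prior analysed in the paper.

```python
# Power-prior sketch for Poisson counts with a Gamma(a, b) prior: historical
# events and exposure are down-weighted by a0. All numbers are hypothetical.
from scipy import stats

a, b = 0.5, 0.001                 # weakly-informative Gamma prior on the event rate
hist_events, hist_n = 180, 120    # historical control arm (assumed)
cur_events, cur_n = 65, 50        # current control arm (assumed)

def power_prior_posterior(a0: float):
    """Posterior Gamma parameters when historical data enter with weight a0."""
    shape = a + a0 * hist_events + cur_events
    rate = b + a0 * hist_n + cur_n
    return stats.gamma(shape, scale=1.0 / rate)

for a0 in (0.0, 0.5, 1.0):        # no, partial and full borrowing
    post = power_prior_posterior(a0)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"a0={a0}: posterior mean rate {post.mean():.2f}, 95% CrI [{lo:.2f}, {hi:.2f}]")
```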
Procedia PDF Downloads 118
25485 Strategic Citizen Participation in Applied Planning Investigations: How Planners Use Etic and Emic Community Input Perspectives to Fill-in the Gaps in Their Analysis
Authors: John Gaber
Abstract:
Planners regularly use citizen input as empirical data to help them better understand community issues they know very little about. This type of community data is based on the lived experiences of local residents and is known as "emic" data. What is becoming more common practice for planners is their use of data from local experts and stakeholders (known as "etic" data or the outsider perspective) to help them fill in the gaps in their analysis of applied planning research projects. Utilizing international Health Impact Assessment (HIA) data, I look at who planners invite to their citizen input investigations. Research presented in this paper shows that planners access a wide range of emic and etic community perspectives in their search for the "community's view." The paper concludes with how planners can chart out a new empirical path in their execution of emic/etic citizen participation strategies in their applied planning research projects.
Keywords: citizen participation, emic data, etic data, Health Impact Assessment (HIA)
Procedia PDF Downloads 484
25484 Stress-Strain Relation for Human Trabecular Bone Based on Nanoindentation Measurements
Authors: Marek Pawlikowski, Krzysztof Jankowski, Konstanty Skalski, Anna Makuch
Abstract:
The nanoindentation, or depth-sensing indentation (DSI), technique has proven to be very useful for measuring the mechanical properties of various tissues at a micro-scale. Bone tissue, both trabecular and cortical, is one of the tissues most commonly tested by means of DSI. Most often, such tests on bone samples are carried out to compare the mechanical properties of lamellar and interlamellar bone, osteonal bone, and compact and cancellous bone. In this paper, a relation between stress and strain for human trabecular bone is presented, and the formulation of a constitutive model for human trabecular bone is based on the results of nanoindentation tests. In the study, the approach proposed by Oliver and Pharr is adapted. The tests were carried out on samples of trabecular tissue extracted from human femoral heads. The heads were harvested during surgeries for artificial hip joint implantation. Before sample preparation, the heads were kept in 95% alcohol at a temperature of 4 degrees Celsius. The cubic samples cut out of the heads were stored in the same conditions. The dimensions of the specimens were 25 mm x 25 mm x 20 mm. A total of 20 samples were tested. The age range of the donors was between 56 and 83 years. The tests were conducted with a spherical indenter tip of diameter 0.200 mm. The maximum load was P = 500 mN and the loading rate 500 mN/min. The data obtained from the DSI tests only allow one to determine bone behaviour in terms of nanoindentation force vs. nanoindentation depth. However, it is more interesting and useful to know the characteristics of trabecular bone in the stress-strain domain, which allows one to simulate trabecular bone behaviour in a more realistic way. The stress-strain curves obtained in the study show a relation between age and the mechanical behaviour of trabecular bone. It was also observed that the bone matrix of trabecular tissue shows an ability to absorb energy.
Keywords: constitutive model, mechanical behaviour, nanoindentation, trabecular bone
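The Oliver-Pharr analysis referred to above derives hardness and reduced modulus from the peak load, the projected contact area and the unloading stiffness; the relations are standard, but every number in the sketch below is illustrative rather than taken from the study.

```python
# Standard Oliver-Pharr relations with purely illustrative numbers: hardness
# and reduced modulus from peak load, contact area and unloading stiffness.
import math

P_max = 500e-3          # peak load [N] (500 mN, as in the test protocol)
S = 2.0e5               # unloading contact stiffness dP/dh [N/m] (assumed)
A_c = 4.0e-9            # projected contact area [m^2] (assumed)
beta = 1.0              # geometry factor (~1 for an axisymmetric indenter)

H = P_max / A_c                                                   # hardness [Pa]
E_r = (math.sqrt(math.pi) / (2.0 * beta)) * S / math.sqrt(A_c)    # reduced modulus [Pa]

# sample modulus from the reduced modulus, assuming indenter and Poisson ratios
E_i, nu_i, nu_s = 1141e9, 0.07, 0.3          # diamond indenter, assumed Poisson ratios
E_s = (1 - nu_s**2) / (1.0 / E_r - (1 - nu_i**2) / E_i)
print(f"H = {H/1e6:.0f} MPa, E_r = {E_r/1e9:.2f} GPa, E_s = {E_s/1e9:.2f} GPa")
```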
Procedia PDF Downloads 221
25483 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network
Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang
Abstract:
As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but the lack of datasets leads to imperfect model learning. By analysing the data-scale requirements of deep learning and aiming at the application to GUI generation, it is found that collecting a GUI dataset is a time-consuming and labour-intensive project, which makes it difficult to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale datasets to produce a large number of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent neural network can learn the sequence relationships and characteristics of the data and make the generative adversarial network generate reasonable data, thereby expanding the Rico dataset. Relying on this network structure, the characteristics of the collected data can be well analysed, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.
Keywords: GUI, deep learning, GAN, data augmentation
Procedia PDF Downloads 184
25482 Modelling Rainfall-Induced Shallow Landslides in the Northern New South Wales
Authors: S. Ravindran, Y. Liu, I. Gratchev, D. Jeng
Abstract:
Rainfall-induced shallow landslides are common in northern New South Wales (NSW), Australia. From 2009 to 2017, around 105 rainfall-induced landslides occurred along road corridors and caused temporary road closures in northern NSW. The rainfall causing shallow landslides has different distributions, with intensity patterns varying from uniform and normal to decreasing and increasing. The duration of rainfall varied from one day to 18 days according to the historical data. The objective of this research is to analyse the slope instability of some of the sites in northern NSW under varying cumulative rainfall using SLOPE/W and SEEP/W and to compare the results with field data on rainfall causing shallow landslides. Rainfall and topographical data from public authorities and soil data obtained from laboratory tests will be used for this modelling. In accordance with the field data, shallow landslides are likely if the cumulative rainfall is between 100 mm and 400 mm.
Keywords: landslides, modelling, rainfall, suction
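As background to the stability analyses, the way rising pore pressure from accumulated rainfall lowers the factor of safety of a shallow slip surface can be shown with the textbook infinite-slope expression; the study itself relies on SLOPE/W and SEEP/W, and all soil parameters in the sketch below are assumptions.

```python
# Textbook infinite-slope factor of safety with pore pressure u on the slip
# plane: FS = [c' + (gamma*z*cos^2(beta) - u)*tan(phi')] / (gamma*z*sin(beta)*cos(beta)).
import math

def infinite_slope_fs(c_eff, phi_eff_deg, gamma, z, beta_deg, pore_pressure):
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = gamma * z * math.cos(beta) ** 2            # total normal stress [kPa]
    shear_stress = gamma * z * math.sin(beta) * math.cos(beta)
    resisting = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
    return resisting / shear_stress

# 1.5 m deep slip surface on a 30-degree slope; pore pressure rises as rain accumulates
for u in (0.0, 5.0, 10.0, 15.0):   # pore pressure [kPa], assumed to grow with rainfall
    fs = infinite_slope_fs(c_eff=5.0, phi_eff_deg=30.0, gamma=18.0, z=1.5,
                           beta_deg=30.0, pore_pressure=u)
    print(f"u = {u:4.1f} kPa -> FS = {fs:.2f}")
```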
Procedia PDF Downloads 180