Search results for: Atomic data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25015

24385 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think of extrapolation of data as extending information and conclusions from one estimand to another. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. The evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but also risk-benefit evaluation.
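
A minimal sketch (not from the paper) of the two cluster weighting schemes mentioned above, applied to simulated clusters of varying size; the simulation settings and variable names are illustrative assumptions, not the authors' estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hierarchical clustered data: subjects (clusters) with a varying
# number of within-subject events (e.g., bleeding episodes per subject).
n_subjects = 30
cluster_sizes = rng.integers(1, 10, size=n_subjects)
subject_means = rng.normal(loc=2.0, scale=0.5, size=n_subjects)
clusters = [rng.normal(mu, 1.0, size=n) for mu, n in zip(subject_means, cluster_sizes)]

cluster_means = np.array([c.mean() for c in clusters])
sizes = cluster_sizes.astype(float)

# Scheme 1: clusters equally weighted.
est_equal = cluster_means.mean()
# Scheme 2: clusters weighted proportionally to cluster size
# (equivalent to pooling all within-subject observations).
est_weighted = np.average(cluster_means, weights=sizes)

print(f"equal weights:             {est_equal:.3f}")
print(f"size-proportional weights: {est_weighted:.3f}")
```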

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 122
24384 Determination of the Structural Parameters of Calcium Phosphate for Biomedical Use

Authors: María Magdalena Méndez-González, Miguel García Rocha, Carlos Manuel Yermo De la Cruz

Abstract:

Calcium phosphate (Ca5(PO4)3(X)) is widely used in orthopedic applications, typically as powders and granules. However, in bone it is present in the form of nanometric needles, 60 nm in length, with a non-stoichiometric apatite phase containing CO₃²⁻, Na⁺, OH⁻, F⁻, and other ions in a matrix of collagen fibers. Crystal size, morphology control, and interaction with cells are essential for the development of nanotechnology. The structural results for calcium phosphate synthesized by chemical precipitation, with a crystal size of 22.85 nm, are presented in this paper. The calcium phosphate powders were analyzed by X-ray diffraction, energy-dispersive spectroscopy (EDS), FT-IR infrared spectroscopy, and transmission electron microscopy. Lattice parameters, atomic positions, the indexing of the planes, and the FWHM (full width at half maximum) were obtained. The crystal size was also calculated using the Scherrer equation d(hkl) = cλ/(β cos θ), where c is a constant related to the shape of the crystal, λ is the wavelength of the radiation (1.54060 Å for a copper anode), θ is the Bragg diffraction angle, and β is the full width at half maximum of the most intense peak. A diffraction pattern corresponding to the hydroxyapatite phase of calcium phosphate, with a hexagonal crystal system, was obtained. It belongs to the space group P6₃/m with lattice parameters a = 9.4394 Å and c = 6.8861 Å. The most intense peak occurs at 2θ = 31.55° (FWHM = 0.4798°), with a preferred orientation along (121). The intensity difference between the experimental data and the calculated values is attributable to the temperature at which the sintering was performed. The intensity of the highest peak is at an angle of 2θ = 32.11°. The structure of the calcium phosphate obtained was a hexagonal configuration. The changes in the peak intensities of the diffraction pattern and in the lattice parameters indicate the possible presence of a dopant. Infrared spectra showed that each calcium atom is surrounded by a tetrahedron of oxygen and hydrogen. The unit cell corresponds to hydroxyapatite, and transmission electron microscopy showed a crystal morphology corresponding to the hexagonal phase with preferential growth along the c-plane.
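
As a quick numerical check (not part of the paper), the Scherrer estimate can be reproduced from the reported peak values. The shape constant c ≈ 0.9 is an assumption, and no instrumental broadening correction is applied, so the result will not necessarily match the reported 22.85 nm:

```python
import numpy as np

# Scherrer equation: d = c * lam / (beta * cos(theta))
c = 0.9                   # crystal shape constant (assumed; not given in the abstract)
lam = 0.154060            # Cu K-alpha wavelength in nm (1.54060 Angstrom)
two_theta_deg = 31.55     # position of the most intense peak (from the abstract)
fwhm_deg = 0.4798         # full width at half maximum of that peak (from the abstract)

theta = np.radians(two_theta_deg / 2.0)   # Bragg angle in radians
beta = np.radians(fwhm_deg)               # FWHM converted to radians

d = c * lam / (beta * np.cos(theta))
print(f"Scherrer crystallite size: {d:.1f} nm")
```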

Keywords: structure, nanoparticles, calcium phosphate, metallurgical and materials engineering

Procedia PDF Downloads 490
24383 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted and continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as TDRSS of the USA and EDRS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system is proposed for Turkey for exchanging low data rate information (i.e., TTC) with Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, the justification of this attempt is given, demonstrating enhancements in link duration. The preference for RF communication over laser communication is also discussed. Then, the preferred communication GEOs – including TURKSAT4A, which already belongs to Turkey – are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on board the LEO satellite is also given.
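
To make the link-budget step concrete, a minimal free-space path loss calculation for a LEO-to-GEO link is sketched below; the slant range, frequency, EIRP, gain, and loss figures are illustrative assumptions, not values from the paper:

```python
import math

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
d_km = 40_000.0       # assumed LEO-to-GEO slant range
f_ghz = 2.2           # assumed S-band TTC frequency
fspl_db = 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

# Received power from an assumed EIRP and receive antenna gain.
eirp_dbw = 15.0       # LEO transmitter EIRP (assumed)
rx_gain_dbi = 30.0    # GEO receive antenna gain (assumed)
misc_losses_db = 3.0  # pointing/polarisation margin (assumed)

rx_power_dbw = eirp_dbw + rx_gain_dbi - fspl_db - misc_losses_db
print(f"FSPL: {fspl_db:.1f} dB, received power: {rx_power_dbw:.1f} dBW")
```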

Keywords: communication, GEO satellite, data relay system, coverage

Procedia PDF Downloads 420
24382 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product

Authors: Tanawat Hongthai, Dusit Thanapatay

Abstract:

This paper presents the development of encrypted near field communication (NFC) data exchange format transmission in an NFC passive tag to assess the feasibility of implementing genuine-product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC data exchange format (NDEF), whose data can be automatically and partially updated whenever an NFC field is present.
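
The abstract does not disclose the cipher or key scheme used; purely as an illustration, the sketch below encrypts a hypothetical product identifier with AES-GCM (Python cryptography package) before it would be written as the NDEF record payload on the Type 2 tag. Key distribution and the NFC write itself are out of scope here:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: the paper does not specify its cipher or key management.
key = AESGCM.generate_key(bit_length=128)   # shared secret between issuer and verifier
aesgcm = AESGCM(key)

product_id = b"SN-000123456"                # hypothetical product identifier
nonce = os.urandom(12)                      # 96-bit nonce, unique per tag write
ciphertext = aesgcm.encrypt(nonce, product_id, associated_data=b"NDEF")

# nonce + ciphertext would be stored as the NDEF record payload on the tag.
payload = nonce + ciphertext

# Verifier side: read the payload back, then authenticate and decrypt it.
recovered = aesgcm.decrypt(payload[:12], payload[12:], associated_data=b"NDEF")
assert recovered == product_id
```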

Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC

Procedia PDF Downloads 263
24381 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can easily be copied or tampered with, so security becomes an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark inevitably influences the quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be aware of the existence of the embedded watermark even though the watermarked image differs slightly from the original image. Since VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean-approximation scheme to speed up the data-hiding process. The watermarks hidden in the image can be gray-level, bi-level, or color images; text can also be embedded as a watermark. To test the robustness of the system, Photoshop is used to apply sharpening, cropping, and alteration, and we check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
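
A minimal sketch of the partial distortion search (PDS) idea used to speed up VQ encoding: the squared-error accumulation for a candidate codeword stops as soon as it exceeds the best distortion found so far. The codebook and block values are illustrative, and this is only the nearest-codeword search, not the full data-hiding scheme:

```python
import numpy as np

def pds_encode(block, codebook):
    """Nearest-codeword search with partial distortion search (early rejection)."""
    best_idx, best_dist = 0, np.inf
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for b, c in zip(block, codeword):
            dist += (b - c) ** 2
            if dist >= best_dist:      # partial distortion already too large
                break
        else:                           # completed the sum and improved the best
            best_idx, best_dist = idx, dist
    return best_idx

# Toy usage: 4x4 image blocks flattened into 16-dimensional vectors.
rng = np.random.default_rng(1)
codebook = rng.integers(0, 256, size=(64, 16)).astype(float)
block = rng.integers(0, 256, size=16).astype(float)
print("nearest codeword index:", pds_encode(block, codebook))
```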

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 345
24380 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
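
A condensed sketch of the pipeline described above for a single sensor: an LSTM autoencoder trained on normal windows, a difference signal between input and reconstruction, and a random forest on features of that signal. Window length, layer sizes, the two summary features, and the random data are illustrative assumptions; the paper's correlation-based sensor selection is omitted:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.ensemble import RandomForestClassifier

TIMESTEPS, N_FEATURES = 60, 1   # window length and channels per sensor (assumed)

def build_lstm_autoencoder():
    """One LSTM autoencoder per sensor, trained to reconstruct normal windows."""
    inp = keras.Input(shape=(TIMESTEPS, N_FEATURES))
    z = layers.LSTM(32)(inp)                        # encoder
    z = layers.RepeatVector(TIMESTEPS)(z)
    z = layers.LSTM(32, return_sequences=True)(z)   # decoder
    out = layers.TimeDistributed(layers.Dense(N_FEATURES))(z)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def residual_features(model, windows):
    """Summarise the difference signal |x - x_hat| per window."""
    recon = model.predict(windows, verbose=0)
    err = np.abs(windows - recon)
    return np.column_stack([err.mean(axis=(1, 2)), err.max(axis=(1, 2))])

# Illustrative data shapes only.
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, TIMESTEPS, N_FEATURES))   # training: normal samples
mixed = rng.normal(size=(200, TIMESTEPS, N_FEATURES))    # evaluation windows
labels = rng.integers(0, 2, size=200)                    # anomaly labels from history

ae = build_lstm_autoencoder()
ae.fit(normal, normal, epochs=5, batch_size=32, verbose=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(residual_features(ae, mixed), labels)
```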

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 175
24379 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data repositories belonging to different governments. This means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use. As a result, the number of applications based on open data is increasing. However, each government has its own procedures for publishing its data, which causes a variety of data set formats, because there are no international standards specifying the formats of open data sets. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. There are software tools developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data on environmental indicators in real time. In the same way, other governments (such as Andalucía or Bilbao) have published open data sets related to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. Thus, the tool presented in this work has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries, such as shiny, for building a graphical interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain, so that they can build their own applications.

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 296
24378 The Study of Wetting Properties of Silica-Poly (Acrylic Acid) Thin Film Coatings

Authors: Sevil Kaynar Turkoglu, Jinde Zhang, Jo Ann Ratto, Hanna Dodiuk, Samuel Kenig, Joey Mead

Abstract:

Superhydrophilic, crack-free thin film coatings based on silica nanoparticles were fabricated by a dip-coating method. Both thermodynamic and dynamic effects on the wetting properties of the thin films were investigated by modifying the coating formulation, i.e., changing the particle-to-binder ratio and the weight % of silica in solution. The formulated coatings were characterized by a number of analyses. Water contact angle (WCA) measurements were conducted for all coatings to characterize the surface wetting properties. Scanning electron microscope (SEM) images were taken to examine the morphology of the coating surface. Atomic force microscopy (AFM) analysis was done to study the surface topography. The presence of hydrophilic functional groups and nano-scale roughness were found to be responsible for the superhydrophilic behavior of the films. In addition, surface chemistry, compared to surface roughness, was found to be a primary factor affecting the wetting properties of the thin film coatings.

Keywords: poly (acrylic acid), silica nanoparticles, superhydrophilic coatings, surface wetting

Procedia PDF Downloads 120
24377 Enhanced Properties of Plasma-Induced Two-Dimensional Ga₂O₃/GaS Heterostructures on Liquid Alloy Substrate

Authors: S. Zhuiykov, M. Karbalaei Akbari

Abstract:

Ultra-low-level incorporation of trace impurities and dopants into two-dimensional (2D) semiconductors is a challenging step towards the development of functional electronic instruments based on 2D materials. Herein, the incorporation of sulphur atoms into the 2D Ga₂O₃ surface oxide film of the eutectic gallium-indium alloy (EGaIn) is achieved through plasma-enhanced metal-catalyst dissociation of H₂S gas on the EGaIn substrate. This process led to the growth of crystalline GaS nanodomains inside the amorphous 2D Ga₂O₃ sublayer films. Consequently, a 2D lateral heterophase developed between the amorphous Ga₂O₃ and the crystalline GaS nanodomains. The materials characterization revealed an alteration of the photoluminescence (PL) characteristics and a change in the valence band maximum (VBM) of the functionalized 2D films. Comprehensive studies by conductive atomic force microscopy (c-AFM) showed a considerable enhancement of the conductivity of the 2D Ga₂O₃/GaS material (a 300-fold improvement) compared with that of the 2D Ga₂O₃ film. This technique has great potential for the fabrication of 2D metal oxide devices with tuneable electronic characteristics, such as nanojunction memristors and transistors.

Keywords: 2D semiconductors, Ga₂O₃, GaS, plasma-induced functionalization

Procedia PDF Downloads 78
24376 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 58
24375 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce the data size in order to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring the prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
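
For orientation, a simplified sequential sketch of the FCNN condensing rule that the method parallelises: seed one representative per class, then repeatedly add, for each selected prototype, the nearest misclassified point in its Voronoi cell. This is not the paper's MapReduce version, and the toy data are illustrative:

```python
import numpy as np

def fcnn_select(X, y):
    """Simplified FCNN condensing on a single node."""
    selected = []
    # Seed: the point closest to each class centroid.
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        selected.append(idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))])
    selected = list(dict.fromkeys(selected))

    while True:
        S = np.array(selected)
        # Nearest prototype for every training point (1-NN over the selected set).
        d = np.linalg.norm(X[:, None, :] - X[S][None, :, :], axis=2)
        nearest = S[np.argmin(d, axis=1)]
        misclassified = np.where(y != y[nearest])[0]
        if misclassified.size == 0:
            break
        additions = set()
        for p in S:
            cell = misclassified[nearest[misclassified] == p]
            if cell.size:
                additions.add(int(cell[np.argmin(np.linalg.norm(X[cell] - X[p], axis=1))]))
        selected.extend(additions - set(selected))
    return np.array(selected)

# Toy usage with two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
subset = fcnn_select(X, y)
print(f"kept {subset.size} of {len(X)} instances")
```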

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 238
24374 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial sector, with an estimated 708.5 billion global non-cash transactions in recent years. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and data sharing of anonymised transactional data are still operated in silos and centralised between the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and also to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed on a decentralised blockchain contract with a market layer that manages transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to the user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was developed into a proof of concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation to key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 60
24373 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. Thus, in these cases, a more advanced representation of these data structures is essential. In this paper, we present the design of a compact version of the ternary tree data structure and demonstrate the results of an experimental evaluation using the static dictionary problem. We compare these results with the results for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree and, in certain configurations, shows performance comparable to that of regular ternary trees. We have evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
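
For context, a sketch of the baseline pointer-based ternary search tree used for the static dictionary problem; the paper's succinct design would encode the same structure compactly (e.g., replacing child pointers with bit-vector encodings), which is not reproduced here:

```python
class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "is_word")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.is_word = ch, None, None, None, False

class TernarySearchTree:
    """Pointer-based ternary search tree: the memory-hungry baseline."""
    def __init__(self):
        self.root = None

    def insert(self, word):
        self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        ch = word[i]
        if node is None:
            node = TSTNode(ch)
        if ch < node.ch:
            node.lo = self._insert(node.lo, word, i)
        elif ch > node.ch:
            node.hi = self._insert(node.hi, word, i)
        elif i + 1 < len(word):
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.is_word = True
        return node

    def contains(self, word):
        node, i = self.root, 0
        while node is not None:
            ch = word[i]
            if ch < node.ch:
                node = node.lo
            elif ch > node.ch:
                node = node.hi
            elif i + 1 < len(word):
                node, i = node.eq, i + 1
            else:
                return node.is_word
        return False

tst = TernarySearchTree()
for w in ["data", "date", "tree", "trie"]:
    tst.insert(w)
print(tst.contains("tree"), tst.contains("tram"))   # True False
```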

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 148
24372 Development of a Robust Procedure for Generating Structural Models of Calcium Aluminosilicate Glass Surfaces

Authors: S. Perera, T. R. Walsh, M. Solvang

Abstract:

The structure-property relationships of calcium aluminosilicate (CAS) glass surfaces are of scientific and technological interest regarding dissolution phenomena. Molecular dynamics (MD) simulations can provide atomic-scale insights into the structure and properties of the CAS interfaces in vacuo as the first step to conducting computational dissolution studies on CAS surfaces. However, one limitation to date is that although the bulk properties of CAS glasses have been well studied by MD simulation, corresponding efforts on CAS surface properties are relatively few in number (both theoretical and experimental). Here, a systematic computational protocol to create CAS surfaces in vacuo is developed by evaluating the sensitivity of the resultant surface structure with respect to different factors. Factors such as the relative thickness of the surface layer, the relative thickness of the bulk region, the cooling rate, and the annealing schedule (time and temperature) are explored. Structural features such as ring size distribution, defect concentrations (five-coordinated aluminium (AlV), non-bridging oxygen (NBO), and tri-cluster oxygen (TBO)), and linkage distribution are identified as significant features in dissolution studies.

Keywords: MD simulation, CAS glasses, surface structure, structure-property, CAS interface

Procedia PDF Downloads 79
24371 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
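
The abstract does not name its specific models; as one common way to apply the quantile regression mentioned in the title, the sketch below predicts an upper quantile of demand and provisions at that level, so SLA violations stay rare while avoiding blanket over-provisioning. The synthetic demand series, lag features, and quantile choice are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic utilisation series with a daily cycle plus noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(2000)
demand = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

# Lagged values as features, next-step demand as target.
lags = 3
X = np.column_stack([demand[i:len(demand) - lags + i] for i in range(lags)])
y = demand[lags:]
split = int(0.8 * len(y))

# Quantile regression: predict the 95th percentile of the next-step demand.
model = GradientBoostingRegressor(loss="quantile", alpha=0.95, n_estimators=200)
model.fit(X[:split], y[:split])
pred95 = model.predict(X[split:])

# Provision at the predicted 95th percentile: violations occur when actual
# demand exceeds the provisioned capacity; headroom is the energy cost proxy.
violations = np.mean(y[split:] > pred95)
headroom = np.mean(pred95 - y[split:])
print(f"SLA violation rate: {violations:.1%}, mean spare capacity: {headroom:.1f}")
```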

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 93
24370 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text that is impressed onto paper and provides evidence of its authenticity. Digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
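
The abstract does not give the exact embedding rule; one common hybrid DWT-DCT scheme, sketched below under stated assumptions, additively modulates low/mid DCT coefficients of the wavelet LL subband with the watermark bits. The embedding strength alpha, coefficient positions, and random "image" are illustrative:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_dwt_dct(cover, wm_bits, alpha=5.0):
    """Hybrid DWT-DCT embedding: modulate DCT coefficients of the LL subband."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    C = dctn(LL, norm="ortho")
    flat = C.flatten()
    signs = 2 * np.asarray(wm_bits, float) - 1          # {0,1} -> {-1,+1}
    flat[1:1 + len(signs)] += alpha * signs              # skip the DC term
    LL_marked = idctn(flat.reshape(C.shape), norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

# Toy usage with a random 64x64 "image" and a 16-bit watermark.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64))
bits = rng.integers(0, 2, size=16)
marked = embed_dwt_dct(cover, bits)
print("max pixel change:", np.abs(marked - cover).max())
```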

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 410
24369 Aqueous Extract of Argemone Mexicana Roots for Effective Corrosion Inhibition of Mild Steel in HCl Environment

Authors: Gopal Ji, Priyanka Dwivedi, Shanthi Sundaram, Rajiv Prakash

Abstract:

The inhibition effect of aqueous Argemone mexicana root extract (AMRE) on mild steel corrosion in 1 M HCl has been studied by weight loss, Tafel polarization curves, electrochemical impedance spectroscopy (EIS), scanning electron microscopy (SEM), and atomic force microscopy (AFM) techniques. The results indicate that the inhibition ability of AMRE increases with increasing extract concentration. A maximum corrosion inhibition of 94% is achieved at an extract concentration of 400 mg L⁻¹. Polarization curves and impedance spectra reveal that both cathodic and anodic reactions are suppressed due to passive layer formation at the metal-acid interface. This is also confirmed by SEM micrographs and FTIR studies. Furthermore, the effects of acid concentration (1-5 M), immersion time (120 hours), and temperature (30-60 °C) on the inhibition potential of AMRE have been investigated by the weight loss method and electrochemical techniques. An adsorption mechanism is also proposed on the basis of the weight loss results, which show good agreement with the Langmuir isotherm.

Keywords: mild steel, polarization, SEM, acid corrosion, EIS, green inhibition

Procedia PDF Downloads 474
24368 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output either in batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and model output vending into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort into building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture created using AWS services that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows down the scope of scientists’ work to model building and refinement, while specialized data engineers take over the deployment, pipeline orchestration, data quality, data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 47
24367 The Effects of External Daminozide (ALAR) Application on Nutrient Contents in Memecik Olive Trees

Authors: Sahriye Sonmez, Salih Ulger, Mustafa Kaplan, Mustafa Karhan

Abstract:

The objective of this study was to investigate the effects of external ALAR application on nutrient contents in leaves and nodes in ‘on (bearing)’ and ‘off (non-bearing)’ years in Memecik olive trees. For this purpose, 2000 mg L⁻¹ ALAR was externally applied to Memecik olive trees, and leaf and node samples were taken during the induction, initiation, and differentiation periods in ‘on’ and ‘off’ years. The nutrient contents (N, P, K, Ca, Mg, Fe, Mn, Zn, and Cu) in the leaf and node samples were determined. The K, Ca, Mg, Fe, Mn, Zn, and Cu contents were determined by atomic absorption spectrophotometry, nitrogen by the Kjeldahl procedure, and P by a spectrophotometric method. The results showed that the N, Ca, Mg, Fe, Mn, Zn, and Cu contents in the ‘on’ year were higher than in the ‘off’ year, while the K content in the ‘on’ year was lower than in the ‘off’ year; the P content did not differ. The N, Ca, Mg, Fe, and Mn contents were higher in the leaf samples than in the node samples, except for K, while the P, Zn, and Cu contents did not differ. The N, K, Ca, Fe, Mn, Zn, and Cu contents were lowest during the initiation period, while the P content was highest in this period. The Mg content did not differ among the periods.

Keywords: bearing, differentiation period, induction period, initiation period, non bearing, olive

Procedia PDF Downloads 435
24366 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing the image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine the effectiveness of these data representations in reducing the cost of the correct correspondence relative to other possible matches.
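
A small synthetic illustration of how the choice of representation enters the matching cost: the same sum-of-absolute-differences (SAD) cost is evaluated over a disparity range on an RGB and on a grayscale version of the data. The images, disparity, and patch parameters are illustrative, not the paper's experiments:

```python
import numpy as np

def sad_cost(patch_a, patch_b):
    """Sum of absolute differences: a common local matching cost."""
    return np.abs(patch_a.astype(float) - patch_b.astype(float)).sum()

def to_gray(rgb):
    # Luma conversion; one of the candidate data representations.
    return rgb @ np.array([0.299, 0.587, 0.114])

rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(64, 64, 3))
right = np.roll(left, shift=-4, axis=1)          # synthetic disparity of 4 pixels

y, x, w = 32, 30, 5                              # reference patch in the left image

# Scan candidate disparities and compare the cost curves for two representations.
for name, rep_l, rep_r in [("rgb", left, right), ("gray", to_gray(left), to_gray(right))]:
    costs = [sad_cost(rep_l[y:y + w, x:x + w], rep_r[y:y + w, x - d:x - d + w])
             for d in range(8)]
    print(name, "best disparity:", int(np.argmin(costs)))
```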

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 356
24365 Evaluation of Surface Water and Groundwater Quality in Parts of Umunneochi Southeast, Nigeria

Authors: Joshua Chima Chizoba, Wisdom Izuchukwu Uzoma, Elizabeth Ifeyiwa Okoyeh

Abstract:

Water cannot be optimally used and sustained unless its quality is periodically assessed. The study area, Umunneochi and environs, is located in the south-eastern part of Nigeria. It stretches geographically from latitudes 5°50′N to 6°00′N and longitudes 7°20′E to 7°30′E. The major geologic formations in the area include the Asu River Group, Nkporo Shale, and Ajali Sandstone. The aim of this study is to evaluate the hydrochemical characteristics of surface and groundwater sources in parts of Umunneochi and environs in order to establish the potability of the water sources for drinking, domestic, and irrigation purposes. A total of 15 samples were collected randomly from streams, springs, and wells. The samples were analyzed for physicochemical parameters and heavy metals using handheld digital kits, a photometer, titration methods, and atomic absorption spectrophotometry (AAS), following acceptable standards. The analytical data obtained were interpreted, and the results were compared with World Health Organization (WHO) standards. The pH, SO₄²⁻, and Cl⁻ values range from 5.81 to 6.07, 41.93 to 142.95 mg/l, and 20.00 to 111 mg/l, respectively, while Pb and Zn showed relatively low mean concentrations of 0.14 mg/l and 0.40 mg/l; all of these are within WHO permissible limits except pH. About 27% of the samples are moderately hard, which is attributed to the mining activities in the area. The abundance of cations and anions in the area follows the order K⁺ > Na⁺ > Mg²⁺ > Ca²⁺ and SO₄²⁻ > Cl⁻ > HCO₃⁻ > NO₃⁻, respectively. Chloride, bicarbonate, and nitrate are all within the permissible limits. About 13.33% of the total samples contain sulphate above the standard permissible limit. The calculated Water Quality Index (WQI) values are less than 50, indicating excellent water. The predominant water types in the study area are the Na-Cl type and the mixed Ca-Mg-Cl type, based on the sample plots on the Piper diagram. The Sodium Adsorption Ratio (SAR) calculations showed excellent water for consumption and good water for irrigation purposes, with low sodium and alkalinity ratios, respectively. Government water projects are recommended in the area for sustainable domestic and agricultural water supply to ease the stress of water supply problems.

Keywords: groundwater, hydrochemical, physichochemical, water-type, sodium adsorption ratio

Procedia PDF Downloads 117
24364 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of a high volume and a broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purpose of developing their business. Therefore, the recently decentralized data management environment relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 411
24363 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore’s law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques combined with HTML tables to extract and represent critical timing/noise data. When we apply this data-mining tool in real applications, the running speed is important. The software employs table look-up techniques in the programming to achieve a reasonable running speed, based on performance testing results. We added several advanced features for the application in one industrial chip design.
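
A minimal sketch of the HTML-table extraction and table look-up idea; the report layout, column names, and threshold are hypothetical, since the actual format depends on the timing/noise analysis tool used:

```python
from io import StringIO
import pandas as pd

# Hypothetical timing report exported as an HTML table.
html = """
<table>
  <tr><th>path</th><th>startpoint</th><th>endpoint</th><th>slack_ns</th></tr>
  <tr><td>1</td><td>u_core/reg_a</td><td>u_core/reg_b</td><td>-0.12</td></tr>
  <tr><td>2</td><td>u_io/reg_c</td><td>u_core/reg_d</td><td>0.35</td></tr>
  <tr><td>3</td><td>u_mem/reg_e</td><td>u_mem/reg_f</td><td>-0.02</td></tr>
</table>
"""

report = pd.read_html(StringIO(html))[0]

# "Mine" the report: keep only violating (negative-slack) paths, worst first.
critical = report[report["slack_ns"] < 0].sort_values("slack_ns")

# Table look-up by endpoint gives fast repeated access across design iterations.
by_endpoint = {row.endpoint: row.slack_ns for row in critical.itertuples()}
print(critical)
print(by_endpoint.get("u_core/reg_b"))
```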

Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise

Procedia PDF Downloads 237
24362 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use', and it is a multi-dimensional concept. Emergency Departments (EDs) require information to treat patients and, on the other hand, are the primary source of information regarding accidents, injuries, emergencies, etc. The ED is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL) by introducing an e-health solution. The study consisted of three components. A research study was conducted to assess the quality of data in relation to five selected dimensions of data quality, namely accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). A post-intervention assessment confirmed that all five dimensions of data quality had improved. The most significant improvements were noticed in the accuracy and timeliness dimensions.

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 255
24361 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R for the processing of these data. We analyze the involvement and relationships of the variables describing the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver it undertook (overtaking or falling back).
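
The paper performs its processing in R; the sketch below shows the same style of descriptive grouping in Python with pandas. The table and column names are hypothetical placeholders, not the actual HighD field names:

```python
import pandas as pd

# Hypothetical pre-processed table of lane-changing events.
events = pd.DataFrame({
    "vehicle_class": ["car", "car", "truck", "car", "truck"],
    "maneuver":      ["overtake", "fall_back", "overtake", "overtake", "fall_back"],
    "time_gap_s":    [1.2, 2.5, 1.8, 0.9, 3.1],
    "speed_diff_ms": [4.5, -2.1, 3.2, 5.8, -1.4],
    "dist_front_m":  [25.0, 48.0, 35.0, 18.0, 60.0],
})

# Descriptive statistics per vehicle class and maneuver, as in the analysis above.
summary = (events
           .groupby(["vehicle_class", "maneuver"])[["time_gap_s", "speed_diff_ms", "dist_front_m"]]
           .agg(["mean", "std", "count"]))
print(summary)
```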

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 242
24360 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of Varian golden beam data with the measured data of 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose rate tables for open and wedged fields into the treatment planning system, enabling calculation of the MUs and dose distribution. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with the measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian’s algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The online-available Varian golden beam data were compared with the measured data to evaluate whether the golden beam data are accurate enough to be used for the commissioning of the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was at most about 2%. In the PDDs, the deviation increases at deeper depths compared with shallower depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between the measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
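
A minimal sketch of the kind of point-by-point comparison described above, computing the local percent deviation between a measured and a golden PDD curve against a 2% tolerance; the depth and dose values are made up and do not reproduce the paper's measurements:

```python
import numpy as np

# Illustrative PDD curves (percent dose vs depth); values are invented.
depth_cm = np.array([1.5, 5.0, 10.0, 20.0, 30.0])
pdd_measured = np.array([100.0, 86.5, 67.2, 38.9, 22.4])
pdd_golden = np.array([100.0, 86.1, 66.5, 38.2, 21.9])

# Local percent deviation at each depth, judged against the 2% tolerance.
deviation_pct = 100.0 * (pdd_measured - pdd_golden) / pdd_golden
for d, dev in zip(depth_cm, deviation_pct):
    print(f"depth {d:>4.1f} cm: deviation {dev:+.2f}%")
print("within 2% tolerance:", bool(np.all(np.abs(deviation_pct) <= 2.0)))
```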

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 470
24359 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, which are enhanced with gradient data of different degrees of accuracy, are constructed. Then these gradient-enhanced Kriging surrogate models are strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile incorporating gradient data of varying degrees of accuracy.
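
To illustrate the variable-fidelity idea only, the sketch below fits a Gaussian process (ordinary Kriging) to many cheap low-fidelity samples and a second GP to the residuals at a few high-fidelity samples, i.e., a simplified additive-correction scheme. It is not the paper's recursive gradient-enhanced CoKriging, and the test functions and kernels are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_high(x):   # expensive "truth" (illustrative)
    return np.sin(8 * x) + x

def f_low(x):    # cheap, biased approximation (illustrative)
    return 0.8 * np.sin(8 * x) + 1.2 * x - 0.3

x_lo = np.linspace(0, 1, 40).reshape(-1, 1)   # many cheap samples
x_hi = np.linspace(0, 1, 6).reshape(-1, 1)    # few expensive samples

# GP on low-fidelity data, then a GP correction on high-fidelity residuals.
gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(x_lo, f_low(x_lo).ravel())
residual = f_high(x_hi).ravel() - gp_lo.predict(x_hi)
gp_corr = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(x_hi, residual)

x_test = np.linspace(0, 1, 200).reshape(-1, 1)
pred = gp_lo.predict(x_test) + gp_corr.predict(x_test)
rmse = np.sqrt(np.mean((pred - f_high(x_test).ravel()) ** 2))
print(f"multi-fidelity RMSE: {rmse:.3f}")
```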

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 539
24358 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection method using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode 'scan' and is applicable to real-time applications.
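
The abstract does not list its specific transforms; a sketch of one plausible synthetic-to-real augmentation pass (perspective distortion, blur, illumination change) is given below. The synthetic barcode, jitter ranges, and parameter values are illustrative assumptions:

```python
import numpy as np
import cv2

def augment(img, rng):
    """One random augmentation roughly mimicking varied shooting conditions."""
    h, w = img.shape[:2]
    # Random perspective: jitter the four corners by up to 10% of the size.
    jitter = lambda: rng.uniform(-0.1, 0.1)
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[w * jitter(), h * jitter()],
                      [w * (1 + jitter()), h * jitter()],
                      [w * (1 + jitter()), h * (1 + jitter())],
                      [w * jitter(), h * (1 + jitter())]])
    M = cv2.getPerspectiveTransform(src, dst)
    out = cv2.warpPerspective(img, M, (w, h), borderValue=255)
    # Blur and brightness/contrast change.
    out = cv2.GaussianBlur(out, (5, 5), sigmaX=rng.uniform(0.5, 2.0))
    out = cv2.convertScaleAbs(out, alpha=rng.uniform(0.7, 1.3), beta=rng.uniform(-30, 30))
    return out

rng = np.random.default_rng(0)
barcode = np.full((120, 320), 255, np.uint8)
barcode[:, ::6] = 0                     # crude synthetic barcode stripes
samples = [augment(barcode, rng) for _ in range(8)]
```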

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 142
24357 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is the Passive Optical Network (PON). This paper aims to show the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that, as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 391
24356 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and a classification for taking manufacturing digital data from various sources and determining the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision-making on which datasets should be processed at the ‘edge’ and what should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
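
A toy sketch of a fast-and-frugal style routing rule: a few ordered cues over dataset attributes, where the first decisive cue determines edge or cloud processing. The attribute names, thresholds, and example profiles are illustrative assumptions, not the paper's actual classification framework:

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    """Attributes of an IIoT dataset (names and units are assumed)."""
    sample_rate_hz: float     # how fast the data is produced
    latency_budget_ms: float  # how quickly a decision is needed
    size_mb_per_hour: float   # transmission cost proxy
    needs_history: bool       # requires long-term context (e.g., maintenance trends)

def route(profile: DatasetProfile) -> str:
    """Fast-and-frugal heuristic: ordered cues, first decisive cue wins."""
    if profile.latency_budget_ms < 100:   # tight control loop -> decide locally
        return "edge"
    if profile.needs_history:             # trend analysis -> central storage
        return "cloud"
    if profile.size_mb_per_hour > 500:    # too costly to ship raw -> reduce at edge
        return "edge"
    return "cloud"

print(route(DatasetProfile(10_000, 10, 900, False)))   # vibration signal -> edge
print(route(DatasetProfile(1, 5_000, 2, True)))        # temperature log  -> cloud
```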

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 160