Search results for: data discovery
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25294

24634 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

On the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context, i.e., the Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid, together with the theoretical perspective of contingency theory, whose central point is that the context or a specific set of conditions determines the way of behavior and the choice of methods of action, help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices for using and sharing data (including safeguards for sensitive data) by the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts to undertake interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 90
24633 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think about extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
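
To make the weighting schemes concrete, here is a minimal numpy sketch of the two cluster-level point estimators; the simulated data and the simple cluster-mean estimator are illustrative assumptions, not the authors' mixed-model estimators.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate hierarchical clustered data: subjects (clusters) with
# varying numbers of within-subject events (cluster sizes).
n_subjects = 12
sizes = rng.integers(2, 15, size=n_subjects)            # varying cluster sizes
subject_means = rng.normal(loc=1.0, scale=0.5, size=n_subjects)
clusters = [rng.normal(mu, 1.0, size=k) for mu, k in zip(subject_means, sizes)]

cluster_means = np.array([c.mean() for c in clusters])

# Scheme 1: clusters equally weighted.
est_equal = cluster_means.mean()

# Scheme 2: clusters weighted proportionally to cluster size.
est_size = np.average(cluster_means, weights=sizes)

print(f"equal-weight estimate:  {est_equal:.3f}")
print(f"size-weighted estimate: {est_size:.3f}")
```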

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 133
24632 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted and continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems, such as TDRSS of the USA and EDRSS of Europe, are developed and built for various high/low data rate information exchanges. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system for Turkey is defined for exchanging low data rate information (i.e., TTC) with Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, a justification of this attempt is given, demonstrating duration enhancements in the link. A discussion of the preference for RF communication over laser communication is also given. Then, the preferred GEO communication satellites, including TURKSAT4A, which already belongs to Turkey, are presented, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.
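
To illustrate the kind of calculation behind the link budget, a minimal Python sketch of a free-space path loss estimate is given below; the slant range, frequency, and gain figures are assumed values for illustration, not the paper's STK results.

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

def received_power_dbw(eirp_dbw, gain_rx_dbi, distance_km, freq_ghz, misc_loss_db=2.0):
    # Simple link budget: EIRP + receive gain - path loss - miscellaneous losses.
    return eirp_dbw + gain_rx_dbi - fspl_db(distance_km, freq_ghz) - misc_loss_db

# Illustrative inter-satellite leg: a LEO satellite relaying through a GEO
# satellite; the worst-case slant range is an assumed round figure.
slant_range_km = 40000.0
print(f"FSPL at Ku-band (14 GHz): {fspl_db(slant_range_km, 14.0):.1f} dB")
print(f"Rx power: {received_power_dbw(50.0, 35.0, slant_range_km, 14.0):.1f} dBW")
```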

Keywords: communication, GEO satellite, data relay system, coverage

Procedia PDF Downloads 436
24631 Morphology, Chromosome Numbers and Molecular Evidences of Three New Species of Begonia Section Baryandra (Begoniaceae) from Panay Island, Philippines

Authors: Rosario Rivera Rubite, Ching-I Peng, Che-Wei Lin, Mark Hughes, Yoshiko Kono, Kuo-Fang Chung

Abstract:

The flora of Panay Island is under-collected compared with that of the other islands of the Philippines. In a joint expedition to the island, botanists from Taiwan and the Philippines found three unknown Begonia species and compared them with potentially allied species. The three species are clearly assignable to Begonia section Baryandra, which is largely endemic to the Philippines. Studies of the literature, herbarium specimens, and living plants support the recognition of the three new species: Begonia culasiensis, Begonia merrilliana, and Begonia sykakiengii. Somatic chromosome numbers at metaphase were determined to be 2n=30 for B. culasiensis and 2n=28 for both B. merrilliana and B. sykakiengii, which are congruent with those of most species in sect. Baryandra. Molecular phylogenetic evidence is consistent with B. culasiensis being a relict from the late Miocene, and with B. merrilliana and B. sykakiengii being younger species of Pleistocene origin. The continuing discovery of endemic Philippine species means the remaining fragments of both primary and secondary native vegetation in the archipelago are of increasing value in terms of natural capital. A secure future for the species could be realized through ex-situ conservation collections and by raising awareness with community groups.

Keywords: conservation, endemic, herbarium, limestone, phylogenetics, taxonomy

Procedia PDF Downloads 214
24630 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product

Authors: Tanawat Hongthai, Dusit Thanapatay

Abstract:

This paper presents the development of encrypted near field communication (NFC) data exchange format transmission in an NFC passive tag to assess the feasibility of implementing genuine product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC-Forum Type 2 tag can be configured to be compatible with the NFC data exchange format (NDEF) and can have its data automatically and partially updated when an NFC field is present.
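
As a sketch of the encryption step, the following Python fragment encrypts a product identifier before it would be written into the tag's NDEF payload; the Fernet scheme, key handling, and payload format are assumptions for illustration, not the authors' exact design.

```python
# Encrypt a product identifier for storage as an NDEF payload on a Type 2 tag.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, provisioned by the brand owner
cipher = Fernet(key)

product_id = b"SKU-4711|LOT-2023-09"  # hypothetical identifier
token = cipher.encrypt(product_id)    # ciphertext stored as the NDEF payload

# Verifier side: decrypt and compare against the product database.
assert cipher.decrypt(token) == product_id
print(len(token), "bytes written to tag payload")
```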

Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC

Procedia PDF Downloads 276
24629 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark certainly influences image quality. In this paper, vector quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark even though the embedded image differs only slightly from the original image. Meanwhile, VQ imposes a heavy computational burden, so we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level, or color images; text can also be regarded as a watermark to embed. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping, and altering to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
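
A minimal Python sketch of the partial distortion search idea is given below; the codebook size and block dimensions are illustrative assumptions, and the paper's mean approximation step is omitted.

```python
import numpy as np

def nearest_codeword_pds(vector, codebook):
    """Nearest-codeword search with partial distortion search (PDS):
    abandon a candidate as soon as its running distortion exceeds the
    best distortion found so far."""
    best_idx, best_dist = 0, np.sum((vector - codebook[0]) ** 2)
    for i in range(1, len(codebook)):
        dist = 0.0
        for d in range(vector.shape[0]):
            dist += (vector[d] - codebook[i][d]) ** 2
            if dist >= best_dist:   # early termination: partial distortion too large
                break
        else:                       # candidate finished below the best: accept it
            best_idx, best_dist = i, dist
    return best_idx

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))    # 256 codewords, 4x4 blocks flattened
block = rng.random(16)
print("matched codeword index:", nearest_codeword_pds(block, codebook))
```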

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 359
24628 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important for reducing downtime and the costs of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model, with an F1-score of 83.60%, outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, and which scores 24.16%.
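
A minimal sketch of one per-sensor LSTM autoencoder is given below (Python/Keras); the window length, layer sizes, and training settings are assumptions, since the paper does not fix them here.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 60, 1   # e.g., one hour of 1-minute readings from one sensor

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    layers.LSTM(32),                          # encoder: compress the window
    layers.RepeatVector(timesteps),           # repeat latent state per timestep
    layers.LSTM(32, return_sequences=True),   # decoder
    layers.TimeDistributed(layers.Dense(n_features)),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on normal samples only; at inference, the residual (difference signal)
# feeds feature extraction and the random forest classifier.
x_normal = np.random.rand(256, timesteps, n_features).astype("float32")
autoencoder.fit(x_normal, x_normal, epochs=2, batch_size=32, verbose=0)
residual = np.abs(x_normal - autoencoder.predict(x_normal, verbose=0))
```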

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 188
24627 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, which causes a variety of data set formats, because there are no international standards specifying the formats of the data sets in open data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data bases on environmental indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published open data sets relative to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can be initiated in a computationally better way. The tool presented in this work thus has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries, such as shiny, for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set about environmental data in Spain to any developer so that they can build their own applications.
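
A minimal Python/pandas sketch of the per-source adapter idea is given below; the column names and the NO2 field are hypothetical, and the production tool builds the equivalent procedures on Hadoop.

```python
import pandas as pd

def adapt_madrid(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical Madrid portal columns: fecha, estacion, valor.
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["fecha"], dayfirst=True),
        "station": df["estacion"].astype(str),
        "no2_ugm3": pd.to_numeric(df["valor"], errors="coerce"),
    })

def adapt_bilbao(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical Bilbao portal columns: date, station_id, NO2.
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["date"], utc=True),
        "station": df["station_id"].astype(str),
        "no2_ugm3": pd.to_numeric(df["NO2"], errors="coerce"),
    })

ADAPTERS = {"madrid": adapt_madrid, "bilbao": adapt_bilbao}

def integrate(sources: dict) -> pd.DataFrame:
    # One adapter per government normalizes each source into a common schema.
    frames = [ADAPTERS[name](df).assign(source=name) for name, df in sources.items()]
    return pd.concat(frames, ignore_index=True).dropna(subset=["no2_ugm3"])
```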

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 311
24626 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 71
24625 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets using the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
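
For context, here is a minimal Python sketch of the classic sequential condensed nearest neighbor idea that FCNN accelerates; it is illustrative only and does not include the fast rule or the MapReduce parallelization.

```python
# Keep only the instances needed for 1-NN to classify the rest correctly.
import numpy as np

def condense(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    keep = [0]                                   # seed the condensed set
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:    # misclassified by kept set: add it
                keep.append(i)
                changed = True
    return np.array(keep)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
idx = condense(X, y)
print(f"kept {len(idx)} of {len(X)} instances")
```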

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 250
24624 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world's data is generated by the financial sector, with an estimated 708.5 billion global non-cash transactions. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and also to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform's key features is enabling individuals to participate in and manage the personal data being generated about them. The framework was developed as a proof of concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 72
24623 The Models of Character Development Bali Police to Improve Quality of Moral Members in Bali Police Headquarters

Authors: Agus Masrukhin

Abstract:

This research aims to find and analyze models of character building in the Bali Police Headquarters, with a case study of its Muslim members, for improving the quality of the morality of members. The formation of patterns of thinking, behavior, mentality, and noble character in police officers can later be used as a solution to reduce the hedonistic nature of the challenges of the era of globalization. The benefit of this study is expected to be a positive recommendation toward constructive character-building models for police officers in the Republic of Indonesia, especially the Bali Police. For the long term, the character-building models discovered here can be developed for the entire police force in Indonesia. As the type of research applied in this study, the researchers mixed qualitative research methods, based on narratives from the subjects and the concrete experience of field research, with quantitative research methods covering 92 respondents from the Bali regional police. This research used descriptive analysis and SWOT analysis, the results of which were then presented in a focus group discussion (FGD). The results of this research indicate that the police leadership modeling variable and the police office culture variable have a significant influence on the implementation of spiritual development.

Keywords: positive constructive, hedonistic, character models, morality

Procedia PDF Downloads 362
24622 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, more generally, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. Thus, in these cases, a more advanced representation of these data structures is essential. In this paper, we present the design of a compact version of the ternary tree data structure and demonstrate the results of an experimental evaluation using the static dictionary problem. We compare these results with the results for binary and regular ternary trees. The conducted evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations shows performance comparable to that of regular ternary trees. We evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
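
For reference, a minimal (non-succinct) ternary search tree in Python is sketched below as the baseline the compact design improves on; the succinct version would instead pack this pointer-based node structure into bit arrays.

```python
class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "is_word")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.is_word = ch, None, None, None, False

def insert(node, word, i=0):
    ch = word[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = insert(node.lo, word, i)
    elif ch > node.ch:
        node.hi = insert(node.hi, word, i)
    elif i + 1 < len(word):
        node.eq = insert(node.eq, word, i + 1)
    else:
        node.is_word = True
    return node

def contains(node, word, i=0):
    while node:
        ch = word[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i + 1 < len(word):
            node, i = node.eq, i + 1
        else:
            return node.is_word
    return False

root = None
for w in ["cat", "cap", "car", "dog"]:   # toy static dictionary
    root = insert(root, w)
print(contains(root, "cap"), contains(root, "cow"))  # True False
```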

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 158
24621 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
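
A minimal Python sketch of quantile-based provisioning is given below; the synthetic load signal, features, and the 95th-percentile target are assumptions for illustration, not the paper's models or SLA framework.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
hours = rng.integers(0, 24, size=2000)
load = 40 + 25 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 8, size=2000)

X = hours.reshape(-1, 1).astype(float)
# Predict the 95th percentile of usage: provisioning to this level keeps
# SLA headroom while powering down capacity above it to save energy.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, load)
mean = GradientBoostingRegressor(loss="squared_error").fit(X, load)

x_next = np.array([[14.0]])
print(f"mean forecast: {mean.predict(x_next)[0]:.1f}%")
print(f"q95 forecast (provision target): {q95.predict(x_next)[0]:.1f}%")
```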

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 103
24620 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are so widely distributed that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text that is impressed onto paper and provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on implementing watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
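
A minimal Python sketch of hybrid DWT-DCT embedding is given below; the wavelet, band choice, coefficient region, and strength alpha are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(cover: np.ndarray, bits: np.ndarray, alpha: float = 8.0) -> np.ndarray:
    # 1-level DWT, then DCT on the LL band; embed one bit per coefficient.
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    C = dctn(LL, norm="ortho")
    flat = C.ravel()
    start = flat.size // 4                      # assumed mid-frequency region
    flat[start:start + bits.size] += alpha * (2 * bits - 1)
    LL_marked = idctn(C, norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

cover = np.random.randint(0, 256, (128, 128)).astype(float)
bits = np.random.randint(0, 2, 64)
marked = embed(cover, bits)
print("max abs pixel change:", np.abs(marked - cover).max())
```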

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 418
24619 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists' bandwidth. We propose a system architecture created using AWS services that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
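
A minimal Python sketch of this stage separation is given below; the bucket name, keys, and placeholder transform are hypothetical, and a real deployment would run each stage as its own orchestrated, infrastructure-as-code job.

```python
# Each batch stage persists to S3, so scientists and data engineers can own
# separate stages behind a stable storage contract.
import boto3

s3 = boto3.client("s3")
BUCKET = "ml-pipeline-artifacts"        # hypothetical bucket

def source_data() -> bytes:
    return s3.get_object(Bucket=BUCKET, Key="raw/events.csv")["Body"].read()

def engineer_features(raw: bytes) -> bytes:
    return raw.upper()                  # placeholder transform, not a real feature step

def vend(features: bytes) -> None:
    # Output vended to a data store that downstream applications read.
    s3.put_object(Bucket=BUCKET, Key="vended/output.csv", Body=features)

if __name__ == "__main__":
    vend(engineer_features(source_data()))
```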

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 60
24618 Characterization of Triterpenoids Antimicrobial Potential in Ethyl Acetate Extracts from Aerial Parts of Deinbollia Pinnata

Authors: Rufai Yakubu and Suleiman Kabiru

Abstract:

Triterpenoids are a diverse class of secondary metabolites with potential antimicrobial properties. In this study, the crude ethyl acetate extract was obtained with an ultrasonic extraction method. A combined chromatographic separation method was used to isolate squalene (1), stigmasterol (2), stigmasta-5,22-dien-3-ol acetate (3), γ-sitosterol (4), lupeol (5), taraxasterol (6), and betulinic acid (7) from the ethyl acetate extracts. Both the ethyl acetate crude extracts and the isolated compounds were screened for antimicrobial activity and minimum inhibitory concentration (MIC). The ethyl acetate crude extracts, at concentrations of 1.5, 0.75, 0.35, and 0.168 mg/mL, indicated marginal antibacterial activity, with zones of inhibition of 17, 20, and 14 mm for Staphylococcus aureus, Escherichia coli, and Candida albicans, respectively, and minimum inhibitory concentrations ranging from 18.75 µg/mL to 150 µg/mL. Betulinic acid showed the highest activity, against E. coli and C. albicans at 15 mm and 15 mm, followed by lupeol against S. aureus, E. coli, and C. albicans at 13, 12, and 12 mm. Moreover, squalene showed no antimicrobial activity against S. aureus or C. albicans, though against E. coli it showed activity at 11 mm with an MIC of 300 µg/mL. Thus, the abundant triterpenoids in Deinbollia pinnata make it a promising focus for antimicrobial drug discovery.

Keywords: triterpenoid, antimicrobial potentials, deinbollia pinnata, aerial parts

Procedia PDF Downloads 66
24617 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine the effectiveness of these data representations in reducing the cost for the correct correspondence relative to other possible matches.
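
Three of the more common local matching costs are sketched below in Python; the patch size is an assumed example, and the chosen data representation determines what the patches contain.

```python
import numpy as np

def sad(p, q):  # sum of absolute differences
    return np.abs(p - q).sum()

def ssd(p, q):  # sum of squared differences
    return ((p - q) ** 2).sum()

def ncc(p, q):  # normalized cross-correlation (higher = better match)
    p0, q0 = p - p.mean(), q - q.mean()
    return (p0 * q0).sum() / (np.linalg.norm(p0) * np.linalg.norm(q0) + 1e-12)

rng = np.random.default_rng(5)
left_patch = rng.random((9, 9))
right_patch = left_patch + rng.normal(0, 0.05, (9, 9))   # noisy correspondence
print(f"SAD={sad(left_patch, right_patch):.3f}  "
      f"SSD={ssd(left_patch, right_patch):.3f}  "
      f"NCC={ncc(left_patch, right_patch):.3f}")
```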

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 367
24616 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purposes of developing their business. Therefore, the recently decentralized data-management environment relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions for managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business-intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 426
24615 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques, combined with HTML tables, to extract and represent critical timing and noise data. When we apply this data mining tool in real applications, running speed is important. The software employs table look-up techniques in the programming to achieve reasonable running speed, based on performance testing results. We added several advanced features for the application in an industrial chip design.
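
A minimal Python sketch of the table-extraction step is given below; the report file and column names are hypothetical, since real timing and noise reports vary by tool and flow.

```python
import pandas as pd

# pandas.read_html parses every <table> in an HTML report into DataFrames.
tables = pd.read_html("timing_report.html")   # hypothetical report file
timing = tables[0]

# Keep the critical paths: worst negative slack first.
critical = (timing[timing["slack_ns"] < 0]    # hypothetical column names
            .sort_values("slack_ns")
            .head(20))
print(critical[["path", "slack_ns", "noise_mv"]])
```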

Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise

Procedia PDF Downloads 251
24614 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use', and it is a multi-dimensional concept. Emergency Departments (EDs) require information to treat patients; on the other hand, the ED is the primary source of information regarding accidents, injuries, emergencies, etc. It is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL) by introducing an e-health solution. The NHSL is the premier trauma care centre in Sri Lanka. The study consisted of three components. A research study was conducted to assess the quality of data in relation to five selected dimensions of data quality, namely accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). A post-intervention assessment confirmed that all five dimensions of data quality had improved. The most significant improvements were noticed in the accuracy and timeliness dimensions.
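
As an illustration, a minimal Python sketch scoring two of the five measured dimensions, completeness and timeliness, is given below; the field names and the 30-minute threshold are assumptions, not the study's instrument.

```python
import pandas as pd

records = pd.DataFrame({
    "arrival_time": pd.to_datetime(["2023-01-01 10:00", "2023-01-01 10:20", None]),
    "entry_time": pd.to_datetime(["2023-01-01 10:05", "2023-01-01 11:40",
                                  "2023-01-01 12:00"]),
    "diagnosis": ["fracture", None, "laceration"],
})

completeness = records.notna().mean().mean()          # share of filled fields
delay = (records["entry_time"] - records["arrival_time"]).dt.total_seconds() / 60
timeliness = (delay <= 30).mean()                     # entered within 30 minutes

print(f"completeness: {completeness:.0%}, timeliness: {timeliness:.0%}")
```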

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 270
24613 Semantics of the Word “Nas” in Verse 24 of Surah Al-Baqarah Based on Izutsu's Semantic Field Theory

Authors: Seyedeh Khadijeh Mirbazel, Masoumeh Arjmandi

Abstract:

Semantics is a linguistic approach and a scientific stream, and like all scientific streams, it is dynamic. The study of meaning is carried out over the broad semantic collections of words that form a discourse. In other words, meaning is not something that can be found in a single word; rather, the formation of meaning is a process that takes place in the discourse as a whole. One of the contemporary semantic theories is Izutsu's semantic field theory. According to this theory, the discovery of meaning depends on the function of words and takes place within the context of language. The purpose of this research is to identify the meaning of the word "Nas" in the discourse of verse 24 of Surah Al-Baqarah, which introduces "Nas" as the firewood of hell, yet which translators have rendered as "people". The present research investigated the semantic structure of the word "Nas" using the aforementioned theory through a descriptive-analytical method. In the process of investigation, by matching the semantic fields of the Quranic word "Nas", this research came to the conclusion that "Nas" refers to those persons who have forgotten God and His covenant of believing in His Oneness. For this reason, God called them "Nas" (the forgetful), the imperfect participle of /næsiwoɔn/ in the simple triliteral form of the Arabic language, which means "to forget". Therefore, the intended meaning of "Nas" in the verses containing the word is not equivalent to "people", which is a general noun.

Keywords: Nas, people, semantics, semantic field theory

Procedia PDF Downloads 187
24612 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories on highways. This paper describes and analyses the role of the different parameters and their significance. Thanks to the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R for the processing of these data. We analyze the involvement and relationships of the different variables for the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver it undertook (overtaking or falling back).
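
A minimal sketch of deriving the surrounding-vehicle variables is given below, in Python rather than the authors' R; the column names follow the published HighD layout but should be treated as assumptions here.

```python
import pandas as pd

tracks = pd.read_csv("highd_tracks.csv")   # hypothetical extract of one recording

ego = tracks[tracks["id"] == 42].sort_values("frame")
lead = tracks[tracks["id"] == ego["precedingId"].iloc[0]].sort_values("frame")

pair = ego.merge(lead, on="frame", suffixes=("_ego", "_lead"))
pair["gap_m"] = pair["x_lead"] - pair["x_ego"]                  # distance headway
pair["dv_ms"] = pair["xVelocity_lead"] - pair["xVelocity_ego"]  # speed difference
pair["thw_s"] = pair["gap_m"] / pair["xVelocity_ego"].abs()     # time gap

# A lane change shows up as a jump in the ego vehicle's lane id.
lane_changes = ego["laneId"].diff().ne(0) & ego["laneId"].diff().notna()
print("lane-change frames:", ego.loc[lane_changes, "frame"].tolist())
```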

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 251
24611 High-Throughput Automated Microfluidic Sample Preparation for Accurate Microbial Genomics

Authors: Pouya Karimi, Ramin Gasemi Shayan, Parsa Sheykhzade

Abstract:

Low-cost shotgun DNA sequencing is transforming the microbial sciences. Sequencing instruments are so effective that sample preparation is now the key limiting factor. Here, we present a microfluidic sample preparation platform that integrates the key steps in cells-to-sequence library preparation for up to 96 samples and reduces DNA input requirements 100-fold while maintaining or improving data quality. The general-purpose microarchitecture we demonstrate supports workflows with arbitrary numbers of reaction and clean-up or capture steps. By reducing the sample quantity requirements, we enabled low-input (∼10,000 cells) whole-genome shotgun (WGS) sequencing of Mycobacterium tuberculosis and soil microcolonies with superior results. We also used the enhanced throughput to sequence ∼400 clinical Pseudomonas aeruginosa libraries and demonstrate excellent single-nucleotide polymorphism detection performance that explained phenotypically observed antibiotic resistance. Fully integrated lab-on-chip sample preparation overcomes technical barriers to enable broader deployment of genomics across many basic research and translational applications.

Keywords: clinical microbiology, DNA, microbiology, microbial genomics

Procedia PDF Downloads 120
24610 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth doses (PDDs) and in-plane and cross-plane profiles of the Varian Golden Beam Data to the measured data of 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with the measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The online available Varian Golden Beam Data were compared with the measured data to evaluate the accuracy of the golden beam data for the commissioning of the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was within 2%. In PDDs, the deviation increases more at deeper depths than at shallower depths. Similarly, profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
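
A minimal Python sketch of the percent-deviation check is given below; the stand-in exponential PDD curve and the 2% criterion are illustrative assumptions, and clinical commissioning follows the vendor's full procedure.

```python
import numpy as np

depth = np.linspace(0, 300, 61)                        # mm
golden = 100 * np.exp(-depth / 180)                    # stand-in golden PDD curve
measured = golden * (1 + np.random.default_rng(9).normal(0, 0.005, depth.size))

pct_dev = 100 * (measured - golden) / golden
print(f"max |deviation|: {np.abs(pct_dev).max():.2f}%")
print("within 2% tolerance:", bool(np.all(np.abs(pct_dev) < 2.0)))
```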

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 485
24609 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, which are enhanced with gradient data of different degrees of accuracy, are constructed. Then these gradient-enhanced Kriging surrogate models are strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile incorporating gradient data of varying degrees of accuracy.
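
A minimal Python sketch of a recursive two-fidelity scheme in the spirit of CoKriging is given below; the Forrester-style test functions are standard illustrations, the low-fidelity scaling factor is fixed at one, and the gradient enhancement central to the paper is omitted.

```python
# Fit a GP to cheap low-fidelity data, then a second GP to the discrepancy
# observed at the few expensive high-fidelity points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_hi(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # expensive truth
def f_lo(x): return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5      # cheap approximation

x_lo = np.linspace(0, 1, 11).reshape(-1, 1)     # many cheap samples
x_hi = np.array([[0.0], [0.4], [0.6], [1.0]])   # few expensive samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_lo, f_lo(x_lo))
delta = f_hi(x_hi) - gp_lo.predict(x_hi).reshape(-1, 1)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_hi, delta)

x = np.linspace(0, 1, 5).reshape(-1, 1)
pred = gp_lo.predict(x).reshape(-1, 1) + gp_delta.predict(x).reshape(-1, 1)
print(np.c_[x, pred, f_hi(x)])   # input, multi-fidelity prediction, truth
```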

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 553
24608 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes to generate a large variety of data that is close to actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode “scan” and is applicable to real-time applications.
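
A minimal Python/Pillow sketch of the image-level augmentation idea is given below; the specific transforms and parameter ranges are assumptions, not the paper's pipeline.

```python
from PIL import Image, ImageEnhance, ImageFilter
import random

def augment(barcode_img: Image.Image) -> Image.Image:
    # Randomized rotation, brightness, contrast, and blur to mimic the
    # variety of real shooting conditions.
    img = barcode_img.rotate(random.uniform(-15, 15), expand=True, fillcolor=255)
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.7, 1.3))
    if random.random() < 0.5:
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2)))
    return img

synthetic = Image.new("L", (320, 120), 255)   # stand-in synthetic barcode image
train_batch = [augment(synthetic) for _ in range(8)]
print([im.size for im in train_batch])
```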

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 162
24607 Effects of Extract from Lactuca sativa on Sleep in Pentobarbital-Induced Sleep and Caffeine-Induced Sleep Disturbance in Mice

Authors: Hae Dun Kim, Joo Hyun Jang, Geu Rim Seo, Kyung Soo Ra, Hyung Joo Suh

Abstract:

Lactuca sativa (lettuce) has been known for its medicinal property of relieving anxiety and nervousness. This study was implemented to investigate the sleep-promoting effects of a lettuce alcohol extract (LAE). Caffeine is a widely used psychoactive substance known to induce wakefulness and insomnia in its consumers. In the present study, the sedative-hypnotic activity of the LAE was studied using the method of pentobarbital-induced sleep in a mouse model. The LAE was administered to mice 30 min before the pentobarbital injection. The LAE prolonged the pentobarbital-induced sleep duration and decreased sleep latency; the effects of LAE were comparable to those induced by diazepam. Another study was performed to examine whether LAE ameliorates caffeine-induced sleep disturbance in mice. Caffeine (10 mg/kg, p.o.) delayed sleep onset and reduced the sleep duration of mice. Conversely, LAE treatment (80 or 160 mg/kg, p.o.), especially at 160 mg/kg, normalized the sleep disturbance induced by caffeine; LAE supplementation can thus counter the sleep disturbance induced by caffeine. These results suggest that LAE possesses significant sedative-hypnotic activity, which supports the popular use of lettuce for the treatment of insomnia and provides a basis for new drug discovery. Furthermore, these results demonstrate that the lettuce extract may be preferable for the treatment of insomnia.

Keywords: caffeine, Lactuca sativa, sleep duration, sleep latency

Procedia PDF Downloads 301
24606 Pharmaceutical Innovation in Jordan: KAP Analysis

Authors: Abdel Qader Al Bawab, Mohannad Odeh, Rami Amer

Abstract:

Recently, there has been increasing interest in innovative business development. Nevertheless, in the pharmacy practice field, there seems to be a gap in perceptions, attitudes, and knowledge about innovation between practicing pharmacists and academia. This study explores this gap and aspects of pharmaceutical innovation in Jordan, comparing pharmacists and last-year pharmacy students. A validated (r² = 0.74) and reliable (Pearson's r = 0.88) online questionnaire was designed to assess and compare knowledge, attitudes, and perceptions about pharmaceutical innovation. A total of 397 participants (215 pharmacy students and 182 pharmaceutical professionals) responded. Compared with 50% of the pharmacists, only 32.1% of the students claimed that they knew the differences between pharmaceutical innovation, discovery, invention, and entrepreneurship [χ²(2) = 14.238, p = 0.001; Cramér's V = 0.189]. Pharmacists demonstrated a higher level of trust in the innovative website design for their institution compared with students (25.3% vs. 16.3%, p < 0.001, Cramér's V = 0.327). However, 60% of the students did not know the innovative design standards for websites, while the corresponding percentage was 37% for the pharmacists (p < 0.001; Cramér's V = 0.327). The majority of the students were interested in pharmaceutical innovation (81.9%). Unfortunately, 76.3% had never studied innovation in their pharmacy curricula. Similarly, most pharmacists (76.4%) considered adopting innovation, but only 30% had a concrete plan. As for the field in which pharmacists aim to innovate in the next 5 years, new pharmaceutical services were the dominant field (34.6%). Despite a positive attitude and perception, pharmacists and pharmacy students expressed poor knowledge about innovation. Policies to enhance awareness about innovation and professional educational tools should be implemented.

Keywords: pharmacy, innovation, knowledge, attitude, practice

Procedia PDF Downloads 84
24605 Characterization of the Catalytic and Structural Roles of the Human Hexokinase 2 in Cancer Progression

Authors: Mir Hussain Nawaz, Lyudmila Nedyalkova, Haizhong Zhu, Wael M. Rabeh

Abstract:

In this study, we aim to biochemically and structurally characterize the interactions of human HK2 with the mitochondria, in addition to the role of its N-terminal domain in the catalysis and stability of the full-length enzyme. Here, we solved the crystal structure of human HK2 in complex with glucose and glucose-6-phosphate (PDB code: 2NZT), where it is a homodimer with catalytically active N- and C-terminal domains linked by a seven-turn α-helix. Unlike the inactive N-terminal domains of isozymes 1 and 3, the N-terminal domain of HK2 is not only capable of catalyzing a reaction but is also responsible for the thermodynamic stability of the full-length enzyme. Deletion of the first α-helix of the N-terminal domain, which binds to the mitochondria, altered the stability and catalytic activity of the full-length HK2. In addition, we found that the linker helix between the N- and C-terminal domains plays an important role in controlling the catalytic activity of the N-terminal domain. HK2 catalyzes a major step in the regulation of glucose metabolism in cancer, making it an ideal target for the development of new anticancer therapeutics. Characterizing the structural and molecular mechanisms of human HK2 and its role in cancer metabolism will accelerate the design and development of new cancer therapeutics that are safe and cancer specific.

Keywords: cancer metabolism, enzymology, drug discovery, protein stability

Procedia PDF Downloads 260