Search results for: data integrity and privacy
24734 Data Hiding by Vector Quantization in Color Image
Authors: Yung Gi Wu
Abstract:
With the growth of computers and networks, digital data can spread anywhere in the world quickly. Digital data can also be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, although embedding the watermark inevitably affects image quality. In this paper, Vector Quantization (VQ) is used to embed a watermark into an image to achieve data hiding. The watermarking is invisible: users will not be aware of the embedded watermark, even though the embedded image differs slightly from the original. Because VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data-hiding process. The watermarks hidden in the image can be grayscale, bi-level, or color images; text can also be embedded as a watermark. To test the robustness of the system, we use Photoshop to apply sharpening, cropping, and alteration, and check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system resists these three kinds of tampering in general cases.
Keywords: data hiding, vector quantization, watermark, color image
Procedia PDF Downloads 364
24733 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: anomaly detection, autoencoder, data centers, deep learning
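To make the pipeline concrete, here is a minimal Python sketch of the pattern described above (one LSTM autoencoder per sensor, residual features, random forest); the window length, layer sizes, residual statistics, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

WINDOW, N_FEAT = 60, 1   # 60 time steps of one sensor channel (illustrative)

def build_autoencoder():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, N_FEAT)),
        tf.keras.layers.LSTM(32),                          # encoder
        tf.keras.layers.RepeatVector(WINDOW),
        tf.keras.layers.LSTM(32, return_sequences=True),   # decoder
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_FEAT)),
    ])

def residual_features(model, windows):
    """Summarise each window's difference signal (input minus reconstruction)."""
    err = np.abs(windows - model.predict(windows, verbose=0))
    return np.stack([err.mean((1, 2)), err.std((1, 2)), err.max((1, 2))], axis=1)

# Synthetic stand-ins for windowed sensor data; one autoencoder per sensor,
# trained on normal samples only.
rng = np.random.default_rng(0)
sensors = {s: rng.normal(size=(200, WINDOW, N_FEAT)).astype("float32")
           for s in ("temperature", "humidity", "power")}
models = {}
for name, normal in sensors.items():
    m = build_autoencoder()
    m.compile(optimizer="adam", loss="mse")
    m.fit(normal, normal, epochs=3, verbose=0)
    models[name] = m

# Concatenate per-sensor residual features, then classify with a random forest.
X = np.hstack([residual_features(models[s], w) for s, w in sensors.items()])
y = np.zeros(len(X)); y[:20] = 1        # placeholder labels for illustration
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```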
Procedia PDF Downloads 194
24732 Ultra-Wideband (45-50 GHz) mm-Wave Substrate Integrated Waveguide Cavity Slots Antenna for Future Satellite Communications
Authors: Najib Al-Fadhali, Huda Majid
Abstract:
In this article, a substrate integrated waveguide (SIW) cavity slot antenna was designed using a computer simulation technology software tool to address the specific design challenges that future satellite communications pose for millimeter-wave communications. Due to the symmetrical structure, a high-order mode is generated in the SIW, which yields high gain and high efficiency with a compact feed structure. The antenna has dimensions of 20 mm x 20 mm x 1.34 mm. The proposed antenna bandwidth ranges from 45 GHz to 50 GHz, covering Q-band applications such as satellite communication. Antenna efficiency is above 80% over the operational frequency range. The gain of the antenna is above 9 dB, with a peak value of 9.4 dB at 47.5 GHz. The proposed antenna is suitable for various millimeter-wave applications such as sensing, body imaging, indoor scenarios, new generations of wireless networks, and future satellite communications. The simulated results show that the SIW antenna resonates throughout the 45-50 GHz band, so the antenna covers all applications within this range. The reflection coefficients are below -10 dB over most of the 45-50 GHz range. The compactness, integrity, reliability, and performance at various operating frequencies make the proposed antenna a good candidate for future satellite communications.
Keywords: ultra-wideband, Q-band, SIW, mm-wave, satellite communications
Procedia PDF Downloads 84
24731 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, so the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, which leads to a variety of data-set formats, because there are no international standards specifying the formats of data sets in Open Data bases. Due to this variety of formats, we must build a data integration process that is able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar, and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, and wind speed. For the past two years, the government of Madrid has been publishing its Open Data bases of environmental indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published Open Data sets related to the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to run and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can start in a computationally better way. So the tool presented in this work has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, such as shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set about environmental data in Spain to any developer so that they can build their own applications.
Keywords: open data, R language, data integration, environmental data
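The adapter idea behind the integration process can be sketched briefly. The paper's implementations use Java/Oracle and R; the Python/pandas version below only illustrates mapping per-government formats onto one common schema, and every file name and column name is hypothetical.

```python
# Per-source adapters map each government's format onto one common schema.
import pandas as pd

COMMON_COLUMNS = ["timestamp", "station", "indicator", "value", "source"]

def load_madrid(path):
    # Assumed: semicolon-separated CSV with Spanish column names.
    df = pd.read_csv(path, sep=";")
    df = df.rename(columns={"fecha": "timestamp", "estacion": "station",
                            "magnitud": "indicator", "valor": "value"})
    df["source"] = "madrid"
    return df[COMMON_COLUMNS]

def load_bilbao(path):
    # Assumed: JSON records with English column names.
    df = pd.read_json(path)
    df = df.rename(columns={"date": "timestamp", "sensor": "station",
                            "measure": "indicator", "reading": "value"})
    df["source"] = "bilbao"
    return df[COMMON_COLUMNS]

# integrated = pd.concat([load_madrid("madrid.csv"), load_bilbao("bilbao.json")],
#                        ignore_index=True)
```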
Procedia PDF Downloads 315
24730 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Instance selection (IS) techniques are used to reduce the data size and thus improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: instance selection, data reduction, MapReduce, kNN
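For illustration, a sequential, unoptimized Python sketch of the FCNN rule follows (the paper's contribution, the MapReduce parallelization, is not shown): seed one prototype per class, then repeatedly add each prototype's nearest misclassified point until the subset is 1-NN consistent.

```python
import numpy as np

def fcnn_select(X, y):
    classes = np.unique(y)
    # Seed S with, per class, the training point nearest the class centroid.
    S = []
    for c in classes:
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        S.append(idx[np.argmin(((X[idx] - centroid) ** 2).sum(axis=1))])
    while True:
        # Distances from all points to the current prototypes.
        D = ((X[:, None, :] - X[S][None, :, :]) ** 2).sum(axis=2)
        nearest = np.argmin(D, axis=1)                  # nearest prototype index
        mis = np.where(y != y[np.array(S)][nearest])[0] # misclassified points
        mis = np.setdiff1d(mis, S)
        if len(mis) == 0:
            break
        # For each prototype, add its closest misclassified point, if any.
        for j in range(len(S)):
            cand = mis[nearest[mis] == j]
            if len(cand):
                S.append(cand[np.argmin(D[cand, j])])
    return np.array(S)

# Usage on toy data: two well-separated classes keep only a few instances.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.repeat([0, 1], 100)
subset = fcnn_select(X, y)
print(f"kept {len(subset)} of {len(X)} instances")
```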
Procedia PDF Downloads 253
24729 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world's data is generated by the financial industry, with global non-cash transactions estimated at 708.5 billion. And with Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product and enabling a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed on a decentralised blockchain contract, with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform's key features is enabling individuals to participate in the management of the personal data being generated about them. This framework was developed as a proof of concept on the Ethereum blockchain, in which an individual can securely manage access to their own personal data and to that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
Keywords: big data markets, open banking, blockchain, personal data management
Procedia PDF Downloads 73
24728 Experimental Evaluation of Succinct Ternary Tree
Authors: Dmitriy Kuptsov
Abstract:
Tree data structures, such as binary or, more generally, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations of these data structures can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In such cases, a more advanced representation of these data structures is essential. In this paper, we present the design of a compact version of the ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem. We compare these results with results for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree and, in certain configurations, shows performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
Keywords: algorithms, data structures, succinct ternary tree, performance evaluation
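For reference, below is a minimal pointer-based ternary search tree in Python, the baseline structure that a succinct representation would encode with bit vectors instead of per-node child pointers; it is a generic sketch, not the paper's compact design.

```python
class Node:
    __slots__ = ("ch", "lo", "eq", "hi", "terminal")
    def __init__(self, ch):
        self.ch, self.lo, self.eq, self.hi, self.terminal = ch, None, None, None, False

class TernaryTree:
    def __init__(self):
        self.root = None

    def insert(self, word):
        self.root = self._insert(self.root, word, 0)

    def _insert(self, node, word, i):
        ch = word[i]
        if node is None:
            node = Node(ch)
        if ch < node.ch:
            node.lo = self._insert(node.lo, word, i)
        elif ch > node.ch:
            node.hi = self._insert(node.hi, word, i)
        elif i + 1 < len(word):
            node.eq = self._insert(node.eq, word, i + 1)
        else:
            node.terminal = True          # marks end of a dictionary word
        return node

    def contains(self, word):
        node, i = self.root, 0
        while node is not None:
            if word[i] < node.ch:
                node = node.lo
            elif word[i] > node.ch:
                node = node.hi
            elif i + 1 == len(word):
                return node.terminal
            else:
                node, i = node.eq, i + 1
        return False

t = TernaryTree()
for w in ("cat", "cap", "car", "dog"):
    t.insert(w)
print(t.contains("car"), t.contains("cow"))  # True False
```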
Procedia PDF Downloads 160
24727 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
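The core idea, predicting an upper quantile of demand rather than the mean so the SLA holds without blanket over-provisioning, can be sketched with off-the-shelf quantile regression; the feature, synthetic data, and the 0.90 quantile below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic hourly CPU demand with a daily cycle plus noise.
rng = np.random.default_rng(0)
hours = rng.integers(0, 24, 2000)
load = 40 + 20 * np.sin(hours / 24 * 2 * np.pi) + rng.normal(0, 5, 2000)

X = hours.reshape(-1, 1)
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.90).fit(X, load)
mean = GradientBoostingRegressor(loss="squared_error").fit(X, load)

hour = np.array([[14]])
print(f"mean forecast:     {mean.predict(hour)[0]:.1f}% CPU")
print(f"90th-pct forecast: {q90.predict(hour)[0]:.1f}% CPU  <- allocate to this")
```

Allocating to the 90th-percentile forecast means demand exceeds the allocation in only about 10% of windows, a knob that can be tuned to the SLA instead of always provisioning for the worst case.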
Procedia PDF Downloads 108
24726 Culture Dimensions of Information Systems Security in Saudi Arabia National Health Services
Authors: Saleh Alumaran, Giampaolo Bella, Feng Chen
Abstract:
The study of organisations' information security cultures has attracted scholars as well as the healthcare services industry to research the topic and find appropriate tools and approaches to develop a positive culture. The vast majority of studies in the Saudi national health services concern the use of technology to protect and secure health services information. On the other hand, there is a lack of research on the role and impact of an organisation's cultural dimensions on information security. This research investigated and analysed the role and impact of cultural dimensions on information security in the Saudi Arabian health service. Hypotheses were tested, and two surveys were carried out to collect data and information from three major hospitals in Saudi Arabia (SA). The first survey identified the main cultural-dimension problems in SA health services and led to an initial information security culture framework model. The second survey evaluated and tested the developed framework model for usefulness, reliability, and applicability. The model is based on human behaviour theory, in which the individual's attitude is the key element of the individual's intention to behave as well as of his or her actual behaviour. The research identified six cultural dimensions: Saudi national culture, Saudi health service leadership, employees' trust, technology, multicultural interactions, and employees' job roles. The research also identified a set of cultural sub-dimensions. These include working values and norms, tribal values and norms, attitudes towards women, power sharing, vision, social interaction, respect and understanding, the hospital intranet, the language(s) used by hospital employees, multinational culture, communication systems, employees' job satisfaction, and job security. The research found that (a) human behaviour towards medical information in SA is one of the main threats to information security and one of the main challenges for the SA health authority; (b) the current information security cultures of SA hospitals fall short in protecting medical information, due to current values and norms towards information security; and (c) Saudi national culture and employees' job roles are the dimensions playing the largest roles in employees' attitudes, while technology is the least important dimension affecting employees' attitudes.
Keywords: cultural dimension, electronic health record, information security, privacy
Procedia PDF Downloads 351
24725 Prosperous Digital Image Watermarking Approach by Using DCT-DWT
Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar
Abstract:
Every day, tons of data are embedded in digital media or distributed over the internet. The data are so widely distributed that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
Keywords: watermarking, digital, DCT-DWT, security
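As a generic illustration of DWT-plus-DCT embedding (not the authors' exact scheme), the sketch below takes the LL sub-band of a one-level Haar DWT, applies a 2-D DCT, and nudges one mid-band coefficient per watermark bit; the strength alpha and coefficient positions are arbitrary choices, and extraction here is non-blind (it needs the original image).

```python
import numpy as np
import pywt
from scipy.fftpack import dct, idct

def dct2(a):  return dct(dct(a.T, norm="ortho").T, norm="ortho")
def idct2(a): return idct(idct(a.T, norm="ortho").T, norm="ortho")

def embed(image, bits, alpha=10.0):
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), "haar")
    C = dct2(LL)
    for k, b in enumerate(bits):          # one mid-band coefficient per bit
        C[5 + k, 5] += alpha if b else -alpha
    return pywt.idwt2((idct2(C), (LH, HL, HH)), "haar")

def extract(watermarked, original, n_bits):
    C_w = dct2(pywt.dwt2(watermarked.astype(float), "haar")[0])
    C_o = dct2(pywt.dwt2(original.astype(float), "haar")[0])
    return [(C_w[5 + k, 5] - C_o[5 + k, 5]) > 0 for k in range(n_bits)]

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
bits = [1, 0, 1, 1]
wm = embed(img, bits)
print(extract(wm, img, 4))  # -> [True, False, True, True]
```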
Procedia PDF Downloads 422
24724 Machine Learning Data Architecture
Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap
Abstract:
Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and the vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists' bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using terraform, cdk, cloudformation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning
Procedia PDF Downloads 64
24723 Assessment of Causes of Building Collapse in Nigeria
Authors: Olufemi Oyedele
Abstract:
Building collapse (BC) in Nigeria is becoming a regular occurrence, each incident causing great losses of lives and materials. A building collapse is a situation in which a building that has been completed and occupied, completed but not occupied, or still under construction collapses on its own, due to the action or inaction of man or to a natural event such as an earthquake, storm, flooding, tsunami, or wildfire. It is different from building demolition. There are various causes of building collapse, and each case requires expert judgment to determine the cause. The rate of building collapse reflects the level of organization and control of building activities and the degree of sophistication of the construction professionals in a country. This study used a case-study approach, examining the causes of six (6) collapsed buildings (CB) across Nigeria. Samples of materials from the sites of the collapsed buildings were taken for testing and analysis, while critical observations were made at the sites to note the condition of the ground (building base). The study found that the majority of building collapses in Nigeria were due to poor workmanship and sub-standard building materials, followed by bad building bases and poor design. The National Building Code 2006 is not effective due to lack of enforcement, and the Physical Development Departments of the states and the Federal Capital Territory are mere agents of corruption, allowing all types of construction without building approvals.
Keywords: building collapse, concrete tests, differential settlement, integrity test, quality control
Procedia PDF Downloads 535
24722 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, though studied for several decades, remains an active area of research. The goal of this research is to find correspondences between elements in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advances in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these data representations reduce the cost of the correct correspondence relative to other possible matches.
Keywords: colour data, local stereo matching, stereo correspondence, disparity map
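A matching cost of the kind discussed can be written in a few lines; the sketch below compares a sum-of-absolute-differences (SAD) cost under two representations, grayscale versus full RGB, on one hypothetical window pair.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two windows."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def to_gray(rgb):
    return rgb @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 weights

rng = np.random.default_rng(0)
left  = rng.integers(0, 256, (5, 5, 3))            # 5x5 RGB window, left image
right = left + rng.integers(-8, 9, left.shape)     # slightly perturbed match

cost_rgb  = sad(left, right)                       # cost over all three channels
cost_gray = sad(to_gray(left), to_gray(right))     # cost after collapsing colour
print(cost_rgb, cost_gray)
# A local disparity search evaluates this cost at each candidate offset and
# keeps the offset with the lowest cost.
```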
Procedia PDF Downloads 370
24721 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The recently decentralized data management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)
Procedia PDF Downloads 432
24720 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design
Authors: Qing K. Zhu
Abstract:
Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques, combined with HTML tables, to extract and represent critical timing and noise data. When we apply this data-mining tool in real applications, running speed is important; based on performance-testing results, the software employs table look-up techniques in its implementation to achieve reasonable running speed. We added several advanced features for application in one industry chip design.
Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise
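One way to realise the table look-up idea, assuming the reports expose their data as HTML tables as described above, is to parse each report once into a dictionary keyed by path name so repeated queries avoid re-scanning the report; the file name and column names below are hypothetical.

```python
import pandas as pd

def build_slack_lookup(report_path):
    tables = pd.read_html(report_path)   # every <table> element in the report
    timing = tables[0]                   # assume the first table lists paths
    # Assumed columns: "Path" and "Slack(ns)".
    return dict(zip(timing["Path"], timing["Slack(ns)"]))

# lookup = build_slack_lookup("timing_report.html")
# critical = {p, s for p, s in lookup.items() if s < 0}  # violating paths
```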
Procedia PDF Downloads 254
24719 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations
Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe
Abstract:
In its simplest form, data quality can be defined as 'fitness for use'; it is a multi-dimensional concept. Emergency Departments (EDs) require information to treat patients and, on the other hand, are the primary source of information regarding accidents, injuries, emergencies, etc. The ED is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an e-health solution. The study consisted of three components. A research study was conducted to assess the quality of data in relation to five selected dimensions of data quality, namely accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). Post-intervention assessment confirmed that all five dimensions of data quality had improved. The most significant improvements were observed in the accuracy and timeliness dimensions.
Keywords: electronic health records, electronic emergency department information system, emergency department, data quality
Procedia PDF Downloads 274
24718 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset
Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba
Abstract:
We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process these data. We analyze the involvement and relationships of the different variables of each parameter for the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver it undertook (overtaking or falling back).
Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process
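A Python/pandas equivalent of this kind of analysis might look as follows; the column names (frame, id, laneId, xVelocity, precedingId) follow the HighD tracks files, but the file path is hypothetical and the snippet is a sketch, not the authors' R code.

```python
import pandas as pd

tracks = pd.read_csv("01_tracks.csv").sort_values(["id", "frame"])  # hypothetical path

# A lane change is a frame where a vehicle's laneId differs from its previous one.
tracks["laneChange"] = tracks.groupby("id")["laneId"].diff().fillna(0).ne(0)
events = tracks[tracks["laneChange"]].copy()

# Speed difference to the preceding vehicle at the moment of the change.
lead_speed = tracks.set_index(["frame", "id"])["xVelocity"]
keys = list(zip(events["frame"], events["precedingId"]))
events["dvLead"] = events["xVelocity"].values - lead_speed.reindex(keys).values

print(events["dvLead"].describe())   # distribution of speed gaps before changes
```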
Procedia PDF Downloads 261
24717 Lessons Learned from Implementation of Remote Pregnant and Newborn Care Service for Vulnerable Women and Children During COVID-19 and Political Crisis in Myanmar
Authors: Wint Wint Thu, Htet Ko Ko Win, Myat Mon San, Zaw Lin Tun, Nandar Than Aye, Khin Nyein Myat, Hayman Nyo Oo, Nay Aung Lin, Kusum Thapa, Kyaw Htet Aung
Abstract:
Background: In Myanmar, intense political instability began in February 2021 while COVID-19 pandemic waves were also threatening the public health system, leading to a severe health sector crisis, including difficulties in accessing maternal and newborn health (MNH) care for vulnerable women and children. The Remote Pregnant and Newborn Care (RPNC) service uses a telehealth approach under the United States Agency for International Development (USAID)-funded Essential Health Project. Implementation: The RPNC service was adapted to the MNCH needs of vulnerable pregnant women and was implemented in Yangon, Myanmar, to mitigate the risk of limited access to essential, quality MNH care. The project trained 13 service providers on a telehealth care package for pregnancy and newborn care developed by Jhpiego, to ensure understanding of evidence-based MNCH care practices. The phone numbers of pregnant women were gathered through pre-existing, functioning community volunteers, who reach the most vulnerable pregnant women in the project's targeted area. A total of 212 pregnant women were reached by service providers for RPNC during the implementation period. The trained service providers offer quality antenatal and postnatal care, including newborn care, via telephone calls. This includes 24/7 incoming calls and time-allotted outgoing calls to the pregnant women during the antenatal and postnatal periods, including for newborn care. The required data were collected daily alongside the calls, and the quality of the medical services is assured by tracking the calls, while maintaining data privacy and patient confidentiality. Lessons learned: The key lessons are: 1) cost-effectiveness: RPNC could reduce pregnant women's out-of-pocket expenditure, as a telehealth call costs only 1.6 United States dollars (USD), while one in-person visit at a private provider costs 8 to 10 USD, including transportation; 2) network of care: telehealth calls cannot replace in-person antenatal and postnatal care services, and integrating telehealth calls with in-person care by local healthcare providers, with the support of the community, is crucial for accessibility to essential MNH services by poor and vulnerable women; and 3) sharing information on health access points: most women appear to face financial barriers in accessing private health facilities while the public health system has collapsed, and telehealth calls can provide information on low-cost facilities and connect women to relevant health facilities. These key lessons are important for future efforts to implement remote pregnancy and newborn care in Myanmar, especially during the political crisis and the COVID-19 pandemic.
Keywords: telehealth, accessibility, maternal care, newborn care
Procedia PDF Downloads 101
24716 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator
Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain
Abstract:
Objective: The main purpose of this study is to compare the percent depth doses (PDDs) and in-plane and cross-plane profiles of the Varian golden beam data to the measured data for 6 and 18 MV photons, for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields, for different field sizes and at different depths, were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a semiflex ion chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate their accuracy for commissioning the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. For PDDs, the deviation increases at deeper depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
Keywords: percent depth dose, flatness, symmetry, golden beam data
Procedia PDF Downloads 489
24715 Variable-Fidelity Surrogate Modelling with Kriging
Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans
Abstract:
Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with a different computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients
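A stripped-down, two-fidelity sketch in the spirit of recursive CoKriging (with the paper's gradient enhancement omitted) is shown below: fit a Gaussian process to plentiful low-fidelity data, fit a second one to the high-fidelity residuals, and stack the predictions; the test functions and kernel length-scale are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_lo(x): return 0.5 * np.sin(8 * x) + 0.1 * x      # cheap, biased model
def f_hi(x): return np.sin(8 * x) + x                  # expensive "truth"

X_lo = np.linspace(0, 1, 25)[:, None]                  # many cheap samples
X_hi = np.linspace(0, 1, 5)[:, None]                   # few expensive samples

gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(X_lo, f_lo(X_lo[:, 0]))
resid = f_hi(X_hi[:, 0]) - gp_lo.predict(X_hi)         # high- minus low-fidelity
gp_d  = GaussianProcessRegressor(RBF(0.2)).fit(X_hi, resid)

X = np.linspace(0, 1, 7)[:, None]
y_vf = gp_lo.predict(X) + gp_d.predict(X)              # variable-fidelity prediction
print(np.abs(y_vf - f_hi(X[:, 0])).max())              # error vs. the true function
```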
Procedia PDF Downloads 558
24714 Robust Barcode Detection with Synthetic-to-Real Data Augmentation
Authors: Xiaoyan Dai, Hsieh Yisan
Abstract:
Barcode processing in captured images is a major challenge, as different shooting conditions can result in very different barcode appearances. This paper proposes deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes to generate a large variety of data close to actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode 'scan' and is applicable to real-time applications.
Keywords: barcode detection, data augmentation, deep learning, image-based processing
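A minimal synthetic-to-real augmentation step might look as follows: warp, blur, and re-light a clean synthetic barcode so it resembles captured images; the parameter ranges are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import cv2

def augment(barcode):
    h, w = barcode.shape[:2]
    rng = np.random.default_rng()
    # Random perspective: jitter the four corners of the image.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = (src + rng.uniform(-0.08, 0.08, (4, 2)) * [w, h]).astype(np.float32)
    img = cv2.warpPerspective(barcode, cv2.getPerspectiveTransform(src, dst), (w, h))
    # Defocus-like blur and a global illumination change.
    img = cv2.GaussianBlur(img, (5, 5), sigmaX=rng.uniform(0.5, 2.0))
    img = cv2.convertScaleAbs(img, alpha=rng.uniform(0.7, 1.3),
                              beta=rng.uniform(-30, 30))
    return img

# Toy "barcode": vertical stripes, augmented into 16 training variations.
bar = np.tile(np.uint8([0, 255]).repeat(4), (64, 8))
samples = [augment(bar) for _ in range(16)]
```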
Procedia PDF Downloads 168
24713 Human LACE1 Functions Pro-Apoptotic and Interacts with Mitochondrial YME1L Protease
Authors: Lukas Stiburek, Jana Cesnekova, Josef Houstek, Jiri Zeman
Abstract:
Cellular function depends on mitochondrial function and integrity, which is maintained by several classes of proteins possessing chaperone and/or proteolytic activities. In this work, we focused on characterizing the function of LACE1 (lactation elevated 1) in the maintenance of mitochondrial protein homeostasis. LACE1 is the human homologue of the yeast mitochondrial Afg1 ATPase, a member of the SEC18-NSF, PAS1, CDC48-VCP, TBP family. Yeast Afg1 was shown to be involved in mitochondrial complex IV biogenesis, and based on its similarity with CDC48 (p97/VCP), it was suggested to facilitate the extraction of polytopic membrane proteins. Here we show that LACE1, an integral membrane protein of mitochondria, exists as part of three complexes of approximately 140, 400, and 500 kDa and is essential for maintenance of a fused mitochondrial reticulum and lamellar cristae morphology. Using affinity purification of LACE1-FLAG expressed in a LACE1-knockdown background, we show that the protein physically interacts with the mitochondrial inner membrane protease YME1L. We further show that human LACE1 exhibits significant pro-apoptotic activity and that the protein is required for normal function of the mitochondrial respiratory chain. Our work thus establishes LACE1 as a novel factor with a crucial role in mitochondrial homeostasis.
Keywords: LACE1, mitochondria, apoptosis, protease
Procedia PDF Downloads 313
24712 Development of a Device for Detecting Fluids in the Esophagus
Authors: F. J. Puertas, M. Castro, A. Tebar, P. J. Fito, R. Gadea, J. M. Monzó, R. J. Colom
Abstract:
There is a great diversity of diseases, generally of a digestive nature, that affect the integrity of the walls of the esophagus. Among them, gastroesophageal reflux is a common disease in the general population, affecting the patient's quality of life; however, there are still unmet diagnostic and therapeutic needs. The consequences of untreated or asymptomatic acid reflux on the esophageal mucosa are not only pain, heartburn, and acid regurgitation but also an increased risk of esophageal cancer. Currently, the diagnostic methods for detecting problems in the esophageal tract are invasive and unpleasant, as 24-hour impedance-pH monitoring forces the patient to endure hours of discomfort to obtain a correct diagnosis. In this work, we propose the development of a sensor able to measure at depth, allowing the detection of liquids circulating in the esophageal tract. The multisensor detection system is based on radiofrequency photospectrometry. Experimentally, volunteers representative of the population in terms of sex and age were recruited, with the sensors placed between the trachea and the diaphragm, and the measurements were analysed in vacuum, water, orange juice, and saline media. The results obtained allowed us to detect the appearance of different liquid media in the esophagus and to distinguish them based on their ionic content.
Keywords: bioimpedance, dielectric spectroscopy, gastroesophageal reflux, GERD
Procedia PDF Downloads 101
24711 Analysis of Delivery of Quad Play Services
Authors: Rahul Malhotra, Anurag Sharma
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network. This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases, due to an increase in bit error rate.
Keywords: FTTH, quad play, play service, access networks, data rate
Procedia PDF Downloads 415
24710 Smart-Textile Containers for Urban Mobility
Authors: René Vieroth, Christian Dils, M. V. Krshiwoblozki, Christine Kallmayer, Martin Schneider-Ramelow, Klaus-Dieter Lang
Abstract:
Green urban mobility, in commercial and private contexts, is one of the great challenges for the continuously growing cities all over the world. Bicycle-based solutions have long been key to success, and modern developments like e-bikes and high-end cargo bikes complement the portfolio. Weight, aerodynamic drag, and security for the transported goods are the key factors for working solutions. Recent achievements in the field of smart textiles have allowed the creation of a totally new generation of intelligent textile cargo containers that fulfill these demands. The fusion of technical textiles, design, and electrical engineering has made it possible to create an ecological solution that is very close to becoming a product. This paper presents all the details of this solution, which includes a specially developed sensor textile for cut detection, a protective textile layer for intrusion prevention, a universal charging unit for energy harvesting from diverse sources, and a low-energy alarm system with GSM/GPRS connection, GPS location, and an RFID interface.
Keywords: cargo-bike, cut-detection, e-bike, energy-harvesting, green urban mobility, logistics, smart-textiles, textile-integrity sensor
Procedia PDF Downloads 315
24709 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
The widespread interest in 'Industry 4.0', or 'digital manufacturing', has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead, in terms of network bandwidth/cost and processing time of machine tool data, via efficient decisions on which datasets should be processed at the 'edge' and which should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
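A fast-and-frugal heuristic of the kind mentioned is an ordered sequence of yes/no cues, any of which can decide immediately; the sketch below is a hypothetical instance (the cue names and thresholds are invented for illustration, not taken from the framework).

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    latency_critical: bool     # needed for real-time machine control?
    size_mb: float             # volume per acquisition window
    bandwidth_cost: float      # relative cost to transmit to the cloud

def place(d: Dataset) -> str:
    if d.latency_critical:          # cue 1: real-time need -> decide now
        return "edge"
    if d.size_mb > 500:             # cue 2: too bulky to ship raw
        return "edge (pre-aggregate, send summary)"
    if d.bandwidth_cost > 1.0:      # cue 3: expensive link
        return "edge"
    return "cloud"                  # default: central processing

print(place(Dataset(latency_critical=False, size_mb=20, bandwidth_cost=0.2)))
# -> cloud
```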
Procedia PDF Downloads 182
24708 Assessing Sexual and Reproductive Health Literacy and Engagement Among Refugee and Immigrant Women in Massachusetts: A Qualitative Community-Based Study
Authors: Leen Al Kassab, Sarah Johns, Helen Noble, Nawal Nour, Elizabeth Janiak, Sarrah Shahawy
Abstract:
Introduction: Immigrant and refugee women experience disparities in sexual and reproductive health (SRH) outcomes, partially as a result of barriers to SRH literacy and to regular healthcare access and engagement. Despite existing data highlighting growing needs for culturally relevant and structurally competent care, interventions are scarce and not well documented. Methods: In this IRB-approved study, we used a community-based participatory research approach, with the assistance of a community advisory board, to conduct a qualitative needs assessment of SRH knowledge and service engagement with immigrant and refugee women from Africa or the Middle East currently residing in Boston. We conducted a total of nine focus group discussions (FGDs), in partnership with medical, community, and religious centers, in six languages: Arabic, English, French, Somali, Pashtu, and Dari. A total of 44 individuals participated. We explored migrant and refugee women's current and evolving SRH care needs and gaps, specifically in relation to the development of interventions and clinical best practices targeting SRH literacy, healthcare engagement, and informed decision-making. Recordings of the FGDs were transcribed verbatim and translated by interpreter services. We used open coding with multiple coders, who resolved discrepancies through consensus and iteratively refined our codebook while coding data in batches using Dedoose software. Results: Participants reported immigrant adaptation experiences, discrimination, and feelings of trust, autonomy, privacy, and connectedness to family, community, and the healthcare system as factors shaping SRH knowledge and needs. Previously learned SRH knowledge was commonly acquired in school, at menstruation, before marriage, from family members, partners, and friends, and from online search engines. Common themes included the empowering strength drawn from religious and cultural communities, difficulties bridging educational gaps with their US-born daughters, and a desire for more SRH education from multiple sources, including family, healthcare providers, and religious experts and communities. Regarding further SRH education, participants' preferences varied regarding the ideal platform (virtual vs. in-person), location (in religious and community centers or not), smaller group sizes, and the involvement of men. Conclusions: Based on these results, empowering SRH initiatives should include community- and religious-center-based as well as clinic-based interventions. Interventions should consist of frequent educational workshops in small groups involving age-grouped women, their daughters, and (sometimes) men, tailored SRH messaging, and the promotion of culturally, religiously, and linguistically competent care.
Keywords: community, immigrant, religion, sexual & reproductive health, women's health
Procedia PDF Downloads 127
24707 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization
Authors: Hironori Karachi, Haruka Yamashita
Abstract:
Quick response (QR) code payment systems have recently become popular. Many companies introduce new QR code payment services, and the services compete with each other to increase their number of users. To increase the number of users, we should grasp how the demographic information, usage information, and value of users differ between services. In this study, we analyse real-world data provided by Nomura Research Institute, including the demographic data of users and information on users' usage of two services: LINE Pay and PayPay. For analysing such data and interpreting their features, Non-negative Matrix Factorization (NMF) is widely used; however, the target data suffer from missing values. We use EM-algorithm NMF (EMNMF), which completes unknown values, to understand the features of the given data presented in matrix form. Moreover, to compare the results of NMF analyses of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF and analyse the target data. As an interpretation, we show the differences in user features between LINE Pay and PayPay.
Keywords: data science, non-negative matrix factorization, missing data, quality of services
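One simple way to realise NMF with missing entries, in the spirit of (though not necessarily identical to) the EMNMF described above, is to mask the unknown entries out of the multiplicative updates:

```python
import numpy as np

def masked_nmf(X, M, rank, iters=500, eps=1e-9):
    """NMF where M (0/1 mask) marks observed entries; unknowns are ignored."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        WH = W @ H
        W *= ((M * X) @ H.T) / ((M * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ (M * X)) / (W.T @ (M * WH) + eps)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 6)) @ rng.random((6, 15))    # true rank-6 data
M = (rng.random(X.shape) > 0.3).astype(float)    # ~30% of entries missing
W, H = masked_nmf(X, M, rank=6)
print(np.abs(M * (X - W @ H)).mean())            # fit error on observed entries
```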
Procedia PDF Downloads 131
24706 Developing Guidelines for Public Health Nurse Data Management and Use in Public Health Emergencies
Authors: Margaret S. Wright
Abstract:
Background/Significance: During many recent public health emergencies and disasters, public health nursing data have been missing or delayed, potentially impacting decision-making and response. Data used as evidence for decision-making in response, planning, and mitigation have been erratic and slow to arrive, decreasing the ability to respond. Methodology: Applying best practices in data management and data use in public health settings, and guided by the concepts outlined in 'Disaster Standards of Care' models, leads to the development of recommendations for a model of best practices in data management and use by public health nurses in public health disasters/emergencies. As the 'patient' in public health disasters/emergencies is the community (local, regional, or national), guidelines for patient documentation are incorporated in the recommendations. Findings: Using this model, public health nurses could better plan how to prepare for, respond to, and mitigate disasters in their communities, and better participate in decision-making in all three phases, bringing public health nursing data to the discussion as part of the evidence base for decision-making.
Keywords: data management, decision making, disaster planning, documentation, public health nursing
Procedia PDF Downloads 221
24705 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data
Authors: Sachin Nagargoje
Abstract:
Completely labeled data is often difficult to obtain in practical scenarios, and even when one manages to obtain the data, its quality is always in question. In the shopping vertical, offers are the input data; they are provided by advertisers, with or without good-quality information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with a badly written offer title or only partial product details) in the shopping vertical domain. The author found that the semi-supervised learning method improved recall in the Smart Phone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. This also produced a significant increase in revenue, but that figure cannot be publicly disclosed.
Keywords: semi-supervised learning, clustering, recall, coverage
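A minimal self-training setup, one common "embarrassingly simple" semi-supervised approach, can be sketched with scikit-learn's SelfTrainingClassifier; the offer titles, labels, and confidence threshold below are invented for illustration and do not reflect the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import SelfTrainingClassifier

titles = [
    "Samsung Galaxy S21 128GB Phantom Gray",   # healthy
    "phone gd cheap!!",                        # unhealthy
    "Apple iPhone 13 Pro 256GB Sierra Blue",   # healthy
    "BUY NOW best",                            # unhealthy
    "Google Pixel 6 128GB Stormy Black",       # unlabeled
    "cheap phone sale wow",                    # unlabeled
]
y = np.array([0, 1, 0, 1, -1, -1])             # -1 marks unlabeled offers

X = TfidfVectorizer().fit_transform(titles)
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.7)
clf.fit(X, y)                                  # pseudo-labels confident offers
print(clf.predict(X[4:]))                      # predictions for unlabeled rows
```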
Procedia PDF Downloads 122