Search results for: secure data aggregation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25897

24877 Li-Fi Technology: Data Transmission through Visible Light

Authors: Shahzad Hassan, Kamran Saeed

Abstract:

People are always in search of Wi-Fi hotspots because Internet access is a major demand nowadays. But like all other technologies, there is still room for improvement in Wi-Fi with regard to the speed and quality of connectivity. To address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what we now know as Li-Fi (Light Fidelity). Li-Fi is a new wireless communication technology that provides connectivity within a network environment. It is a two-way mode of wireless communication using light: data are transmitted through Light Emitting Diodes, which can vary the intensity of light very fast, faster than the blink of an eye. The research and experiments conducted so far indicate that Li-Fi can increase the speed and reliability of data transfer. This paper pays particular attention to assessing the performance of this technology. Often described as a 5G-era technology, it uses LEDs as the medium of data transfer. For coverage within buildings Wi-Fi remains adequate, but Li-Fi is favorable in situations where large amounts of data must be transferred in areas with electromagnetic interference. It brings efficiency, security, and large throughput to the table of wireless communication. All in all, Li-Fi points to a future in which the presence of light will mean access to the Internet as well as speedy data transfer.
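
The simplest way an LED can carry bits is on-off keying (light on = 1, light off = 0), flipped faster than the eye can perceive. The sketch below illustrates that encoding; the function names and framing are illustrative, not from the paper.

```python
# Minimal on-off keying (OOK) sketch: each byte becomes 8 LED states, MSB first.

def encode_ook(data: bytes):
    """Map each byte to 8 light states (1 = LED on, 0 = LED off)."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def decode_ook(states):
    """Reassemble 8 light states per byte back into the original data."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for s in states[i:i + 8]:
            byte = (byte << 1) | s
        out.append(byte)
    return bytes(out)

msg = b"Li-Fi"
assert decode_ook(encode_ook(msg)) == msg
```

A real Li-Fi link would add clock recovery, error correction, and far denser modulation schemes on top of this idea.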

Keywords: communication, LED, Li-Fi, Wi-Fi

Procedia PDF Downloads 347
24876 Lightweight Cryptographically Generated Address for IPv6 Neighbor Discovery

Authors: Amjed Sid Ahmed, Rosilah Hassan, Nor Effendy Othman

Abstract:

The limitations of the Internet Protocol version 4 (IPv4) necessitated the development of the Internetworking Protocol next generation (IPng) to overcome its challenges. IPng is also referred to as the Internet Protocol version 6 (IPv6) and includes the Neighbor Discovery Protocol (NDP). NDP performs Address Auto-configuration, Router Discovery (RD), and Neighbor Discovery (ND); its role further entails redirecting services, detecting duplicate addresses, and detecting unreachable neighbors. Although NDP assumes that the nodes on a link trust one another, several crucial attacks can target the protocol. The Internet Engineering Task Force (IETF) has therefore recommended implementing the Secure Neighbor Discovery (SEND) protocol to tackle safety issues in NDP. SEND is mainly used for validating address ownership, inhibiting malicious responses, and certifying routers. For these tasks, SEND relies on the following options: Cryptographically Generated Address (CGA), RSA Signature, Nonce, and Timestamp. CGA generation is computationally expensive, which is the most notable disadvantage of SEND. In this paper, a clear description of the constituents of CGA and its operation is given, along with recommendations for improving its generation.
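
To make the CGA cost concrete, here is a greatly simplified sketch in the spirit of RFC 3972: the IPv6 interface identifier is derived from a hash over the owner's public key, a random modifier, and the subnet prefix. A real CGA additionally iterates a second hash (Hash2) until it satisfies the Sec security parameter, and that brute-force loop is the cost the abstract refers to; field handling here is illustrative only.

```python
# Simplified CGA-style interface identifier derivation (not full RFC 3972:
# the Sec-parameter Hash2 search and encoding details are omitted).

import hashlib
import os

def generate_cga_interface_id(public_key: bytes, subnet_prefix: bytes) -> bytes:
    modifier = os.urandom(16)          # random 128-bit modifier
    collision_count = b"\x00"          # incremented on duplicate-address collisions
    digest = hashlib.sha1(
        modifier + subnet_prefix + collision_count + public_key
    ).digest()
    iid = bytearray(digest[:8])        # leftmost 64 bits become the interface ID
    iid[0] &= 0b11111100               # clear the "u" and "g" bits, as CGA requires
    return bytes(iid)

iid = generate_cga_interface_id(b"example-public-key", b"\x20\x01\x0d\xb8" + b"\x00" * 4)
assert len(iid) == 8
```

Because the Hash2 search multiplies this work by roughly 2^(16*Sec), lightweight CGA proposals like the one reviewed in this paper target exactly that generation step.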

Keywords: CGA, IPv6, NDP, SEND

Procedia PDF Downloads 385
24875 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem

Authors: Renata Kurpiewska-Korbut

Abstract:

On the assumption that using and sharing data generated in humanitarian action is a core function of humanitarian organizations, this paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February 2022. Expert interviews and document analysis of non-profit organizations providing a direct response in the Ukrainian crisis context (the Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid), together with the theoretical perspective of contingency theory (whose central point is that the context, a specific set of conditions, determines behavior and the choice of methods of action), help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established, accurate internal procedures and good practices for using and sharing data (including safeguards for sensitive data) by the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses the fundamental question of whether this crisis experience will have a determining effect on their future performance. The obtained findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem, despite their attempts at interagency work in the area of data sharing.

Keywords: humanitarian data ecosystem, humanitarian data management, Polish NGOs, Ukraine

Procedia PDF Downloads 93
24874 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of within-subject units of analysis, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, extrapolation of data can be thought of as extending information and conclusions from one estimand to another. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly accepted by regulatory agencies as a means of generating evidence in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators be constructed using weighting schemes for the clusters, e.g., clusters weighted equally or with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
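
The two cluster-weighting schemes mentioned above can be sketched as follows for a simple overall mean; the data are invented for illustration, and a real analysis would embed these weights in the mixed model rather than a plain average.

```python
# Overall mean with clusters weighted equally vs. proportionally to cluster size.

def cluster_weighted_mean(clusters, proportional=False):
    """clusters: list of lists of observations (e.g., per-subject episode counts)."""
    means = [sum(c) / len(c) for c in clusters]
    if proportional:
        total = sum(len(c) for c in clusters)
        weights = [len(c) / total for c in clusters]   # big clusters count more
    else:
        weights = [1 / len(clusters)] * len(clusters)  # every cluster counts equally
    return sum(w * m for w, m in zip(weights, means))

clusters = [[2, 4], [1, 1, 1, 5], [3]]   # three clusters of unequal size
equal = cluster_weighted_mean(clusters)                      # 8/3 ≈ 2.667
prop = cluster_weighted_mean(clusters, proportional=True)    # 17/7 ≈ 2.429
```

With unequal cluster sizes the two estimators diverge, which is precisely why the choice of weighting scheme matters for inference in small, clustered rare-disease samples.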

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 137
24873 Smart Side View Mirror Camera for Real Time System

Authors: Nunziata Ivana Guarneri, Arcangelo Bruna, Giuseppe Spampinato, Antonio Buemi

Abstract:

In the last decade, automotive companies have invested heavily in innovation for automatic driver-assistance systems. One such innovation is a smart camera placed on the car’s side mirror to monitor the rear and lateral road situation. A common road scenario is overtaking the preceding car; a brief distraction or loss of concentration can lead the driver to undertake this maneuver even when another vehicle is already overtaking, leading to serious accidents. A valid support for secure driving is a smart camera system that automatically analyzes the road scenario and consequently warns the driver when another vehicle is overtaking. This paper describes a method for monitoring the side view of a vehicle using camera optical-flow motion vectors. The proposed solution detects incoming vehicles, estimates their distance from the host car, and warns the driver through different levels of alert according to the estimated distance. Thanks to its low complexity and computational cost, the proposed system achieves real-time performance.
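
The graded-alert policy can be sketched with the standard pinhole-camera relation distance ≈ f·W / w (focal length in pixels, assumed real vehicle width, width in the image). The thresholds and the width-based distance estimate are illustrative assumptions, not the paper's calibration.

```python
# Distance estimate from a detected vehicle's apparent width (pinhole model),
# mapped onto graded alert levels. Thresholds are invented for illustration.

def estimate_distance_m(focal_px: float, real_width_m: float, width_px: float) -> float:
    return focal_px * real_width_m / width_px

def alert_level(distance_m: float) -> str:
    if distance_m < 5.0:
        return "CRITICAL"   # vehicle alongside or nearly so
    if distance_m < 15.0:
        return "WARNING"    # vehicle closing fast
    return "CLEAR"

d = estimate_distance_m(focal_px=800.0, real_width_m=1.8, width_px=120.0)  # 12.0 m
assert alert_level(d) == "WARNING"
```

The paper itself derives distance from optical-flow motion vectors rather than apparent width; the mapping from estimated distance to alert level is the part this sketch illustrates.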

Keywords: camera calibration, ego-motion, Kalman filters, object tracking, real time systems

Procedia PDF Downloads 229
24872 Factors Affecting Consumers’ Online Shopping Behavior in Vietnam during the COVID-19 Pandemic: A Case Study of Tiki

Authors: Thi Hai Anh Nguyen, Pantea Aria

Abstract:

Tiki is one of the leading e-commerce companies in Vietnam. Since the beginning of 2020, COVID-19 has been spreading around the world; during this pandemic, the Tiki platform has gained strengths but also faced threats, and customer behavior was expected to change. The aims of the investigation are (1) identifying the factors affecting the online consumer behavior of Tiki customers in Ho Chi Minh City, Vietnam, (2) measuring the level of impact of these factors, and (3) making recommendations for Tiki to improve its business strategy for the next stage. This research studied eight factors and collected 378 online surveys for analysis. Analysis with SPSS software identified five factors, including product, price, reliability, and web design, as positively influencing customer behavior; the COVID-19 factor does not significantly impact Tiki’s customer behavior. The research also conducted qualitative interviews to understand shopping experiences and customers’ expectations. One of the interviews’ main points is that Tiki’s customers have high trust in the Tiki brand and its high-quality products. Based on the results, the Tiki corporation should secure its core value, and its employees and logistics systems should be well trained and optimized to improve customer experiences.

Keywords: COVID-19, e-commerce, impact, pandemic, Vietnam

Procedia PDF Downloads 165
24871 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted, continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems have been developed and built for various high/low data rate information exchanges, such as TDRSS of the USA and EDRSS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system for Turkey is defined for exchanging low data rate information (i.e., TTC) with Earth-observing LEO satellites by appointing commercial GEO communication satellites all over the world. First, a justification of this attempt is given, demonstrating duration enhancements in the link, and the preference for RF communication over laser communication is discussed. Then, the preferred communication GEOs, including TURKSAT4A, which already belongs to Turkey, are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.

Keywords: communication, GEO satellite, data relay system, coverage

Procedia PDF Downloads 442
24870 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product

Authors: Tanawat Hongthai, Dusit Thanapatay

Abstract:

This paper presents the development of encrypted near field communication (NFC) Data Exchange Format transmission in an NFC passive tag, to assess the feasibility of implementing genuine-product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC Data Exchange Format (NDEF), whose data can be automatically and partially updated when an NFC field is present.
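
For orientation, here is a minimal sketch of building a short NDEF text record, the carrier format the abstract refers to, before any encryption layer is applied. It follows the NDEF short-record layout (header byte, type length, payload length, type field, payload); the sample payload text is invented.

```python
# Minimal NDEF "well-known text" record in short-record form.

def ndef_text_record(text: str, lang: str = "en") -> bytes:
    # Text-record payload: status byte (language-code length), language code, UTF-8 text.
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    header = 0xD1  # MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known type)
    return bytes([header, 1, len(payload)]) + b"T" + payload

record = ndef_text_record("GENUINE")       # hypothetical authenticity payload
assert record[0] == 0xD1 and record[3:4] == b"T"
```

In the scheme the paper studies, a payload like this would be encrypted before being written to the Type 2 tag, so a cloned tag cannot simply replay a plausible plaintext record.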

Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC

Procedia PDF Downloads 281
24869 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can spread anywhere in the world quickly. Digital data can also be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data, though embedding the watermark certainly influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to achieve data hiding. This kind of watermarking is invisible, meaning that users will not be aware of the embedded watermark even though the embedded image differs slightly from the original. Because VQ carries a heavy computational burden, we adopt a fast VQ encoding scheme based on partial distortion searching (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level, or color images; text can also be embedded as a watermark. To test the robustness of the system, we used Photoshop to sharpen, crop, and alter the image and then checked whether the extracted watermark was still recognizable. Experimental results demonstrate that the proposed system can resist these three kinds of tampering in general cases.
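
The PDS speed-up mentioned above can be sketched directly: while accumulating the squared error of an input vector against a codeword, abort the accumulation as soon as the running sum exceeds the best distortion found so far. The codebook values below are invented for illustration.

```python
# Partial distortion search (PDS): early-exit nearest-codeword search for VQ.

def pds_nearest_codeword(vector, codebook):
    best_idx, best_dist = 0, float("inf")
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for v, c in zip(vector, codeword):
            dist += (v - c) ** 2
            if dist >= best_dist:   # partial distortion already too large: skip
                break
        else:                       # loop completed: full distortion was computed
            best_idx, best_dist = idx, dist
    return best_idx

codebook = [[0, 0, 0, 0], [10, 10, 10, 10], [5, 5, 5, 5]]
assert pds_nearest_codeword([4, 6, 5, 5], codebook) == 2
```

The result is identical to a full search; only the wasted distance computations are skipped, which is what makes the data-hiding encoder practical.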

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 366
24868 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained on the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples via feature extraction and a random forest classifier. Data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while data between June 2020 and May 2021 are used to assess it. Performance is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model, with an F1-score of 83.60% compared to 24.16%, outperforms the state-of-the-art reconstruction method, which uses a single autoencoder over multivariate sequences and flags an anomaly with a threshold on the reconstruction error.
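
The per-sensor reconstruction pipeline can be sketched structurally as follows. The identity-like "reconstructor" is a stand-in for the trained LSTM autoencoders, and a fixed threshold stands in for the random forest classifier; values and thresholds are invented.

```python
# Structural sketch of reconstruction-based anomaly detection:
# reconstruct the input, extract residual features, then classify.

def reconstruct(window, baseline):
    """Stand-in for a trained per-sensor autoencoder: predicts the normal baseline."""
    return [baseline] * len(window)

def residual_features(window, baseline):
    residual = [abs(x - r) for x, r in zip(window, reconstruct(window, baseline))]
    return {"mean_err": sum(residual) / len(residual), "max_err": max(residual)}

def is_anomalous(window, baseline, threshold):
    # The paper feeds residual features to a random forest; a threshold stands in here.
    return residual_features(window, baseline)["max_err"] > threshold

normal = [21.0, 21.2, 20.9, 21.1]   # e.g., temperature readings (°C)
hot = [21.0, 24.8, 25.3, 25.9]
assert not is_anomalous(normal, baseline=21.0, threshold=1.0)
assert is_anomalous(hot, baseline=21.0, threshold=1.0)
```

The paper's gain over the single multivariate autoencoder comes from exactly this separation: each sensor's residual is modeled on its own, and the fusion happens in the classifier rather than in one shared reconstruction error.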

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 194
24867 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which means integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, so the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, which causes a variety of data set formats, because there are no international standards specifying them. Due to this variety of formats, we must build a data integration process able to bring together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, and wind speed. For the last two years, the government of Madrid has been publishing its Open Data bases on environmental indicators in real time. Likewise, other governments (such as Andalucia or Bilbao) have published Open Data sets on the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government share the same format, and the analysis process can proceed in a computationally better way. The tool presented in this work thus has two goals: 1. the integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed with Java and Oracle, and the graphic and analytic interface with Java (JSP). In order to open up our software tool, as a second approach, we also developed an implementation in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are R libraries, such as Shiny, for building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set on environmental data in Spain to any developer so that they can build their own applications.

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 315
24866 Israel versus Palestine: Politological and Depth-Psychological Aspects

Authors: Harald Haas, Andrea Plaschke

Abstract:

Many of the major contemporary conflicts on this earth have not been solved so far; they are either perpetuated or reignited again and again. Efforts at purely political conflict management or resolution address merely the symptoms of conflict, not its roots. These roots are, in almost every case, also psychological. This contribution therefore aims to shed light on the roots of one of the best-known and longest-lasting conflicts: the Palestinian-Israeli one. The methodologies used were the compilation of existing scientific resources, field research in Palestine and Israel, and tests conducted with the Adult Attachment Projective in Palestine and Israel. Findings show that the majority of Palestinian as well as Israeli test participants exhibit a disorganized attachment pattern which, in connection with the assumption of collective traumatization, seems to be a major obstacle to a lasting and peaceful conflict resolution between these two peoples. There appears to be no short-term solution for this conflict, especially not within the range of usual Western legislative periods. Both sides ought to be provided with a kind of 'safe haven' over a long period of time, accompanied by a framework of arrangements for coping with trauma, building lasting and secure relationships, and raising and educating present and future generations of Palestinians and Israelis for peace and cooperation with each other.

Keywords: conflict-management, trauma, political psychology, attachment theory

Procedia PDF Downloads 204
24865 Diversity and Use of Agroforestry Yards of Family Farmers of Ponte Alta – Gama, Federal District, Brazil

Authors: Kever Bruno Paradelo Gomes, Rosana Carvalho Martins

Abstract:

Home gardens are production systems located near the home and quite common in the tropics. They consist of agricultural and forest species and may also involve the raising of small animals to produce food for subsistence as well as income generation, with a special focus on the conservation of biodiversity. Home gardens are diverse agroforestry systems with multiple uses, among them food security, income support, and traditional medicine. The work was carried out on rural properties of the family farmers of the Ponte Alta Rural Nucleus, Gama Administrative Region, in the city of Brasília, Federal District, Brazil. The present research is methodologically quantitative, exploratory, and descriptive. The instruments used were a bibliographic survey and a semi-structured questionnaire. Data collection was performed by applying the semi-structured questionnaire, containing questions on the perception and behavior of the interviewed producers regarding the subject under analysis. In each question, the respondent explained his or her knowledge of sustainability, agroecological practices, environmental legislation, conservation methods, forest and medicinal species, agro-social and socioeconomic characteristics, use and purpose of agroforestry, and technical assistance. The sample represented 55.62% of the universe of the study: 99 people aged 18-83 years, with a mean age of 49 years, were interviewed. The low level of education, coupled with the lack of training and guidance for small family farmers in the Ponte Alta Rural Nucleus, is one of the limitations on the development of practices oriented toward sustainable and agroecological agriculture in the nucleus. It was observed that 50.5% of the interviewees established their agroforestry yards less than 20 years ago, and only 16.17% of the yards are older than 35 years.
While agriculture was identified as the main activity of most of the rural properties studied, attention is drawn to the cultivation of medicinal plants, fruits, and crops as the most extracted products. However, the crops in the backyards serve exclusively for family consumption; this could be complemented by marketing the surplus and by adding value to the cultivated products. Such initiatives may contribute to increasing family income and to motivating and valuing cultivation in agroecological gardens. We conclude that the home gardens of Ponte Alta are highly diverse, thus contributing to local biodiversity conservation; they are largely managed by women, help ensure food security, and allow income generation. The tradition of existing knowledge on the use and management of the diversity of resources in agroforestry yards is of paramount importance for the development of sustainable alternative practices.

Keywords: agriculture, agroforestry system, rural development, sustainability

Procedia PDF Downloads 141
24864 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. 
The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 75
24863 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. Applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions; in these cases, a more advanced representation of the data structure is essential. In this paper we present the design of a compact version of the ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem, comparing them with the results for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree and, in certain configurations, shows performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
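
The core idea behind succinct tree encodings can be sketched as follows: store the tree's shape as a bit sequence instead of per-node pointers, and recover child positions by counting 1-bits (rank) over that sequence. This sketch is a generic illustration of the technique, not the paper's actual design; real succinct structures replace the linear-time rank below with an o(n)-space rank directory.

```python
# Each ternary node emits three bits marking which of its (low, equal, high)
# children exist; nodes are laid out in breadth-first order.

from collections import deque

def encode_shape(root):
    """root: nested dict {'children': [left_or_None, mid_or_None, right_or_None]}."""
    bits, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        for child in node["children"]:
            bits.append(1 if child is not None else 0)
            if child is not None:
                queue.append(child)
    return bits

def child_index(bits, node_idx, which):
    """BFS index of node `node_idx`'s child `which` (0..2), or None if absent.
    Node i's child bits sit at positions 3*i .. 3*i+2; the child's BFS index
    equals the number of 1-bits up to and including its position (rank)."""
    pos = 3 * node_idx + which
    if not bits[pos]:
        return None
    return sum(bits[:pos + 1])   # naive rank; real designs precompute this

leaf = {"children": [None, None, None]}
root = {"children": [leaf, None, {"children": [None, None, None]}]}
bits = encode_shape(root)
assert bits == [1, 0, 1, 0, 0, 0, 0, 0, 0]
assert child_index(bits, 0, 0) == 1 and child_index(bits, 0, 2) == 2
```

Three bits of shape per node, plus the key and value arrays, is how such encodings reach an order-of-magnitude memory saving over pointer-based ternary nodes.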

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 163
24862 Morphology, Chromosome Numbers and Molecular Evidences of Three New Species of Begonia Section Baryandra (Begoniaceae) from Panay Island, Philippines

Authors: Rosario Rivera Rubite, Ching-I Peng, Che-Wei Lin, Mark Hughes, Yoshiko Kono, Kuo-Fang Chung

Abstract:

The flora of Panay Island is under-collected compared with the other islands of the Philippines. In a joint expedition to the island, botanists from Taiwan and the Philippines found three unknown Begonia and compared them with potentially allied species. The three species are clearly assignable to Begonia section Baryandra which is largely endemic to the Philippines. Studies of literature, herbarium specimens, and living plants support the recognition of the three new species: Begonia culasiensis, Begonia merrilliana, and Begonia sykakiengii. Somatic chromosomes at metaphase were determined to be 2n=30 for B. culasiensis and 2n=28 for both B. merrilliana and B. sykakiengii, which are congruent with those of most species in sect. Baryandra. Molecular phylogenetic evidence is consistent with B. culasiensis being a relict from the late Miocene, and with B. merrilliana and B. sykakiengii being younger species of Pleistocene origin. The continuing discovery of endemic Philippine species means the remaining fragments of both primary and secondary native vegetation in the archipelago are of increasing value in terms of natural capital. A secure future for the species could be realized through ex-situ conservation collections and raising awareness with community groups.

Keywords: conservation, endemic, herbarium, limestone, phylogenetics, taxonomy

Procedia PDF Downloads 219
24861 The Influence of Advertising in the Respect of the Right to Adequate Food: Some Notes regarding the Portuguese Legal Framework

Authors: Susana Almeida

Abstract:

The right to adequate food is a human right protected under several international human rights treaties of universal or regional application. In addition, this social right is, as we intend to demonstrate, guaranteed under the Portuguese Constitution. Therefore, in order to ensure the protection of this right, the Portuguese State must not only abstain from interfering with it (negative obligation) but also take action to secure it (positive obligation). In this context, the Portuguese State has developed several governmental policies, such as taxing sugary drinks, setting the maximum amount of salt in bread, and creating the National Program for the Promotion of Healthy Food. Nevertheless, we intend to demonstrate that special attention should be given to advertising, as advertisements strongly influence consumers' decisions, and hence food decisions. In this paper, besides explaining the cross-cutting construction of the human right to adequate food, we examine the Portuguese Advertising Code and study the several provisions that a Portuguese consumer could invoke to challenge advertisements for violating the right to health and the right to adequate food. Moreover, bearing in mind the influence of advertising on food decisions and the serious problems that unhealthy food may bring (e.g., child obesity), one should ask whether this legal framework should be reviewed to lay out restrictions on advertising, such as mandatory warnings like those required in alcohol advertisements.

Keywords: advertising code, consumer law, right to adequate food, social human right

Procedia PDF Downloads 170
24860 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element of any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal trade-offs regarding resource allocation and area distribution. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether we can establish mutual relationships between these contradicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory, territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a ‘cluster’, and for our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimizes the Red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety, and Net Present Value (NPV) of the forest. We studied the variation of individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing the interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS, deriving a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
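
The Pareto-optimal trade-off at the heart of such models can be illustrated with a small filter: among candidate plans scored on two conflicting objectives (say, RCW clusters versus forest NPV), keep only the plans not dominated on both axes. The candidate values are invented for illustration; the paper's model works with a full linear program, not this enumeration.

```python
# Keep the non-dominated (Pareto-optimal) plans among (clusters, npv) candidates.

def pareto_front(candidates):
    """candidates: list of (clusters, npv) tuples, both to be maximized."""
    front = []
    for c in candidates:
        dominated = any(
            o[0] >= c[0] and o[1] >= c[1] and o != c   # o is at least as good everywhere
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

plans = [(3, 900), (5, 700), (4, 650), (6, 400)]   # (RCW clusters, NPV in $k)
assert pareto_front(plans) == [(3, 900), (5, 700), (6, 400)]   # (4, 650) is dominated
```

Every plan on this front is a defensible choice; moving along it trades clusters for NPV, which is exactly the interdependency the MODM model quantifies.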

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 338
24859 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously in the last two decades. Planning for the deployment of resources has been shallow and has typically resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted with little thought is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories, and then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
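As a concrete illustration of why quantile regression suits this setting (a sketch, not the authors' implementation), the pinball loss below penalises under-prediction of usage more heavily than over-prediction at a high quantile such as τ = 0.9, so a model minimising it learns an upper bound on demand that protects the SLA while allocating far less than a blanket over-provisioning multiple:

```python
def pinball_loss(y_true, y_pred, tau):
    """Pinball (quantile) loss: under-prediction costs tau per unit,
    over-prediction costs (1 - tau), so minimising it over a dataset
    yields the tau-quantile of the target distribution."""
    diff = y_true - y_pred
    return max(tau * diff, (tau - 1.0) * diff)

def provision(pred_quantile, headroom=1.0):
    """Allocate the predicted upper-quantile usage plus a small fixed
    headroom, instead of a multiple of the expected usage."""
    return pred_quantile + headroom
```

At τ = 0.9, under-predicting true usage 10 with a forecast of 8 costs 1.8, while over-predicting by the same amount costs only 0.2, which is exactly the asymmetry an SLA-bound operator wants.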

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 109
24858 Securing Online Voting With Blockchain and Smart Contracts

Authors: Anant Mehrotra, Krish Phagwani

Abstract:

Democratic voting is vital for any country, but current methods such as paper ballots or EVMs have drawbacks, including transparency issues, low voter turnout, and security concerns. Blockchain technology offers a potential solution by providing a secure, decentralized, and transparent platform for e-voting. With features like immutability, security, and anonymity, blockchain combined with smart contracts can enhance trust and prevent vote tampering. This paper explores an Ethereum-based e-voting application written in Solidity, showcasing a web app that prevents duplicate voting through a token-based system, and discusses the advantages and limitations of blockchain in digital voting. It also reviews blockchain-based voting systems, highlighting strategies and guidelines for building a comprehensive electronic voting system that leverages cryptographic techniques, such as zero-knowledge proofs, to enhance privacy. It addresses limitations of existing e-voting solutions, including cost, identity management, and scalability, and provides key insights for organizations looking to design their own blockchain-based voting systems.
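The token-based duplicate-voting guard can be sketched as follows; this is a Python mirror of the contract logic, not the paper's Solidity code, and the class and method names are illustrative:

```python
class VotingContract:
    """Sketch of token-based duplicate-vote prevention: each registered
    voter holds one voting token, which is burned when the vote is cast,
    so a second vote from the same address is rejected."""

    def __init__(self, candidates, voters):
        self.tally = {c: 0 for c in candidates}
        self.tokens = set(voters)   # one voting token per registered address

    def vote(self, voter, candidate):
        if voter not in self.tokens:
            raise PermissionError("no token: unregistered or already voted")
        self.tokens.discard(voter)  # burn the token -> no double voting
        self.tally[candidate] += 1
```

In the on-chain version the same check is enforced by the contract itself, so no central operator can re-issue a spent token.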

Keywords: electronic voting, smart contracts, blockchain based voting, security

Procedia PDF Downloads 14
24857 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the Internet. These data are distributed so widely that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this work, we concentrate on embedding watermarks in images using a combined DCT-DWT approach. The main consideration for any watermarking scheme is its robustness to various attacks.
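A minimal sketch of transform-domain embedding, using only the DCT half of the DCT-DWT scheme and a hypothetical sign-based rule on one mid-frequency coefficient (the paper's actual embedding rule is not given in the abstract):

```python
import math

def dct(x):
    """Unnormalized 1-D DCT-II (O(N^2) reference implementation)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Exact inverse of dct() above (scaled 1-D DCT-III)."""
    N = len(X)
    return [X[0] / N + 2.0 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

def embed_bit(block, bit, k=3, strength=10.0):
    """Hide one bit in the sign of mid-frequency coefficient k."""
    coeffs = dct(block)
    coeffs[k] = strength if bit else -strength
    return idct(coeffs)

def extract_bit(block, k=3):
    """Recover the hidden bit from the coefficient's sign."""
    return 1 if dct(block)[k] > 0 else 0
```

A real scheme would apply this per 8x8 image block, spread the mark over several coefficients, and add the DWT stage for robustness; the sketch only shows the round trip.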

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 423
24856 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort into building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture, built with AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists’ work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
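The batch pipeline stages named above (sourcing, feature engineering, training, vending into a data store) can be sketched as composable functions; all names and the trivial threshold "model" are illustrative, not the AWS implementation described:

```python
def source(store):
    """Data sourcing: pull raw rows from the upstream store."""
    return store["raw"]

def featurize(rows):
    """Feature engineering: scale raw values into [0, 1] features."""
    return [{"x": r["value"] / 100.0} for r in rows]

def train(feats):
    """Model training: fit a trivial mean-threshold classifier."""
    thr = sum(f["x"] for f in feats) / len(feats)
    return lambda f: 1 if f["x"] > thr else 0

def vend(preds, store):
    """Output vending: write predictions back for downstream applications."""
    store["predictions"] = preds
    return store

def run_pipeline(store):
    """One batch run: source -> featurize -> train -> predict -> vend."""
    feats = featurize(source(store))
    model = train(feats)
    return vend([model(f) for f in feats], store)
```

The point of the architecture is that only `train` (and perhaps `featurize`) is the scientist's concern; the rest is owned by data engineers and deployed as code.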

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 65
24855 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, though present for several decades, continues to be an active area of research. The goal is to find correspondences between elements in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well each data representation reduces the cost of the correct correspondence relative to other possible matches.
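A minimal sketch of a local matching cost and winner-take-all disparity search, assuming grayscale scanlines and the common SAD/SSD costs (two of the cost functions such a comparison would cover):

```python
def sad(left_patch, right_patch):
    """Sum of absolute differences between two patches."""
    return sum(abs(a - b) for a, b in zip(left_patch, right_patch))

def ssd(left_patch, right_patch):
    """Sum of squared differences between two patches."""
    return sum((a - b) ** 2 for a, b in zip(left_patch, right_patch))

def best_disparity(left_row, right_row, x, window, max_disp, cost=sad):
    """Winner-take-all: compare the patch at x in the left scanline against
    patches shifted by each candidate disparity d in the right scanline,
    and return the d with the lowest cost."""
    ref = left_row[x:x + window]
    costs = {d: cost(ref, right_row[x - d:x - d + window])
             for d in range(0, min(max_disp, x) + 1)}
    return min(costs, key=costs.get)
```

Swapping the pixel values in `left_row`/`right_row` for a different representation (e.g. individual colour channels) while keeping the search fixed is exactly the kind of controlled comparison the paper describes.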

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 371
24854 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and broadly varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their businesses. Consequently, today's decentralized data-management environments rely on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge: the aim is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business-intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
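The agent-based division of labour can be sketched as local mining followed by coordinator-side aggregation; frequent-item counting stands in here for whichever mining task the agents actually run:

```python
def local_mine(partition):
    """Each agent mines its local dataset partition (here: item counts),
    so raw data never leaves the local site."""
    counts = {}
    for item in partition:
        counts[item] = counts.get(item, 0) + 1
    return counts

def aggregate(agent_results):
    """A coordinator agent merges the agents' local models into a
    global view of the decentralized datasets."""
    merged = {}
    for counts in agent_results:
        for item, c in counts.items():
            merged[item] = merged.get(item, 0) + c
    return merged
```

Only the compact local models cross the network, which is the usual argument for distributed data mining over shipping the multimedia data itself.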

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 432
24853 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore’s law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data-mining techniques combined with HTML tables to extract and represent critical timing/noise data. When applying this data-mining tool in real applications, running speed is important. The software employs table look-up techniques to achieve reasonable running speed, as confirmed by performance-testing results. We added several advanced features for its application to one industry chip design.
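A sketch of the HTML-table extraction step using Python's standard-library parser; the two-column layout, cell order and slack threshold are assumptions for illustration, not the tool's actual report format:

```python
from html.parser import HTMLParser

class TimingTableParser(HTMLParser):
    """Collects rows of <td> cell text from an HTML timing report."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

def critical_paths(html, slack_col=1, threshold=0.0):
    """Mine the report for rows whose slack is below the threshold."""
    p = TimingTableParser()
    p.feed(html)
    return [r for r in p.rows if float(r[slack_col]) < threshold]
```

Filtering rows as they are parsed, rather than loading whole reports, is one way such a tool keeps its running speed reasonable on large report files.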

Keywords: VLSI design, data mining, big data, HTML tables, EDA, timing, noise

Procedia PDF Downloads 254
24852 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use'; it is a multi-dimensional concept. Emergency Departments (EDs) require information to treat patients and, at the same time, are the primary source of information regarding accidents, injuries, emergencies, etc. They are also the starting point of various patient registries, databases and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an e-health solution. The study consisted of three components. First, a research study was conducted to assess the quality of data in relation to five selected dimensions of data quality, namely accuracy, completeness, timeliness, legibility and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). Post-intervention assessment confirmed that all five dimensions of data quality had improved. The most significant improvements were noticed in the accuracy and timeliness dimensions.
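The completeness dimension, for example, can be operationalised as the share of ED records with all required fields present; the field names below are illustrative, not the eEDIS schema:

```python
REQUIRED_FIELDS = ["patient_id", "arrival_time", "triage_category"]

def completeness(records, required=REQUIRED_FIELDS):
    """Fraction of records (0..1) in which every required field
    is present and non-empty."""
    filled = sum(all(r.get(f) not in (None, "") for f in required)
                 for r in records)
    return filled / len(records)
```

An electronic system raises this score structurally, e.g. by making the required fields mandatory at data entry, which paper-based ED records cannot enforce.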

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 276
24851 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which provides microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process these data. We analyse the involvement and relationships of the variables describing the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and the manoeuvre it undertook (overtaking or falling back).
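The per-neighbour variables listed above can be sketched as follows (in plain Python rather than the R used in the study; the field names are illustrative, not the HighD column names):

```python
def surrounding_features(ego, other):
    """Relative variables characterising a lane-change event between the
    ego vehicle and one surrounding vehicle: longitudinal distance (m),
    speed difference (m/s), time gap (s) and acceleration difference."""
    distance = other["x"] - ego["x"]
    return {
        "distance": distance,
        "speed_diff": other["v"] - ego["v"],
        "time_gap": distance / ego["v"],
        "accel_diff": other["a"] - ego["a"],
    }
```

Computing this tuple for each of the four neighbours (lead/rear in the current and target lane) yields the feature set whose involvement in lane changes the paper analyses.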

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 262
24850 Application of Modal Analysis for Commissioning of a Ball Screw System

Authors: T. D. Tran, H. Schlegel, R. Neugebauer

Abstract:

Ball screws are an important component in machine tools. In mechatronic systems and machine tools, a ball screw usually has to work at high speed. The axial compliance of the ball screw, in combination with the inertia of the slide, the motor, the coupling and the screw, can then cause a resonant oscillation, which limits the system's bandwidth and consequently influences the performance of the motion controller. In this paper, the modal analysis method is used, measuring and analysing the vibration parameters of the ball screw system to determine the dynamic characteristics of the existing structure. On the one hand, results were obtained by theoretical analysis and by modal testing of a ball screw test station, using excitation by an impact hammer and by the motor, respectively. The experimental study revealed the oscillation mode shapes of the ball screw at each frequency and yielded the eigenfrequencies of the ball screw system. On the other hand, a numerical modal analysis simulation is used to analyse the oscillation and to find the eigenfrequencies of the ball screw system. Furthermore, model order reduction is carried out, both by modal reduction and according to Guyan. On the basis of these results, a secure and rapid commissioning of the control loops, with regard to operating at their optimal performance, is targeted.
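For intuition, the eigenfrequencies such a modal analysis recovers can be computed analytically for a two-mass stand-in for the motor/screw and slide chain; this is a toy model with illustrative parameters, not the test station's actual data:

```python
import math

def natural_frequencies(m1, m2, k1, k2):
    """Undamped eigenfrequencies (Hz) of a 2-DOF mass-spring chain
    ground --k1-- m1 --k2-- m2, from det(K - w^2 M) = 0, i.e. the
    quadratic  m1*m2*L^2 - ((k1+k2)*m2 + k2*m1)*L + k1*k2 = 0
    in L = w^2."""
    a = m1 * m2
    b = -((k1 + k2) * m2 + k2 * m1)
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    lams = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(lam) / (2 * math.pi) for lam in lams]
```

The lower of the two frequencies is the resonance that bounds the usable bandwidth of the position control loop; modal testing locates it experimentally, the numerical modal analysis predicts it from the model.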

Keywords: modal analysis, ball screw, controller system, machine tools

Procedia PDF Downloads 460
24849 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian golden beam data to measured data for 6 and 18 MV photons, for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data but does not recommend it for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian’s algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ionization chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate whether the golden beam data are accurate enough to be used for commissioning the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was within 2%. In PDDs, the deviation increases at deeper depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
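The measured-vs-golden comparison reduces to a pointwise percent deviation checked against a tolerance (2% in the results above); a minimal sketch:

```python
def percent_deviation(measured, golden):
    """Pointwise percent deviation of measured values relative to the
    golden beam data at the same depths/off-axis positions."""
    return [100.0 * (m - g) / g for m, g in zip(measured, golden)]

def within_tolerance(measured, golden, tol=2.0):
    """True if every point deviates by at most tol percent."""
    return all(abs(d) <= tol for d in percent_deviation(measured, golden))
```

Applying this per depth for PDDs and per off-axis point for profiles reproduces the paper's observation that deviations grow with depth and field size while staying inside the 2% tolerance.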

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 490
24848 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with a different computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
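A common recursive CoKriging coupling (the autoregressive scheme going back to Kennedy and O'Hagan, shown here only as an illustrative sketch of the kind of recursion the abstract describes) models each fidelity level t as:

```latex
Z_t(x) \;=\; \rho_{t-1}\, Z_{t-1}(x) \;+\; \delta_t(x), \qquad t = 2, \dots, T,
```

where Z_{t-1} is the (gradient-enhanced) Kriging model of the next-lower fidelity, ρ_{t-1} a scaling factor, and δ_t(x) an independent Gaussian process capturing the discrepancy between levels; Z_T then serves as the surrogate for the highest-fidelity data.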

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 558