Search results for: data reliability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25628

24398 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease

Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette

Abstract:

Metabolic changes are one of the major factors in the development of a variety of diseases across species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance; at the same time, pathogens exploit host metabolites for infection and disease progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. It can also provide information about resistance and contribute knowledge of compounds significant for disease progression or prevention. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used for the investigation of disease development in wheat and in human cells, will be presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation and low cost. This presentation will describe a new method for metabolite quantification from NMR data that first aligns the spectra of standards to the sample spectra and then optimizes, by multivariate linear regression, the fit of the assigned metabolite spectra to the samples’ spectra. Several different alignment methods were tested, and the multivariate linear regression result was compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we will present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, and biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of fusarium head blight resistance in wheat cultivars, as well as for the analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of the wheat cultivars FL62R1, Stettler, MuchMore and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating the resistance level in wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium-infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used for the investigation of the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the results are annotated in the context of biochemical pathways.
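As an illustration of the align-then-regress idea, here is a minimal Python sketch assuming integer-shift alignment by correlation and non-negative least squares as the constrained multivariate regression; the Gaussian peak shapes and shift search are stand-ins, not the authors' actual alignment methods:

```python
import numpy as np
from scipy.optimize import nnls

def align(reference, sample, max_shift=50):
    """Find the integer shift that maximises correlation with the sample spectrum."""
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        corr = float(np.dot(np.roll(reference, s), sample))
        if corr > best_corr:
            best_shift, best_corr = s, corr
    return np.roll(reference, best_shift)

def quantify(references, sample):
    """Non-negative least-squares fit of aligned reference spectra to the sample."""
    X = np.column_stack([align(r, sample) for r in references])
    concentrations, residual = nnls(X, sample)
    return concentrations, residual

# Toy example: two Gaussian 'metabolite' peaks mixed into one 'sample' spectrum.
ppm = np.linspace(0, 10, 2000)
peak = lambda c: np.exp(-0.5 * ((ppm - c) / 0.05) ** 2)
refs = [peak(2.0), peak(4.5)]
sample = 3.0 * np.roll(refs[0], 7) + 1.5 * refs[1] + 0.01 * np.random.rand(ppm.size)
print(quantify(refs, sample)[0])  # approximately [3.0, 1.5]
```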

Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment

Procedia PDF Downloads 330
24397 An Investigation of Physics Teachers’ Views towards the Context-Based Learning Approach

Authors: Medine Baran, Abdulkadir Maskan, Mehmet Ikbal Yetişir, Mukadder Baran, Azmi Türkan, Şeyma Yaşar

Abstract:

The purpose of this study was to determine the views of physics teachers from several secondary schools in Turkey towards the context-based learning approach. The context-based learning opinion questionnaire developed by the researchers as the data collection tool was piloted with 250 physics teachers. The questionnaire, examined by the researchers and field experts, was initially made up of 53 items; following the evaluation process, it was reduced to 37 items. In this way, the reliability and validity process of the measurement tool was completed. The finalized questionnaire was then applied to 144 physics teachers (F: 73, M: 71) from several secondary schools in different cities in Turkey; participants were recruited by convenience sampling. The results revealed no remarkable difference between the views of the physics teachers with respect to their gender, region and school. However, when the individual items in the questionnaire were considered, interesting patterns of agreement emerged: for 16 items, there were large differences between the frequencies of those who agreed and those who disagreed. Therefore, as the next phase of the present study, further research has been planned using the same questions. Based on these questions, which received opposing responses, physics teachers will be asked for their views about the results of the study using the interview technique, one of the qualitative research techniques. In this way, the results will be evaluated both by the researchers and by the participants, the problems and difficulties will be determined, and related suggestions can be put forward.

Keywords: context-based learning, physics teachers, views

Procedia PDF Downloads 363
24396 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the cost of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between each input and its reconstruction is then used to classify the samples, using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
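A minimal sketch of this reconstruction pipeline, with illustrative window lengths, layer sizes, and residual features (the paper's exact configuration is not reproduced here) and placeholder arrays standing in for the sensor data:

```python
import numpy as np
from tensorflow import keras
from sklearn.ensemble import RandomForestClassifier

WINDOW, N_SENSORS = 64, 3          # assumed window length and sensor count

def build_autoencoder(window):
    inputs = keras.Input(shape=(window, 1))
    z = keras.layers.LSTM(32)(inputs)                       # encoder
    z = keras.layers.RepeatVector(window)(z)
    out = keras.layers.LSTM(32, return_sequences=True)(z)   # decoder
    out = keras.layers.TimeDistributed(keras.layers.Dense(1))(out)
    model = keras.Model(inputs, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# One autoencoder per sensor, each trained on that sensor's normal windows only.
autoencoders = [build_autoencoder(WINDOW) for _ in range(N_SENSORS)]
normal = [np.random.rand(512, WINDOW, 1) for _ in range(N_SENSORS)]  # placeholder data
for ae, x in zip(autoencoders, normal):
    ae.fit(x, x, epochs=2, verbose=0)

def residual_features(windows):
    """Reconstruction-error statistics per sensor, concatenated as features."""
    feats = []
    for ae, x in zip(autoencoders, windows):
        err = np.abs(x - ae.predict(x, verbose=0))
        feats.append(np.stack([err.mean(axis=(1, 2)), err.max(axis=(1, 2))], axis=1))
    return np.concatenate(feats, axis=1)

# A random forest classifies windows as normal/anomalous from the residuals.
X = residual_features([np.random.rand(100, WINDOW, 1) for _ in range(N_SENSORS)])
y = np.random.randint(0, 2, 100)   # placeholder labels from the incident history
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```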

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 182
24395 Emigration Improves the Life Standard of Families Left Behind: Evidence from a Rural Area of Gujrat, Pakistan

Authors: Shoaib Rasool

Abstract:

Migration from the rural areas of Gujrat is increasing day by day, including among illiterate people, who regard it as a source of attraction and are drawn by the charm of the destination. It affects the life standard of the families left behind both positively and negatively, in terms of poverty, socio-economic status and living standards. It also affects material possessions as well as social indicators of living: housing conditions, children’s schooling, health-seeking behavior and, to some extent, the family environment. The present study analyzes the socio-economic conditions and life standard of emigrant families left behind in the rural areas of Gujrat district, Pakistan. A survey design was used with 150 families selected from rural areas of Gujrat district through a purposive sampling technique. A well-structured questionnaire was administered by the researcher for data collection. The measurement tool was pretested on 20 families to check its workability and reliability before the actual data collection, and statistical tests were applied to draw results and conclusions. The preliminary findings of the study show that emigration has had deep socio-economic impacts on the life standards of the rural families left behind in Gujrat: they have improved their status and living standard through remittances. Emigration is one of the major sources of household economic development, and it also alleviates poverty at the household level as well as at the community and country levels. The rationale behind migration varies individually and geographically; attractions commonly cited in Pakistan include securing higher status, improving health conditions, marrying to acquire nationality, and obtaining educational visas, while some resort to unfair means. Emigrants not only send remittances but also return to their country of origin with newly acquired skills and valuable knowledge, because they learn new ways of living and working. There are also women migrants who experience downward social mobility by engaging in jobs that are beneath their educational qualifications.

Keywords: emigration, life standard, families, left behind, rural area, Gujrat

Procedia PDF Downloads 435
24394 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is therefore increasing. However, each government has its own procedures for publishing its data, which leads to a variety of data set formats, because there are no international standards specifying formats for open data sets. Due to this variety, we must build a data integration process able to handle all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data bases of environmental indicators in real time. Other governments have likewise published open data sets on the environment (such as Andalucia or Bilbao), but all of those data sets have different formats. Our solution is able to integrate all of them and, furthermore, allows the user to run and visualize analyses of the real-time data. Once the integration task is done, all the data from any government have the same format, and the analysis process can proceed in a computationally better way. The tool presented in this work therefore has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). In order to open our software tool, as a second approach, we also developed an implementation in the R language, a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are R libraries for building graphic interfaces, such as shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an official real-time integrated data set of environmental data in Spain to any developer, so that they can build their own applications.
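The per-source adapter idea can be sketched as follows; the column names, layouts, and source names are hypothetical stand-ins for the actual portal formats, and the real tool builds Hadoop procedures rather than in-memory pandas frames:

```python
import pandas as pd

COMMON_SCHEMA = ["timestamp", "station", "indicator", "value", "source"]

def adapt_madrid(raw: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical Madrid layout: Spanish column names, day-first dates.
    out = pd.DataFrame({
        "timestamp": pd.to_datetime(raw["fecha"], dayfirst=True),
        "station": raw["estacion"],
        "indicator": raw["magnitud"],
        "value": pd.to_numeric(raw["valor"], errors="coerce"),
    })
    out["source"] = "madrid"
    return out[COMMON_SCHEMA]

def adapt_bilbao(raw: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical Bilbao layout: wide table with one column per indicator.
    out = raw.melt(id_vars=["date", "site"], var_name="indicator", value_name="value")
    out = out.rename(columns={"date": "timestamp", "site": "station"})
    out["timestamp"] = pd.to_datetime(out["timestamp"])
    out["source"] = "bilbao"
    return out[COMMON_SCHEMA]

ADAPTERS = {"madrid": adapt_madrid, "bilbao": adapt_bilbao}

def integrate(frames: dict[str, pd.DataFrame]) -> pd.DataFrame:
    """Apply the matching adapter to each source and concatenate the results."""
    return pd.concat([ADAPTERS[name](df) for name, df in frames.items()],
                     ignore_index=True)
```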

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 303
24393 Transforming Data into Knowledge: Mathematical and Statistical Innovations in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in various domains has created a pressing need for effective methods to transform this data into meaningful knowledge. In this era of big data, mathematical and statistical innovations play a crucial role in unlocking insights and facilitating informed decision-making in data analytics. This abstract aims to explore the transformative potential of these innovations and their impact on converting raw data into actionable knowledge. Drawing upon a comprehensive review of existing literature, this research investigates the cutting-edge mathematical and statistical techniques that enable the conversion of data into knowledge. By evaluating their underlying principles, strengths, and limitations, we aim to identify the most promising innovations in data analytics. To demonstrate the practical applications of these innovations, real-world datasets will be utilized through case studies or simulations. This empirical approach will showcase how mathematical and statistical innovations can extract patterns, trends, and insights from complex data, enabling evidence-based decision-making across diverse domains. Furthermore, a comparative analysis will be conducted to assess the performance, scalability, interpretability, and adaptability of different innovations. By benchmarking against established techniques, we aim to validate the effectiveness and superiority of the proposed mathematical and statistical innovations in data analytics. Ethical considerations surrounding data analytics, such as privacy, security, bias, and fairness, will be addressed throughout the research. Guidelines and best practices will be developed to ensure the responsible and ethical use of mathematical and statistical innovations in data analytics. The expected contributions of this research include advancements in mathematical and statistical sciences, improved data analysis techniques, enhanced decision-making processes, and practical implications for industries and policymakers. The outcomes will guide the adoption and implementation of mathematical and statistical innovations, empowering stakeholders to transform data into actionable knowledge and drive meaningful outcomes.

Keywords: data analytics, mathematical innovations, knowledge extraction, decision-making

Procedia PDF Downloads 63
24392 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce data size in order to improve the performance of data-mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets using the MapReduce framework. Besides preserving prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
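For reference, a minimal single-node sketch of the FCNN condensation rule that FCNN-MR parallelises; the MapReduce split/aggregate logic is omitted, and the sketch follows the published FCNN outline only loosely:

```python
import numpy as np

def fcnn(X, y):
    """Return indices of a condensed prototype set chosen by the FCNN rule."""
    prototypes = []
    for c in np.unique(y):                 # seed: point nearest each class centroid
        members = np.where(y == c)[0]
        centroid = X[members].mean(axis=0)
        prototypes.append(int(members[np.argmin(
            np.linalg.norm(X[members] - centroid, axis=1))]))
    while True:
        P = X[prototypes]
        nearest = np.argmin(np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2), axis=1)
        additions = set()
        for i, p in enumerate(prototypes):
            # Misclassified points whose nearest prototype is p (its Voronoi cell).
            cell = np.where((nearest == i) & (y != y[p]))[0]
            if cell.size:                  # add the one closest to the prototype
                additions.add(int(cell[np.argmin(np.linalg.norm(X[cell] - X[p], axis=1))]))
        additions -= set(prototypes)
        if not additions:
            return np.array(prototypes)
        prototypes.extend(sorted(additions))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.repeat([0, 1], 200)
S = fcnn(X, y)
print(f"kept {S.size} of {len(X)} instances")  # boundary points dominate the subset
```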

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 244
24391 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial industry, with global non-cash transactions estimated to have reached 708.5 billion. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders so that they can openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources for it; smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is built on a decentralised blockchain contract, with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform’s key features is enabling individuals to participate in, and manage, the personal data they generate. The framework was demonstrated in a proof-of-concept on the Ethereum blockchain in which an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 64
24390 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. The applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations can consume prohibitively large volumes of random-access memory, limiting their applicability in certain solutions; in these cases, a more advanced representation of the data structure is essential. In this paper, we present the design of a compact version of the ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem. We compare these results with results for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations shows performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32-bit and 64-bit operating systems.
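To illustrate the general idea of succinct tree encodings (the paper's own compact ternary design differs in its details), here is a minimal LOUDS-style sketch that stores tree structure in roughly two bits per node, with naive rank/select scans kept for brevity:

```python
from collections import deque

def louds_encode(root):
    """Breadth-first LOUDS encoding: one '1' per child, then a '0', per node."""
    bits, labels = [1, 0], []            # a virtual super-root points at the root
    queue = deque([root])
    while queue:
        node = queue.popleft()
        labels.append(node["label"])
        bits.extend([1] * len(node["children"]) + [0])
        queue.extend(node["children"])
    return bits, labels

def select0(bits, i):
    """Position of the i-th 0 bit (1-based); a naive linear scan for brevity."""
    seen = 0
    for pos, b in enumerate(bits):
        seen += (b == 0)
        if seen == i:
            return pos
    raise IndexError(i)

def children(bits, rank):
    """BFS ranks (1-based) of the children of the node with BFS rank `rank`."""
    pos = select0(bits, rank) + 1
    kids = []
    while bits[pos] == 1:
        kids.append(sum(bits[:pos + 1]))  # rank1(pos) is the child's BFS rank
        pos += 1
    return kids

tree = {"label": "r", "children": [
    {"label": "a", "children": [{"label": "c", "children": []}]},
    {"label": "b", "children": []}]}
bits, labels = louds_encode(tree)
print(bits)                                         # [1, 0, 1, 1, 0, 1, 0, 0, 0]
print([labels[k - 1] for k in children(bits, 1)])   # ['a', 'b']
```

A production implementation would replace the linear scans with indexed bit vectors giving constant-time rank and select, which is where the space/time trade-off the abstract measures comes from.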

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 154
24389 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the resources actually needed. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models covering a wide spectrum of machine-learning categories, and we establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
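One way to realise the predict-then-provision idea is quantile regression: provisioning at a high predicted quantile keeps SLA violations rare while shrinking average headroom. A minimal sketch, with a synthetic workload, lag features, and the 0.95 quantile as illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(10_000)
# Placeholder workload: a daily cycle plus noise, standing in for real traces.
load = 50 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

# Lagged-demand features predicting demand one interval ahead.
lags = 6
X = np.column_stack([load[i:-(lags - i)] for i in range(lags)])
y = load[lags:]

model = GradientBoostingRegressor(loss="quantile", alpha=0.95)  # 95th percentile
model.fit(X[:8000], y[:8000])

provisioned = model.predict(X[8000:])
violations = np.mean(y[8000:] > provisioned)    # should be near 5%
headroom = np.mean(provisioned - y[8000:])      # energy saved vs. worst-case sizing
print(f"SLA violations: {violations:.1%}, mean headroom: {headroom:.1f}")
```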

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 100
24388 An Efficient Strategy for Relay Selection in Multi-Hop Communication

Authors: Jung-In Baik, Seung-Jun Yu, Young-Min Ko, Hyoung-Kyu Song

Abstract:

This paper proposes an efficient relaying algorithm that obtains diversity to improve the reliability of a signal. The algorithm achieves time or space diversity gain by transmitting multiple versions of the same signal through two routes. Relays are placed between a source and a destination, and the routes between them are set adaptively in order to deal with different channels and noise. Each route consists of one or more relays, and the source transmits its signal to the destination through the routes. The signals from the relays are combined and detected at the destination. The proposed algorithm provides better bit error rate (BER) performance than the conventional algorithms.

Keywords: multi-hop, OFDM, relay, relaying selection

Procedia PDF Downloads 437
24387 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are so widely distributed that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information known as a watermark into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
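A minimal sketch of a hybrid DCT-DWT embedding of the kind discussed here: the host image is wavelet-decomposed, the LL band is DCT-transformed, and watermark bits additively perturb mid-band coefficients. The band positions, strength alpha, and non-blind extraction are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(host, bits, alpha=8.0):
    """Embed a bit sequence into the DCT of the host image's LL subband."""
    LL, detail = pywt.dwt2(host.astype(float), "haar")
    C = dctn(LL, norm="ortho")
    for (i, j), b in zip([(k, 5) for k in range(4, 4 + len(bits))], bits):
        C[i, j] += alpha * (2 * b - 1)          # additive +/-alpha per bit, mid-band
    return pywt.idwt2((idctn(C, norm="ortho"), detail), "haar")

def extract(marked, original, n_bits):
    """Non-blind extraction: sign of the marked-minus-original coefficient."""
    C_m = dctn(pywt.dwt2(marked.astype(float), "haar")[0], norm="ortho")
    C_o = dctn(pywt.dwt2(original.astype(float), "haar")[0], norm="ortho")
    return [int(C_m[i, 5] - C_o[i, 5] > 0) for i in range(4, 4 + n_bits)]

host = np.random.randint(0, 256, (128, 128))     # placeholder host image
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(host, bits)
print(extract(marked, host, len(bits)) == bits)  # -> True
```

Robustness testing would then re-run extraction after attacks such as JPEG compression, cropping, or noise.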

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 413
24386 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications produce output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and the delivery of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort into building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists’ work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 51
24385 Study and Experimental Analysis of a Photovoltaic Pumping System under Three Operating Modes

Authors: Rekioua D., Mohammedi A., Rekioua T., Mehleb Z.

Abstract:

Photovoltaic water pumping systems are considered one of the most promising areas of photovoltaic application; the economy and reliability of solar electric power make them an excellent choice for remote water pumping. Two conventional techniques are currently in use: the first is the directly coupled technique, and the second is the battery-buffered photovoltaic pumping system. In this paper, we present the performance of a photovoltaic pumping system in three operating modes. The aim of this work is to determine the effect of the different parameters influencing photovoltaic pumping system performance, such as pumping head, system configuration and climatic conditions. The obtained results are presented and discussed.

Keywords: batteries charge mode, photovoltaic pumping system, pumping head, submersible pump

Procedia PDF Downloads 493
24384 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been studied for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advances in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well each data representation reduces the cost of the correct correspondence relative to other possible matches.
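The kind of comparison described can be sketched as one matching cost (sum of absolute differences) evaluated over two representations, grayscale and RGB; the window size, disparity range, and synthetic shift are illustrative assumptions:

```python
import numpy as np
from scipy.signal import convolve2d

def sad_disparity(left, right, max_disp=16, window=5):
    """Winner-takes-all SAD matching; accepts 2-D (gray) or 3-D (RGB) arrays."""
    h, w = left.shape[:2]
    cost = np.full((h, w, max_disp), np.inf)
    kernel = np.ones((window, window))
    for d in range(max_disp):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        if diff.ndim == 3:
            diff = diff.sum(axis=2)           # accumulate the cost over channels
        cost[:, d:, d] = convolve2d(diff, kernel, mode="same")  # window aggregation
    return np.argmin(cost, axis=2)            # per-pixel disparity estimate

left = np.random.randint(0, 256, (64, 96, 3))
right = np.roll(left, -4, axis=1)             # synthetic 4-pixel horizontal shift
to_gray = lambda im: im.mean(axis=2)
d_rgb = sad_disparity(left, right)
d_gray = sad_disparity(to_gray(left), to_gray(right))
print((d_rgb == 4).mean(), (d_gray == 4).mean())  # fraction recovering the true shift
```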

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 360
24383 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of a high volume and broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The recently decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of such techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 420
24382 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates on one chip, following Moore’s law. Designers encounter numerous report files during design iterations using timing- and noise-analysis tools. This paper presents our work using data-mining techniques, combined with HTML tables, to extract and represent critical timing and noise data. When applying this data-mining tool in real applications, running speed is important; the software therefore employs table look-up techniques, which achieved reasonable running speed in our performance tests. We added several advanced features for its application to one industrial chip design.
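A minimal sketch of the mining step, assuming a hypothetical plain-text report layout (real STA tools each have their own): a regular expression extracts path slacks, a dictionary enables fast per-path look-up across iterations, and an HTML table presents the worst paths:

```python
import re
from html import escape

LINE = re.compile(r"^(?P<path>\S+)\s+slack\s+(?P<slack>-?\d+\.\d+)", re.M)

def mine(report_text: str, worst_n: int = 10):
    # Dictionary keyed by path name gives O(1) look-up across design iterations.
    paths = {m["path"]: float(m["slack"]) for m in LINE.finditer(report_text)}
    worst = sorted(paths.items(), key=lambda kv: kv[1])[:worst_n]
    rows = "".join(f"<tr><td>{escape(p)}</td><td>{s:+.3f}</td></tr>"
                   for p, s in worst)
    html = f"<table><tr><th>Path</th><th>Slack (ns)</th></tr>{rows}</table>"
    return paths, html

report = "u1/regA->u9/regB slack -0.125\nu2/regC->u4/regD slack 0.310\n"
paths, table = mine(report)
print(table)
```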

Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise

Procedia PDF Downloads 241
24381 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R for processing the data. We analyse the involvement and relationship of the different variables for the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the manoeuvre undertaken (overtaking or falling back).
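An equivalent of this processing can be sketched in pandas as follows; the column names follow the published HighD track-file format but should be verified against the dataset release used, and the event definition (any laneId transition) is a simplification:

```python
import pandas as pd

tracks = pd.read_csv("01_tracks.csv")      # one HighD recording (assumed filename)

def lane_change_events(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for vid, veh in df.groupby("id"):
        veh = veh.sort_values("frame")
        changes = veh["laneId"].diff().fillna(0) != 0   # lane-ID transitions
        for _, row in veh[changes].iterrows():
            rows.append({
                "vehicle": vid,
                "frame": row["frame"],
                "thw": row["thw"],                        # time gap to the leader
                "dhw": row["dhw"],                        # distance gap to the leader
                "dv_leader": row["xVelocity"] - row["precedingXVelocity"],
                "ax": row["xAcceleration"],
            })
    return pd.DataFrame(rows)

events = lane_change_events(tracks)
print(events.describe())                   # distributions of the event variables
```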

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 247
24380 Stigma Associated with Invisible Disabilities and Its Effect on Intended Disclosure in the Workplace

Authors: Jessica Lynne Hicksted

Abstract:

Disability discrimination is a long-standing issue that, despite protections, continues to result in unemployment, underemployment, and lack of advancement for disabled persons. Visible stigma has been researched substantially; however, less is known about the impact of stigma associated with identities that can be concealed. Although researchers have investigated this issue, there is currently no tool to measure this phenomenon. The purpose of this quantitative study was to create and validate a new tool to measure stigma associated with invisible disabilities. The study is grounded in Roberts’ conceptual model of professional image construction, integrating social identity, impression management, and organizational behavior; Meisenbach’s stigma management communication theory, which addresses vulnerability and resilience to stigma communication by focusing on how individuals encounter and react to perceived stigmas; and Kelley and Michela’s causal attribution theory. Participants included 1,412 adults in the United States, 18 years or older, currently employed or employed within the last 5 years. Confirmatory factor analysis of the new Workplace Invisible Disabilities Experience scale showed excellent fit of the factor structure to the data, χ²/df = 1.855, CFI = .955, RMSEA = .045, p = .0001. The scale has three subscales, Ableism, Advocacy, and Acceptance, with excellent internal consistency reliability. The total score, Advocacy, and Acceptance were associated with intention to disclose. Implications for positive social change include helping organizations understand the extent of invisible disability stigma, which can help improve workplace performance and satisfaction.

Keywords: invisible disabilities, accommodations, acceptance, social change, workplace inclusion

Procedia PDF Downloads 60
24379 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While gradient data is intuitively useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
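A minimal two-level sketch of the recursive CoKriging idea, without the gradient enhancement: a Gaussian process on plentiful cheap data, a scaling factor rho, and a second process on the discrepancy. The kernels, rho estimator, and Forrester-style test functions are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # expensive truth
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5      # cheap approximation

X_lo = np.linspace(0, 1, 11).reshape(-1, 1)              # many cheap samples
X_hi = np.array([[0.0], [0.4], [0.6], [1.0]])            # few expensive samples

gp_lo = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_lo, f_lo(X_lo).ravel())

# Scale factor rho via least squares of high-fidelity data on the low-fidelity GP.
mu_lo = gp_lo.predict(X_hi)
y_hi = f_hi(X_hi).ravel()
rho = float(np.dot(mu_lo, y_hi) / np.dot(mu_lo, mu_lo))

# Discrepancy GP on the residuals, then the recursive prediction.
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_hi, y_hi - rho * mu_lo)

def predict(X):
    return rho * gp_lo.predict(X) + gp_delta.predict(X)

X_test = np.linspace(0, 1, 101).reshape(-1, 1)
rmse = np.sqrt(np.mean((predict(X_test) - f_hi(X_test).ravel()) ** 2))
print(f"rho = {rho:.2f}, RMSE vs truth = {rmse:.3f}")
```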

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 541
24378 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in very different barcode appearances. This paper proposes deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes to generate a large variety of data close to actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode “scan” and is applicable to real-time applications.
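The synthetic-to-real idea can be sketched as follows: render a clean synthetic barcode, then push it towards realistic shooting conditions with perspective, blur, lighting, and noise. The parameter ranges and the random bar pattern are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
import cv2

def synth_barcode(h=120, w=320, rng=None):
    """Random bar pattern standing in for a real symbology encoder."""
    rng = rng or np.random.default_rng()
    img = np.full((h, w), 255, np.uint8)
    x = 10
    while x < w - 10:
        run = int(rng.integers(2, 7))             # bar/space width in pixels
        if rng.random() < 0.5:
            img[10:h - 10, x:x + run] = 0
        x += run
    return img

def augment(img, rng=None):
    rng = rng or np.random.default_rng()
    h, w = img.shape
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-0.06, 0.06, (4, 2)) * [w, h]       # corner displacement
    M = cv2.getPerspectiveTransform(src, (src + jitter).astype(np.float32))
    img = cv2.warpPerspective(img, M, (w, h), borderValue=255)
    img = cv2.GaussianBlur(img, (0, 0), rng.uniform(0.5, 2.0))  # defocus proxy
    gain, bias = rng.uniform(0.6, 1.1), rng.uniform(-30, 30)    # lighting shift
    noisy = img * gain + bias + rng.normal(0, 8, img.shape)     # sensor noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

train_images = [augment(synth_barcode()) for _ in range(8)]  # detector inputs
```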

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 153
24377 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence

Authors: Muhammad Bilal Shaikh

Abstract:

Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
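As a toy illustration of the late-fusion idea, the sketch below concatenates sensor, image-derived, and text-derived features into one throughput regressor; all feature extractors and data are placeholders (a deployed system would use trained CNN and NLP encoders on real plant data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n = 500
sensor = np.random.rand(n, 8)     # e.g. feed rate, mill power, slurry density
vision = np.random.rand(n, 16)    # assumed embedding of froth/ore camera frames
text = np.random.rand(n, 12)      # assumed embedding of operator/maintenance logs
throughput = np.random.rand(n)    # tonnes per hour (placeholder target)

X = np.hstack([sensor, vision, text])     # simple late fusion by concatenation
model = RandomForestRegressor(n_estimators=200).fit(X[:400], throughput[:400])
print(model.score(X[400:], throughput[400:]))
```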

Keywords: multimodal AI, computer vision, NLP, mineral processing, mining

Procedia PDF Downloads 61
24376 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video) and presents a comparative investigation of the suitability of various data rates. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases, due to the increase in bit error rate.

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 394
24375 Level of Sustainability, Environmental Assessment and Life Cycle Assessment of Industrial Technology Research Projects in Carlos Hilado Memorial State College, Alijis Campus, Bacolod City, Negros Occidental, Philippines

Authors: Rene A. Salmingo

Abstract:

This research initiative was conducted in pursuit of a higher educational institution’s transition to a sustainable future. The study aimed to determine the level of sustainability, environmental impact, and life cycle phase assessment of the industrial technology research projects at the Institute of Information Technology, Carlos Hilado Memorial State College (CHMSC), Alijis Campus, Bacolod City, Negros Occidental, Philippines. The research method was descriptive, utilizing a researcher-made questionnaire to assess the ten (10) completed industrial technology research projects. The mean was used to treat the data; the instrument’s validity was established, following Good and Scates, through revisions and consultations with environmental experts and technology specialists, and Cronbach’s alpha was used to measure reliability. Results indicated that the level of sustainability and the life cycle phase assessment were very high, while the environmental impact of the industrial research projects was rated low. Moreover, the current research projects and environmental education courses in the college were relevant to supporting sustainable industrial technology research projects in the future. Hence, this research initiative will contribute to the transformation of CHMSC as a greening higher educational institution and as a center for sustainable development in the region.

Keywords: environmental impact, industrial technology research projects, life cycle phase assessment, sustainability

Procedia PDF Downloads 178
24374 Effects of Group Cognitive Restructuring and Rational Emotive Behavioral Therapy on Psychological Distress of Awaiting-Trial Inmates in Correctional Centers in North-West, Nigeria

Authors: Muhammad Shafi’U Adamu

Abstract:

This study examined the effects of two group-based Cognitive Behavioral Therapies (CBT), namely Cognitive Restructuring (CR) and Rational Emotive Behavioral Therapy (REBT), on the psychological distress of awaiting-trial inmates in correctional centers in North-West Nigeria. The study had four specific objectives, four research questions, and four null hypotheses. It used a quasi-experimental design involving a pre-test and post-test. The population comprised all 7,962 awaiting-trial inmates in correctional centers in North-West Nigeria; 131 awaiting-trial inmates from three intact correctional centers were selected using the census technique and randomly assigned to three groups (CR, REBT and control). The Kessler Psychological Distress Scale (K10) was adapted for data collection. The instrument was validated by experts and subjected to a pilot study, yielding a Cronbach’s alpha reliability coefficient of 0.772. Each group received treatment for 8 consecutive weeks (60 minutes per week). Data collected from the field were subjected to descriptive statistics (mean, standard deviation and mean difference) to answer the research questions; inferential statistics (ANOVA and the independent-sample t-test) were used to test the null hypotheses at the p ≤ 0.05 level of significance. Results revealed no significant difference among the pre-treatment mean scores of the experimental and control groups. Statistical evidence showed a significant difference among the post-treatment mean scores of the three groups, and the post-hoc multiple-comparison test indicated a post-treatment reduction of psychological distress in the awaiting-trial inmates. Results also showed a significant difference between the post-treatment psychological distress mean scores of male and female awaiting-trial inmates, but no such difference among those exposed to REBT. The research recommends that a standardized, structured CBT counseling treatment be designed for correctional centers across Nigeria, and that CBT counseling techniques be used in the treatment of psychological distress in both correctional and clinical settings.

Keywords: awaiting-trial inmates, cognitive restructuring, correctional centers, rational emotive behavioral therapy

Procedia PDF Downloads 61
24373 Development and Optimization of German Diagnostic Tests in Mathematics for Vocational Training

Authors: J. Thiele

Abstract:

Teachers working at vocational colleges are often confronted with the problem that their students graduated from different schools and therefore each received a different education. Especially in mathematics, many students lack fundamentals, or their previous schools set different priorities. Furthermore, these vocational colleges have to provide qualifications for many different working fields with different core themes. The colleges are interested in measuring the education levels of their students and providing assistance for those who need to catch up. The project mathe-meistern was initiated to remedy this problem at vocational colleges. For this purpose, online tests were developed whose aim is to evaluate the basic mathematical abilities of students. The tests are online multiple-choice tests with a total of 65 items, accessed with a unique transaction number (TAN) for each participant. The content is divided into several categories (arithmetic, algebra, fractions, geometry, etc.). After each test, the student receives a personalized summary depicting their strengths and weaknesses in mathematical basics; teachers can visit a special website to examine the results of their classes or of single students. In total, 5,830 students have participated so far. For standardization and optimization purposes, the tests have been evaluated annually since 2015, using classical and probabilistic test theory, with regard to objectivity, reliability and validity. This paper is about the optimization process, covering the Rasch scaling and standardization of the tests. Additionally, current results using the standardized tests are discussed. To this end, competence levels and types of errors of students attending vocational colleges in North Rhine-Westphalia, Germany, were determined using descriptive data and distractor evaluations.

Keywords: diagnostic tests in mathematics, distractor evaluation, test optimization, test theory

Procedia PDF Downloads 115
24372 Comparing the Knee Kinetics and Kinematics during Non-Steady Movements in Recovered Anterior Cruciate Ligament Injured Badminton Players against an Uninjured Cohort: Case-Control Study

Authors: Anuj Pathare, Aleksandra Birn-Jeffery

Abstract:

Background: The anterior cruciate ligament (ACL) helps stabilize the knee joint by minimizing anterior tibial translation. ACL injury is common in racquet sports and often occurs due to sudden acceleration, deceleration or changes of direction; in badminton, this mechanism most commonly occurs during landing after an overhead stroke. Knee biomechanics during dynamic movements such as walking, running and stair negotiation do not return to normal for more than a year after an ACL reconstruction. This change in biomechanics may lead to re-injury when performing the non-steady movements in sport during which these injuries are most prevalent. Aims: To compare whether the knee kinetics and kinematics of ACL-injury-recovered athletes return to the same level as those of an uninjured cohort during standard movements used for clinical assessment and during badminton shots. Objectives: The objectives of the study were to determine: knee valgus during the single-leg squat, vertical drop jump, net shot and drop shot; the degree of internal or external rotation during the single-leg squat, vertical drop jump, net shot and drop shot; and maximum knee flexion during the single-leg squat, vertical drop jump and net shot. Methods: This case-control study included 14 participants: three ACL-injury-recovered athletes and 11 uninjured participants. The participants performed various functional tasks, including the vertical drop jump, the single-leg squat, the forehand net shot and the forehand drop shot. The data were analysed using a two-way ANOVA, and the reliability of the data was evaluated using the intraclass correlation coefficient. Results: The data showed a significant decrease in the range of knee rotation in the ACL-injured participants compared to the uninjured cohort (F(7, 556) = 2.37; p = 0.021). There was also a decrease in maximum knee flexion angles and an increase in knee valgus angles in the ACL-injured participants, although these were not statistically significant. Conclusion: There was a significant decrease in knee rotation angles in the ACL-injured participants, which could be a potential cause of future re-injury in these athletes. Although the results for the decrease in maximum knee flexion angles and the increase in knee valgus angles were not significant, this may be due to the limited sample of ACL-injured participants; there is potential for these to be identified as variables of interest in the rehabilitation of ACL injuries. These changes in knee biomechanics could be vital in the rehabilitation of ACL-injured athletes in the future, and the inclusion of sport-based tasks (e.g., the net shot) alongside standard protocol movements for ACL assessment would provide a better measure of an athlete’s rehabilitation.

Keywords: ACL, biomechanics, knee injury, racquet sport

Procedia PDF Downloads 163
24371 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analysing the massive amounts of data generated before and during manufacture to solve particular problems. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost, and little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for processing it on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and the processing time of machine tool data through efficient decisions on which datasets should be processed at the ‘edge’ and which should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
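A fast-and-frugal tree is a short sequence of one-reason cues, each able to trigger an immediate exit. A minimal sketch of such an edge/cloud placement heuristic follows, with cues and thresholds that are illustrative assumptions rather than the paper's validated attributes:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    latency_critical: bool    # needed for real-time machine control?
    size_mb: float            # volume per transmission interval
    privacy_restricted: bool  # not allowed to leave the shop floor?
    needs_history: bool       # requires long-term or fleet-wide context?

def place(d: Dataset) -> str:
    """Each cue is checked in order; the first decisive one exits the tree."""
    if d.latency_critical:
        return "edge"         # a cloud round-trip would miss the deadline
    if d.privacy_restricted:
        return "edge"
    if d.size_mb > 50:
        return "edge"         # raw transmission cost outweighs cloud benefit
    if d.needs_history:
        return "cloud"        # fleet-wide models and archives live remotely
    return "cloud"            # default: cheap to ship, elastic to process

print(place(Dataset(True, 2.0, False, False)))   # -> edge
print(place(Dataset(False, 1.0, False, True)))   # -> cloud
```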

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 166
24370 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have been gaining popularity. Many companies have introduced new QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, one should grasp how the demographic profiles, usage patterns, and value of users differ between services. In this study, we analyse real-world data provided by Nomura Research Institute, including demographic data of users and information on their usage of two services: LINE Pay and PayPay. Non-negative matrix factorization (NMF) is widely used to analyse and interpret the features of such data presented in matrix form; however, the target data contain missing values. We therefore use EM-algorithm NMF (EMNMF), which completes the unknown values, to understand the features of the given data. Moreover, to compare the NMF analysis results of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF to analyse the target data. As an interpretation, we show how the features of users differ between LINE Pay and PayPay.
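A minimal sketch of the masked, EM-style NMF step: unknown cells are excluded through a mask, and multiplicative updates are applied to the observed entries only. The rank, iteration count, and toy data are illustrative, and the DNMF comparison step is omitted:

```python
import numpy as np

def emnmf(V, rank=5, iters=500, eps=1e-9, rng=None):
    """Factorise V ~= W @ H using only the observed (non-NaN) entries."""
    rng = rng or np.random.default_rng(0)
    M = ~np.isnan(V)                      # mask of observed entries
    X = np.nan_to_num(V)
    n, m = V.shape
    W, H = rng.random((n, rank)), rng.random((rank, m))
    for _ in range(iters):
        R = M * (W @ H)                   # reconstruction on observed cells only
        H *= (W.T @ (M * X)) / (W.T @ R + eps)
        R = M * (W @ H)
        W *= ((M * X) @ H.T) / (R @ H.T + eps)
    return W, H

# Toy user-by-attribute matrix with roughly 20% missing values.
rng = np.random.default_rng(1)
V = rng.random((100, 12))
V[rng.random(V.shape) < 0.2] = np.nan
W, H = emnmf(V)
observed = ~np.isnan(V)
rmse = np.sqrt(np.mean((np.nan_to_num(V) - W @ H)[observed] ** 2))
print(f"RMSE on observed cells: {rmse:.3f}")
```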

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 121
24369 Development of the Internal Educational Quality Assurance System of Suan Sunandha Rajabhat University

Authors: Nipawan Tharasak, Sajeewan Darbavasu

Abstract:

This research aims 1) to study the opinions, problems and obstacles regarding the internal educational quality assurance system at the individual and university levels, and 2) to propose an approach to developing the quality assurance system of Suan Sunandha Rajabhat University. The study of problems and obstacles was conducted with a sample group consisting of staff and quality assurance committee members of the year 2010; there were 152 respondents, and 5 executives were interviewed. The research tools were document analysis, structured interview questions, and questionnaires with a 5-point rating scale; reliability was 0.981. The data were analysed using percentages, means and standard deviations, together with content analysis. The results can be divided into 3 main points. (1) Implementation of the internal quality assurance system of the university: overall, the input, process and output factors received high scores; considering each item, preparation, planning, monitoring and evaluation, and the use of evaluation results for reporting and improvement, received high scores, while the process itself received an average score. (2) Problems and obstacles: the personnel responsible still lack understanding of the indicators and criteria of quality assurance. (3) Development approach: staff should be encouraged to develop a better understanding of the quality assurance system; a database system for quality assurance should be developed; and the results and suggestions should be applied in the next year’s development planning.

Keywords: development system, internal quality assurance, education, educational quality assurance

Procedia PDF Downloads 283