Search results for: microarray data mining
24650 Examining Coping Resources and Ways of Strategic Coping for Individuals with Spinal Cord Injury During the COVID-19 Crisis
Authors: Se-Hyuk Park, Hee-Jung Seo
Abstract:
Previous studies have investigated effective coping strategies for excessive stress, positive adaptation, resilience, mental health, and personal growth. However, to the best of the authors' knowledge, little research has been conducted to investigate how Koreans with physical disabilities deal with the COVID-19 pandemic. The purpose of this study was to identify coping strategies and coping resources that Koreans with physical disabilities utilized during the COVID-19 crisis. This study used semi-structured, in-depth interviews with 15 participants. Data were qualitatively analyzed using the constant comparative method with content mapping and content mining questions. We identified three salient themes that participants used as coping strategies to deal with various COVID-related challenges: (a) engagement in meaningful activities, (b) improvement of social and emotional support, and (c) experience of resilience. The findings of the present study highlighted that Korean adults with spinal cord injury (SCI) actively engaged in various leisure activities, maintained and developed closer social relationships, and experienced resilience in facing COVID-19-related stressors. These coping strategies were noted as a catalyst for the physical health as well as the psychological well-being of individuals with SCI.
Keywords: spinal cord injury, covid-19 pandemic, coping strategies, coping resources, leisure
Procedia PDF Downloads 42
24649 Interpreting Privacy Harms from a Non-Economic Perspective
Authors: Christopher Muhawe, Masooda Bashir
Abstract:
With increased Internet Communication Technology (ICT), the virtual world has become the new normal. At the same time, there is an unprecedented collection of massive amounts of data by both private and public entities. Unfortunately, this increase in data collection has gone hand in hand with an increase in data misuse and data breaches. Regrettably, the majority of data breach and data misuse claims have been unsuccessful in United States courts for failure to prove direct injury to physical or economic interests. The requirement to express data privacy harms from an economic or physical stance negates the fact that not all data harms are physical or economic in nature. The challenge is compounded by the fact that data breach harms and risks do not attach immediately. This research uses a descriptive and normative approach to show that not all data harms can be expressed in economic or physical terms. Expressing privacy harms purely from an economic or physical perspective negates the fact that data insecurity may result in harms that run counter to the functions of privacy in our lives: the promotion of liberty, selfhood, autonomy, and human social relations, and the furtherance of a free society. No economic value can be placed on these functions of privacy. The proposed approach addresses data harms from a psychological and social perspective.
Keywords: data breach and misuse, economic harms, privacy harms, psychological harms
Procedia PDF Downloads 195
24648 Reduction in Hot Metal Silicon through Statistical Analysis at G-Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, Santanu Mallick, Abhiram Jha, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The quality of hot metal at any blast furnace is judged by its silicon content. Lower hot metal silicon not only enhances process efficiency at steel melting shops but also reduces hot metal costs. The hot metal produced at G-Blast Furnace, Tata Steel Jamshedpur, has a significantly higher Si content than that of benchmark blast furnaces, mainly due to inferior raw material quality. With minimum control over raw material quality, the only option left to control hot metal Si is to optimize the furnace parameters. Therefore, in order to identify the levers to reduce hot metal Si, data mining was carried out and multiple regression models were developed. The statistical analysis revealed that slag B3 {(CaO+MgO)/SiO2}, slag alumina, and hot metal temperature are the key controllable parameters affecting hot metal silicon. Contour plots were used to determine the optimum range of the levels identified through statistical analysis. A trial plan was formulated to operate the relevant parameters at G-Blast Furnace in the identified range to reduce hot metal silicon. This paper details the process followed and the subsequent 15% reduction in hot metal silicon at G-Blast Furnace.
Keywords: blast furnace, optimization, silicon, statistical tools
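For readers unfamiliar with the basicity index named above, the sketch below computes B3 = (CaO + MgO)/SiO2 from slag composition and fits an ordinary least-squares regression of hot metal Si on the three identified levers. The compositions and Si values are illustrative placeholders, not the paper's plant data.

```python
# Hypothetical sketch (not the authors' model): the B3 basicity index named
# in the abstract, plus a simple multiple regression on the three levers.
import numpy as np

def slag_b3(cao: float, mgo: float, sio2: float) -> float:
    """B3 basicity index: (CaO + MgO) / SiO2, all in wt%."""
    return (cao + mgo) / sio2

# Toy illustrative rows: [B3, slag alumina wt%, hot-metal temperature degC]
X = np.array([[1.05, 18.2, 1480],
              [1.12, 17.5, 1495],
              [1.20, 16.8, 1510],
              [1.10, 17.8, 1490],
              [1.15, 17.0, 1500]])
y = np.array([0.95, 0.82, 0.65, 0.80, 0.74])   # hot-metal Si, wt%

# Ordinary least squares via an intercept column and lstsq.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("B3 example:", round(slag_b3(38.0, 9.5, 45.0), 3))
print("regression coefficients:", coef)
```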
Procedia PDF Downloads 221
24647 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course
Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu
Abstract:
This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. A dataset of 140 rows pertaining to the performance of two batches of students was used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data was used to train machine learning models using the PyCaret package. For the CTGAN data, the Ada Boost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, whereas the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, gaussian copula CTGAN
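A minimal sketch of the pipeline the abstract describes, assuming the open-source ctgan and pycaret packages and a hypothetical physics_grades.csv file with a 'grade' target; the authors' exact code and data are not reproduced here, and the Gaussian-copula variant would follow the same pattern.

```python
# Hedged sketch: fit CTGAN on real rows, train classifiers on the synthetic
# sample with PyCaret, then score the chosen model back on the real data.
import pandas as pd
from ctgan import CTGAN
from pycaret.classification import setup, compare_models, predict_model

real = pd.read_csv("physics_grades.csv")          # hypothetical file name

# 1. Fit CTGAN on the real rows and draw a larger synthetic sample.
model = CTGAN(epochs=300)
model.fit(real, discrete_columns=["grade"])
synthetic = model.sample(1000)

# 2. Train classifiers on the synthetic data with PyCaret's model sweep.
setup(data=synthetic, target="grade", session_id=42)
best = compare_models()                           # e.g. ADA or LR, per the paper

# 3. Test the chosen model back on the real data.
print(predict_model(best, data=real))
```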
Procedia PDF Downloads 43
24646 Data Access, AI Intensity, and Scale Advantages
Authors: Chuping Lo
Abstract:
This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Therefore, large countries that inherently have greater data resources tend to have higher incomes than smaller countries, such that the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.
Keywords: digital intensity, digital divide, international trade, economies of scale
Procedia PDF Downloads 66
24645 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data
Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju
Abstract:
Nowadays, multimedia data are used to store secure information. All previous methods allocate space in an image for data embedding after encryption. In this paper, we propose a novel method that reserves room, with a surrounding boundary, in the image before encryption using a traditional reversible data hiding (RDH) algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted images. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed in this paper, which improves efficiency tenfold compared to the other processes discussed.
Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding
Procedia PDF Downloads 410
24644 Identity Verification Using k-NN Classifiers and Autistic Genetic Data
Authors: Fuad M. Alkoot
Abstract:
DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to use gene data that was initially collected for autism detection and to determine whether, and how accurately, this data can serve identification applications. Our main goal is to determine whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with using the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and the classification rate remains close to optimal as the noise standard deviation rises to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
Keywords: biometrics, genetic data, identity verification, k nearest neighbor
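The noise experiment can be sketched as follows, with synthetic stand-in features since the gene dataset itself is not available here; Gaussian noise of increasing standard deviation is added to the test set before nearest-neighbor classification.

```python
# Illustrative sketch of the robustness-to-noise experiment (hypothetical data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
rng = np.random.default_rng(0)
for sigma in (0.0, 1.0, 2.0, 3.0):       # noise levels from the abstract
    noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
    print(f"sigma={sigma}: accuracy={clf.score(noisy, y_te):.3f}")
```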
Procedia PDF Downloads 253
24643 A Review on Intelligent Systems for Geoscience
Authors: R. Palson Kennedy, P. Kiran Sai
Abstract:
This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences. To meet that need, it presents a review organized around the data life cycle. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems: geoscience data is notoriously difficult to analyze, since it is frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.
Keywords: data science, intelligent system, machine learning, big data, data life cycle, recent development, geoscience
Procedia PDF Downloads 133
24642 Relevance Feedback within CBIR Systems
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
We present the results of a comparative study of techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the K-Nearest Neighbours algorithm (KNN), belongs to the data mining field; the rest are related purely to information retrieval and fall under three major axes: query shifting, feature weighting, and optimization of the similarity metric parameters. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in that the rating of the current target image is influenced not only by the seeds but also by the images already rated. The results presented here were obtained from experiments conducted on the Wang database for one iteration, utilizing colour moments on the RGB space. This compact descriptor, colour moments, is adequate for the efficiency needed in interactive systems. The results obtained allow us to claim that the proposed algorithm yields good results; it even outperforms a wide range of techniques available in the literature.
Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio's formula, adaptive shifting query, feature weighting, original KNN, incremental KNN
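The colour-moments descriptor mentioned above is simple to compute: the mean, standard deviation, and skewness of each RGB channel give a compact nine-dimensional feature vector. A sketch, assuming scipy is available:

```python
# Hedged sketch of the colour-moments descriptor: three moments per channel.
import numpy as np
from scipy.stats import skew

def colour_moments(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 RGB array; returns [means, stds, skews] per channel."""
    pixels = image.reshape(-1, 3).astype(float)
    return np.concatenate([pixels.mean(axis=0),
                           pixels.std(axis=0),
                           skew(pixels, axis=0)])

img = np.random.randint(0, 256, (64, 64, 3))      # stand-in for a Wang image
print(colour_moments(img))                         # 9 values
```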
Procedia PDF Downloads 278
24641 Particle Size Analysis of Itagunmodi Southwestern Nigeria Alluvial Gold Ore Sample by Gaudin Schumann Method
Authors: Olaniyi Awe, Adelana R. Adetunji, Abraham Adeleke
Abstract:
Mining of alluvial gold ore by artisanal miners has been going on for decades at Itagunmodi, Southwestern Nigeria. In order to optimize the traditional panning gravity separation method commonly used in the area, a mineral particle size analysis study is critical. This study analyzed alluvial gold ore samples collected at five identified locations in the area with a view to determining the ore particle size distributions. A measured 500 g of as-received alluvial gold ore sample was introduced into the uppermost sieve of an electrical sieve shaker consisting of sieves arranged in order of decreasing nominal apertures of 5600 μm, 3350 μm, 2800 μm, 355 μm, 250 μm, 125 μm and 90 μm, and operated for 20 minutes. The amount of material retained on each sieve was measured and tabulated for analysis. A screen analysis graph using the Gaudin Schumann method was drawn for each of the screen tests on the alluvial samples. The study showed that the percentages of the fine particle size fraction (-125+90 μm) were 45.00%, 36.00%, 39.60%, 43.00% and 36.80% for the selected samples. These primary ore characteristic results provide reference data for alluvial gold ore processing method selection, process performance measurement, and optimization.
Keywords: alluvial gold ore, sieve shaker, particle size, Gaudin Schumann
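A sketch of the underlying screen-analysis arithmetic, with illustrative retained masses rather than the study's measurements: cumulative percent passing is computed from the sieve data and the Gaudin Schumann law Y = 100(x/k)^m is fitted on log-log axes.

```python
# Hypothetical sketch: cumulative % passing and a Gaudin Schumann fit.
import numpy as np

apertures = np.array([5600, 3350, 2800, 355, 250, 125, 90])        # um, descending
retained = np.array([40.0, 55.0, 60.0, 70.0, 50.0, 120.0, 105.0])  # g, toy values

total = retained.sum()
cum_retained = np.cumsum(retained)
pct_passing = 100.0 * (1.0 - cum_retained / total)

# Least-squares fit of log10(Y) = m*log10(x) + c on the nonzero points.
mask = pct_passing > 0
m, c = np.polyfit(np.log10(apertures[mask]), np.log10(pct_passing[mask]), 1)
k = 10 ** ((2.0 - c) / m)      # size modulus, where Y reaches 100%
print(f"distribution modulus m = {m:.2f}, size modulus k = {k:.0f} um")
```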
Procedia PDF Downloads 60
24640 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows
Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham
Abstract:
In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk of and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10%, and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis
Procedia PDF Downloads 64
24639 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One key benefit of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn improves overall performance by enabling better business insights through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated by feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 133
24638 Clean Coal Using Coal Bed Methane: A Pollution Control Mechanism
Authors: Arish Iqbal, Santosh Kumar Singh
Abstract:
Coal is one of the major sources of energy throughout the world, but consideration of its effect on the environment gave rise to 'Clean Coal Technologies' (CCT). In this paper, we study why CCTs are essential and what the different types of CCTs are. The coal and CCT scenario in India is also introduced. Coal bed methane (CBM), one of the major CCT areas, is studied in detail, including the different types of coal bed methane and their methods of extraction. The different problem areas encountered during the extraction of CBM are identified and discussed, as is how CBM can be used as a fuel of the future.
Keywords: CBM (coal bed methane), CCS (carbon capture and storage), CCT (clean coal technology), CMM (coal mining methane)
Procedia PDF Downloads 239
24637 Big Data Analysis with RHadoop
Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim
Abstract:
It is almost impossible to store or analyze big data, which is increasing exponentially, with traditional technologies. Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with different sizes of actual data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop with R's lm function and the biglm package based on bigmemory. The results showed that our RHadoop was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the data size grows.
Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop
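RHadoop is R-specific, but the split-apply-combine idea behind its parallel regression can be sketched in a language-agnostic way: each map task computes partial X'X and X'y sums on its chunk of rows, and the reduce step combines them and solves the normal equations. A minimal illustration (not the authors' code):

```python
# MapReduce-style parallel least squares: partial normal-equation sums per
# chunk ("map"), combined and solved once ("reduce").
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
y = X @ np.array([2.0, 0.5, -1.0, 3.0]) + rng.normal(scale=0.1, size=10_000)

def map_task(X_chunk, y_chunk):
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

# Simulate several map tasks over row-wise chunks of the data.
parts = [map_task(Xc, yc) for Xc, yc in zip(np.array_split(X, 4),
                                            np.array_split(y, 4))]
XtX = sum(p[0] for p in parts)        # reduce: combine partial sums
Xty = sum(p[1] for p in parts)
beta = np.linalg.solve(XtX, Xty)      # identical to a single-node lm() fit
print(beta)
```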
Procedia PDF Downloads 435
24636 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
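The paper's exact procedure is not reproduced here, but the core idea can be sketched: each generated task applies a different random permutation to the labels, so the same feature is associated with multiple labels across tasks and no single feature-to-label mapping can be memorized.

```python
# Hedged sketch of mutually exclusive task generation via label permutation.
import numpy as np

def make_mutex_tasks(X, y, n_tasks, n_classes, seed=0):
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(n_classes)   # label shuffle for this task
        tasks.append((X, perm[y]))          # same inputs, remapped labels
    return tasks

X = np.random.rand(20, 8)                   # toy features
y = np.random.randint(0, 5, size=20)        # toy labels, 5 classes
for i, (_, y_task) in enumerate(make_mutex_tasks(X, y, n_tasks=3, n_classes=5)):
    print(f"task {i}: first labels {y_task[:5]}")
```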
Procedia PDF Downloads 91
24635 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks in data aggregation is positioning the aggregator points. Much work has been done on data aggregation, but efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network and propose an algorithm to select the aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
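The abstract does not detail the proposed algorithm, so the sketch below shows only a common baseline for the same problem: placing aggregation points at the centroids of k-means clusters of sensor positions, which minimizes the total sensor-to-aggregator distance.

```python
# Baseline sketch (not the authors' algorithm): k-means placement of
# aggregation points over a hypothetical sensor field.
import numpy as np
from sklearn.cluster import KMeans

sensors = np.random.rand(100, 2) * 100     # hypothetical 100 m x 100 m field
k = 5                                      # number of aggregator nodes available

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(sensors)
aggregators = km.cluster_centers_          # proposed aggregator positions
dists = km.transform(sensors).min(axis=1)  # each sensor's nearest aggregator
print("aggregator positions:\n", aggregators)
print("mean sensor-to-aggregator distance:", dists.mean())
```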
Procedia PDF Downloads 155
24634 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available to model data collected with reference to location in space, from the classical spatial econometrics approaches to recent developments on spatial econometrics for count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for new possible directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
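As an illustration of the model class under review, one common Bayesian hierarchical specification for spatially lagged counts (a generic example, not a model proposed by the paper) is:

```latex
\begin{align*}
  y_i \mid \lambda_i &\sim \mathrm{Poisson}(\lambda_i), \\
  \log \lambda_i &= \mathbf{x}_i^{\top}\boldsymbol{\beta}
      + \rho \sum_{j=1}^{n} w_{ij} \log \lambda_j + \varepsilon_i, \\
  \varepsilon_i &\sim \mathcal{N}(0, \sigma^2),
\end{align*}
```

where the w_ij are row-standardized spatial weights, rho is the spatial autocorrelation parameter, and the hierarchy is completed with priors on beta, rho, and sigma squared.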
Procedia PDF Downloads 591
24633 A NoSQL Based Approach for Real-Time Managing of Robotics's Data
Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir
Abstract:
This paper deals with the continual progression of data, from which new data management solutions have emerged: NoSQL databases. They span several areas, such as personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. These database management systems are becoming increasingly widespread. They store data very well, and with the trend toward big data, new storage challenges demand new structures and methods for managing enterprise data. New intelligent machines in the e-learning sector thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. The implementation of NoSQL for robotics wrestles all the data robots acquire into usable form, because with ordinary robotics approaches we face severe limits in managing and finding exact information in real time. Our proposed approach was demonstrated by experimental studies and a running example used as a use case.
Keywords: NoSQL databases, database management systems, robotics, big data
Procedia PDF Downloads 352
24632 Structural Strength Evaluation and Wear Prediction of Double Helix Steel Wire Ropes for Heavy Machinery
Authors: Krunal Thakar
Abstract:
Wire ropes combine high tensile strength and flexibility compared to other general steel products. They are used in various application areas such as cranes, mining, elevators, bridges, and cable cars. The earliest reported use of wire ropes was for mining hoist applications in the 1830s. Since then, there has been substantial advancement in the design of wire ropes for various application areas. Under operational conditions, wire ropes are subjected to varying tensile and bending loads, resulting in material wear and eventual structural failure due to fretting fatigue. Conventional inspection methods for determining wire failure are limited to the outer wires of the rope, and to date there is no effective mathematical model to examine the inter-wire contact forces and wear characteristics. The scope of this paper is to present a computational simulation technique to evaluate inter-wire contact forces and wear, which are in many cases responsible for rope failure. Two different types of ropes, IWRC-6xFi(29) and U3xSeS(48), were taken for structural strength evaluation and wear prediction. Both ropes have a double helix twisted wire profile as per JIS standards and are mainly used in cranes. CAD models of both ropes were developed in general-purpose design software using an in-house developed formulation to generate the double helix profile. Numerical simulation was done under two different load cases: (a) axial tension and (b) bending over sheave. Different parameters such as stresses, contact forces, wear depth, and load-elongation were investigated and compared between the two ropes. The numerical simulation method facilitates detailed investigation of inter-wire contact and wear characteristics. In addition, various selection parameters such as sheave diameter, rope diameter, helix angle, swaging, and maximum load carrying capacity can be quickly analyzed.
Keywords: steel wire ropes, numerical simulation, material wear, structural strength, axial tension, bending over sheave
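The in-house double-helix formulation is not published in the abstract; a textbook parametrization that serves the same purpose writes the strand centerline as a single helix of radius R_s and pitch p_s and winds the wire around it in the strand's Frenet frame:

```latex
\[
  \mathbf{c}(\theta) = \Bigl( R_s \cos\theta,\; R_s \sin\theta,\;
                              \tfrac{p_s}{2\pi}\,\theta \Bigr), \qquad
  \mathbf{w}(\theta) = \mathbf{c}(\theta)
      + r_w \cos\phi(\theta)\,\mathbf{N}(\theta)
      + r_w \sin\phi(\theta)\,\mathbf{B}(\theta),
  \qquad \phi(\theta) = m\,\theta + \phi_0,
\]
```

where r_w is the wire helix radius, N and B are the normal and binormal of the strand centerline, m sets the wire lay relative to the strand lay, and phi_0 is the wire's angular offset within the strand.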
Procedia PDF Downloads 151
24631 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex task. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: mutex task generation, data augmentation, meta-learning, text classification
Procedia PDF Downloads 141
24630 “It Isn’t a State Problem”: The Minas Conga Mine Controversy and Exemplifying the Need for Binding International Obligations on Corporate Actors
Authors: Cindy Woods
Abstract:
After years of implacable neoliberal globalization, multinational corporations have moved from the periphery to the center of the international legal agenda. Human rights advocates have long called for greater corporate accountability in the international arena. The creation of the Global Compact in 2000, while aimed at fostering greater corporate respect for human rights, did not silence these calls. After multiple unsuccessful attempts to adopt a set of norms relating to the human rights responsibilities of transnational corporations, the United Nations succeeded in 2008 with the Guiding Principles on Business and Human Rights (Guiding Principles). The Guiding Principles, praised by some within the international human rights community for their recognition of an individual corporate responsibility to respect human rights, have not escaped their share of criticism. Many view the Guiding Principles as toothless, failing to directly impose obligations upon corporations, and call for binding international obligations on corporate entities. After decades of attempts to promulgate human rights obligations for multinational corporations, the existing legal frameworks fall short of protecting individuals from the human rights abuses of multinational corporations. The Global Compact and Guiding Principles are proof of the United Nations' unwillingness to impose international legal obligations on corporate actors. In June 2014, the Human Rights Council adopted a resolution to draft international legally binding human rights norms for business entities; however, key players in the international arena have already announced that they will not cooperate with such efforts. This Note, through an overview of the existing corporate accountability frameworks and a study of Newmont Mining's Minas Conga project in Peru, argues that binding international human rights obligations on corporations are necessary to fully protect human rights. Where states refuse to or simply cannot uphold their duty to protect individuals from transnational businesses' human rights transgressions, there must exist mechanisms to pursue justice directly against the multinational corporation.
Keywords: business and human rights, Latin America, international treaty on business and human rights, mining, human rights
Procedia PDF Downloads 499
24629 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming
Authors: Milind Chaudhari, Suhail Balasinor
Abstract:
Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a significant chance that world food production will drop by 40% in the next two decades. Vertical farming can aid food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality.
Keywords: big data, IoT, vertical farming, indoor farming
Procedia PDF Downloads 173
24628 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt
Authors: A. Anis, W. Bekheet, A. El Hakim
Abstract:
Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity. Establishing a sustainable SMS requires a comprehensive, reliable data system for all information pertinent to road crashes. This paper surveys the data available in Egypt and validates it for use in an SMS, provides some missing data, and points out the data that remain unavailable, looking forward to contributions from the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt fall into three categories. The first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable. The second is data that is not available but can be estimated; an example of estimating vehicle cost is given in this research. The third is data that is not available and must be measured case by case, such as the functional and geometric properties of a facility. Some inquiries are posed to the scientific community, such as how to improve the links among road safety stakeholders in order to obtain a consistent, unbiased, and reliable data system.
Keywords: road safety management system, road crash, road fatality, road injury
Procedia PDF Downloads 145
24627 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE
Authors: Oualid Walid Ben Ali
Abstract:
Big Data has become one of today's buzzwords. The recent explosion of digital data has led organizations, whether private or public, into a new era of more efficient decision making. At some point, businesses decided to use the concept to learn what makes their clients tick, with phrases like 'sales funnel' analysis, 'actionable insights', and 'positive business impact'. So it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data doesn't have to serve business purposes only; it can also assist law enforcement, improve policing, or enhance road safety. This paper briefly presents how Big Data has been used in policing to improve decision making in daily police operations. As an example, we present a big-data-driven system used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE
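The system's internal design is not public here, but the core allocation step the abstract describes, finding the nearest available car to an incident, can be sketched with the haversine great-circle distance (coordinates below are hypothetical):

```python
# Minimal sketch of nearest-car dispatching via the haversine formula.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical available patrol cars as (id, lat, lon) around Abu Dhabi.
cars = [("P-01", 24.4539, 54.3773), ("P-02", 24.4667, 54.3667),
        ("P-03", 24.4200, 54.4400)]
incident = (24.4512, 54.3962)

nearest = min(cars, key=lambda c: haversine_km(c[1], c[2], *incident))
print("dispatch:", nearest[0])
```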
Procedia PDF Downloads 490
24626 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition
Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi
Abstract:
In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of experimental data. The method can be used to obtain an offline reduced model that approximates the experimental data, reproduces and follows the data and the system order, and matches the experimental data at certain frequency ratios. In this study, the method is compared across different experimental datasets, and the influence of the chosen reduction order on obtaining a sufficient matching condition is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.
Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data
Procedia PDF Downloads 400
24625 An Empirical Study of the Impacts of Big Data on Firm Performance
Authors: Thuan Nguyen
Abstract:
In the present time, data is to a data-driven, knowledge-based economy what oil was to the industrial age hundreds of years ago. Data is everywhere in vast volumes! Big data analytics is expected to help firms not only improve performance efficiently but also completely transform how they run their business. However, employing the emergent technology successfully is not easy, and assessing the role of big data in improving firm performance is even harder. There has been a lack of studies examining the impact of big data analytics on organizational performance, and this study aimed to fill that gap. The present study used firms' intellectual capital as a proxy for big data in evaluating its impact on organizational performance. It employed the Value Added Intellectual Coefficient (VAIC) method to measure firm intellectual capital via its three main components (human capital efficiency, structural capital efficiency, and capital employed efficiency), and then used the structural equation modeling technique to model the data and test the models. Financial fundamental and market data of 100 randomly selected publicly listed firms were collected. The results of the tests showed that only human capital efficiency had a significant positive impact on firm profitability, which highlights the prominent human role in the impact of big data technology.
Keywords: big data, big data analytics, intellectual capital, organizational performance, value added intellectual coefficient
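The VAIC computation itself follows Pulic's standard formulation, sketched below with hypothetical figures: HCE = VA/HC, SCE = (VA - HC)/VA, CEE = VA/CE, and VAIC is their sum.

```python
# Sketch of the Value Added Intellectual Coefficient (standard Pulic
# formulation; the figures below are hypothetical, not the study's data).
def vaic(value_added: float, human_capital: float, capital_employed: float):
    """Returns (VAIC, (HCE, SCE, CEE))."""
    hce = value_added / human_capital                   # human capital efficiency
    sce = (value_added - human_capital) / value_added   # structural capital eff.
    cee = value_added / capital_employed                # capital employed eff.
    return hce + sce + cee, (hce, sce, cee)

total, (hce, sce, cee) = vaic(value_added=500.0, human_capital=300.0,
                              capital_employed=2000.0)
print(f"HCE={hce:.2f}  SCE={sce:.2f}  CEE={cee:.2f}  VAIC={total:.2f}")
```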
Procedia PDF Downloads 243
24624 Automated Test Data Generation for Some Types of Algorithm
Authors: Hitesh Tahbildar
Abstract:
The cost of test data generation for a program is computationally very high. In the general case, no algorithm to generate test data for all types of algorithms has been found, and the cost of generating test data differs across algorithm types. To date, work has emphasized the need to generate test data for different types of programming constructs rather than for different types of algorithms. Test data generation methods have been implemented to find heuristics for different types of algorithms. Algorithm types including divide and conquer, backtracking, the greedy approach, and dynamic programming have been tested to find the minimum cost of test data generation. Our experimental results suggest that some of these algorithm types can serve as a necessary condition for selecting heuristics, while programming constructs are a sufficient condition for selecting our heuristics. Finally, we recommend the different test data generation heuristics to be selected for different types of algorithms.
Keywords: longest path, saturation point, lmax, kL, kS
Procedia PDF Downloads 403
24623 The Perspective on Data Collection Instruments for Younger Learners
Authors: Hatice Kübra Koç
Abstract:
Collecting reliable and valid data is one of the most significant issues for researchers in academia. However, the procedure is not the same for all target groups: when collecting data from teenagers, young adults, or adults, researchers can use common data collection tools such as questionnaires, interviews, and semi-structured interviews, yet for young learners, and especially very young ones, such reliable and valid data collection tools cannot be easily designed or applied. In this study, common data collection tools are first examined for the 'very young' and 'young learner' participant groups, since the quality and efficiency of an academic study rest mainly on valid and correct data collection and analysis procedures. Secondly, two different data collection instruments for very young and young learners are presented and their efficacy discussed. Finally, a suggested data collection tool, a performance-based questionnaire, developed specifically for the 'very young' and 'young learner' participant groups in the field of teaching English to young learners as a foreign language, is presented in this study. The design procedure and suggested items/factors for the suggested tool are revealed at the end of the study to help researchers who study young and very young learners.
Keywords: data collection instruments, performance-based questionnaire, young learners, very young learners
Procedia PDF Downloads 90
24622 Generating Swarm Satellite Data Using Long Short-Term Memory and Generative Adversarial Networks for the Detection of Seismic Precursors
Authors: Yaxin Bi
Abstract:
Accurate prediction and understanding of the evolution mechanisms of earthquakes remain challenging in the fields of geology, geophysics, and seismology. This study leverages Long Short-Term Memory (LSTM) networks and Generative Adversarial Networks (GANs), a generative model tailored to time-series data, for generating synthetic time series data based on Swarm satellite data, which will be used for detecting seismic anomalies. LSTMs demonstrated commendable predictive performance in generating synthetic data across multiple countries. In contrast, the GAN models struggled to generate synthetic data, often producing non-informative values, although they were able to capture the data distribution of the time series. These findings highlight both the promise and challenges associated with applying deep learning techniques to generate synthetic data, underscoring the potential of deep learning in generating synthetic electromagnetic satellite data.
Keywords: LSTM, GAN, earthquake, synthetic data, generative AI, seismic precursors
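A hedged sketch of the LSTM half of the study, assuming Keras and a simulated series in place of the Swarm magnetic-field data: the network learns one-step-ahead prediction on sliding windows and is then rolled forward on its own outputs to synthesize new data.

```python
# Sketch: LSTM one-step predictor rolled forward to generate synthetic data.
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 60, 2000)) + 0.05 * np.random.randn(2000)
w = 32                                   # input window length
X = np.stack([series[i:i + w] for i in range(len(series) - w)])[..., None]
y = series[w:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(w, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Roll the model forward on its own outputs to synthesize new points.
window = series[-w:].tolist()
synthetic = []
for _ in range(200):
    nxt = float(model.predict(np.array(window[-w:])[None, :, None],
                              verbose=0)[0, 0])
    synthetic.append(nxt)
    window.append(nxt)
print(synthetic[:5])
```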
Procedia PDF Downloads 31
24621 Generation of Quasi-Measurement Data for On-Line Process Data Analysis
Authors: Hyun-Woo Cho
Abstract:
To ensure the safety of a manufacturing process, one should quickly identify the assignable cause of a fault on an on-line basis. To this end, many statistical techniques, including linear and nonlinear methods, have been frequently utilized. However, such methods suffer from the major problem of small sample size, which is mostly attributable to the characteristics of the empirical models used as reference models. This work presents a new method to overcome the insufficiency of measurement data in monitoring and diagnosis tasks. Some quasi-measurement data are generated from existing data based on two indices, similarity and importance. The performance of the method is demonstrated using a real data set. The results show that the presented method handles the insufficiency problem successfully. In addition, it is quite efficient in terms of computational speed and memory usage, so on-line implementation of the method for monitoring and diagnosis purposes is straightforward.
Keywords: data analysis, diagnosis, monitoring, process data, quality control
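The paper's similarity and importance indices are not specified in the abstract, so the following is only a sketch of the general idea under that assumption: quasi-measurements are synthesized by interpolating each existing sample toward a similar (nearby) sample.

```python
# Hedged sketch of quasi-measurement generation by nearest-neighbor
# interpolation (SMOTE-like; not the paper's exact indices).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def quasi_measurements(X, n_new, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    _, idx = nn.kneighbors(X)               # idx[:, 1] = nearest other sample
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        j = idx[i, 1]
        alpha = rng.uniform()               # interpolation weight
        new.append(X[i] + alpha * (X[j] - X[i]))
    return np.array(new)

X = np.random.rand(50, 6)                   # existing process measurements
print(quasi_measurements(X, n_new=10).shape)    # (10, 6)
```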
Procedia PDF Downloads 480