Search results for: cloud data privacy and integrity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25488

23838 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, the key challenge in unconventional gas is the need for advanced production-forecasting approaches, owing to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for training and testing the ANN model. Input data related to hydraulic fracturing, well completion, and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared in terms of mean absolute percentage error (MAPE). The MAPE was 44.22% for the ANN using all data sets; 10.08%, 5.26%, and 6.35% for clusters 1, 2, and 3, respectively; and 32.23% (ANN VI) and 23.19% (SVM VI). The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
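
A minimal sketch of the two strategies compared above, a single ANN trained on all wells versus per-cluster ANNs, is shown below. It assumes synthetic stand-ins for the 129-well Eagle Ford inputs, which are not public, and uses scikit-learn rather than the authors' unstated framework:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(129, 6))             # stand-ins for fracturing/completion variables
y = np.abs(X @ rng.normal(size=6)) + 1.0  # stand-in for cumulative production

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Baseline: a single ANN trained on all data sets
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("all-data MAPE: %.2f%%" % mape(y_te, ann.predict(X_te)))

# Pre-clustered variant: one ANN per k-means cluster
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)
for k in range(3):
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    m.fit(X_tr[km.labels_ == k], y_tr[km.labels_ == k])
    mask = km.predict(X_te) == k
    if mask.any():
        print("cluster %d MAPE: %.2f%%" % (k + 1, mape(y_te[mask], m.predict(X_te[mask]))))
```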

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance

Procedia PDF Downloads 185
23837 Human LACE1 Functions Pro-Apoptotic and Interacts with Mitochondrial YME1L Protease

Authors: Lukas Stiburek, Jana Cesnekova, Josef Houstek, Jiri Zeman

Abstract:

Cellular function depends on mitochondrial function and integrity, which are therefore maintained by several classes of proteins possessing chaperone and/or proteolytic activities. In this work, we focused on characterizing the function of LACE1 (lactation elevated 1) in the maintenance of mitochondrial protein homeostasis. LACE1 is the human homologue of the yeast mitochondrial Afg1 ATPase, a member of the SEC18-NSF, PAS1, CDC48-VCP, TBP family. Yeast Afg1 was shown to be involved in mitochondrial complex IV biogenesis, and based on its similarity with CDC48 (p97/VCP), it was suggested to facilitate the extraction of polytopic membrane proteins. Here we show that LACE1, a mitochondrial integral membrane protein, exists as part of three complexes of approximately 140, 400, and 500 kDa and is essential for maintenance of a fused mitochondrial reticulum and lamellar cristae morphology. Using affinity purification of LACE1-FLAG expressed in a LACE1-knockdown background, we show that the protein physically interacts with the mitochondrial inner membrane protease YME1L. We further show that human LACE1 exhibits significant pro-apoptotic activity and that the protein is required for normal function of the mitochondrial respiratory chain. Thus, our work establishes LACE1 as a novel factor with a crucial role in the maintenance of mitochondrial homeostasis.

Keywords: LACE1, mitochondria, apoptosis, protease

Procedia PDF Downloads 298
23836 Development of a Device for Detecting Fluids in the Esophagus

Authors: F. J. Puertas, M. Castro, A. Tebar, P. J. Fito, R. Gadea, J. M. Monzó, R. J. Colom

Abstract:

There is a great diversity of diseases, generally of a digestive nature, that affect the integrity of the walls of the esophagus. Among them, gastroesophageal reflux is a common disease in the general population that affects the patient's quality of life; however, there are still unmet diagnostic and therapeutic needs. The consequences of untreated or asymptomatic acid reflux on the esophageal mucosa are not only pain, heartburn, and acid regurgitation but also an increased risk of esophageal cancer. Currently, the diagnostic methods for detecting problems in the esophageal tract are invasive and uncomfortable: 24-hour impedance-pH monitoring forces the patient to endure hours of discomfort to reach a correct diagnosis. In this work, the development of a sensor able to measure at depth is proposed, allowing the detection of liquids circulating in the esophageal tract. The multisensor detection system is based on radiofrequency photospectrometry. At the experimental level, subjects representative of the population in terms of sex and age were used, placing the sensors between the trachea and the diaphragm and analyzing the measurements in vacuum, water, orange juice, and saline media. The results obtained allowed us to detect the appearance of different liquid media in the esophagus, distinguishing them on the basis of their ionic content.

Keywords: bioimpedance, dielectric spectroscopy, gastroesophageal reflux, GERD

Procedia PDF Downloads 90
23835 The Role of Homocysteine in Bone and Cartilage Regeneration

Authors: Arif İsmailov, Naila Hasanova, Gunay Orujalieva

Abstract:

Homocysteine (HCY) is an indicator of prognostic value in monitoring regenerative processes in osteoporosis and osteoporotic fractures. Osteoporosis is known to be a serious health and economic problem, especially for women in the postmenopausal period. The study was carried out on patients aged 45-83 years divided into 3 groups: group I, 14 patients with osteoporosis; group II, 15 patients with non-osteoporotic fractures; group III, 25 patients with osteoporotic fractures. The control group consisted of 14 practically healthy people. Blood samples were taken at 3 stages to monitor the dynamics of the HCY level: on the 1st day before treatment, on the 10th day of treatment, and 1 month after it. Blood levels of HCY were determined at a wavelength of 450 nm by ELISA (Cloud-Clone Corp. ELISA kits, USA). The statistical evaluation was performed using the SPSS 26.0 program (IBM SPSS Inc., USA). The results showed that on the 1st day before treatment, the HCY concentration was increased 2.7 times (PU = 0.108) in group I, 5.6 times (PU < 0.001) in group II, and 6.5 times (PU < 0.001) in group III compared to the control group. Thus, the average value of HCY was 1.76 ± 0.56 μg/ml in group I, 3.57 ± 0.62 μg/ml in group II, and 4.2 ± 0.50 μg/ml in group III. The HCY level increases more sharply after fractures, especially in osteoporotic patients. During the treatment period, vitamin D plays an important role in the synthesis of the cystathionine β-synthase enzyme, which regulates HCY metabolism. Increased HCY levels could increase fracture risk through interference with collagen cross-linking.

Keywords: homocysteine, osteoporosis, osteoporotic fractures, Vitamin D

Procedia PDF Downloads 44
23834 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is causing change in all aspects of society. While the expansion of renewable energies proceeds, general studies about the potential of demand-side management have not convinced industry to reinforce smart-grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of the strategic decision problem of integrating renewable energies into industrial energy management. This question arises when considering a change of electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation, and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
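
The optimization step can be illustrated with a toy load-shifting calculation; the hourly spot prices and the flexible four-hour job below are illustrative assumptions, not data from the study:

```python
import numpy as np

# Hypothetical day-ahead spot prices in EUR/kWh for 24 hours
prices = np.array([0.09, 0.08, 0.07, 0.07, 0.08, 0.10, 0.14, 0.18,
                   0.20, 0.19, 0.17, 0.15, 0.12, 0.10, 0.09, 0.11,
                   0.15, 0.21, 0.24, 0.22, 0.18, 0.14, 0.11, 0.10])

runs_needed = 4                                 # flexible job: four 1-hour runs per day
cheapest = np.sort(np.argsort(prices)[:runs_needed])

fixed_rate_cost = runs_needed * 0.15            # standard contract rate
spot_naive_cost = prices[8:12].sum()            # always run in the morning shift
spot_shifted_cost = prices[cheapest].sum()      # shifted to the cheapest hours
print(cheapest, fixed_rate_cost, spot_naive_cost, spot_shifted_cost)
```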

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 428
23833 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree-structure information gathered during hierarchical clustering. The proposed method is an extension of proximity-based coloring, which suffers from a few undesired side effects if the hierarchical tree structure is not a balanced tree. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found the proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
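
A minimal sketch of one way to realize such dissimilarity-spaced coloring is given below; it takes the cophenetic (tree) distance between neighbouring dendrogram leaves as the dissimilarity, which is an assumption, since the abstract does not fix the exact measure:

```python
import colorsys
import numpy as np
from scipy.cluster.hierarchy import cophenet, leaves_list, linkage
from scipy.spatial.distance import pdist, squareform

X = np.random.default_rng(6).random((12, 4))    # 12 objects, 4 variables
Z = linkage(pdist(X), method="average")
order = leaves_list(Z)                          # leaf order of the dendrogram

# Space hues along the leaf order in proportion to the tree (cophenetic)
# dissimilarity between neighbouring leaves, so similar objects get
# similar colors on the parallel-coordinate plot.
coph = squareform(cophenet(Z))
gaps = np.array([coph[order[i], order[i + 1]] for i in range(len(order) - 1)])
pos = np.concatenate([[0.0], np.cumsum(gaps)])
pos /= pos[-1]                                  # normalize positions to [0, 1]
colors = {int(order[i]): colorsys.hsv_to_rgb(0.8 * p, 1.0, 1.0)
          for i, p in enumerate(pos)}
```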

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 160
23832 Smart-Textile Containers for Urban Mobility

Authors: René Vieroth, Christian Dils, M. V. Krshiwoblozki, Christine Kallmayer, Martin Schneider-Ramelow, Klaus-Dieter Lang

Abstract:

Green urban mobility in commercial and private contexts is one of the great challenges for the continuously growing cities all over the world. Bicycle-based solutions have long been key to addressing it, and modern developments like e-bikes and high-end cargo bikes complement the portfolio. Weight, aerodynamic drag, and security for the transported goods are the key factors for working solutions. Recent achievements in the field of smart textiles have allowed the creation of a totally new generation of intelligent textile cargo containers which fulfill those demands. The fusion of technical textiles, design, and electrical engineering made it possible to create an ecological solution that is very near to becoming a product. This paper presents the details of this solution, which includes a specially developed sensor textile for cut detection, a protective textile layer for intrusion prevention, a universal charging unit for energy harvesting from diverse sources, and a low-energy alarm system with GSM/GPRS connection, GPS location, and an RFID interface.

Keywords: cargo-bike, cut-detection, e-bike, energy-harvesting, green urban mobility, logistics, smart-textiles, textile-integrity sensor

Procedia PDF Downloads 305
23831 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 3
23830 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is a lack of a standard, systematic sizing approach for producing ready-made garments. Garment manufacturing companies use their own size tables, created by modifying international sizing charts of ready-made garments. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Data on 3,000 subjects were collected through an anthropometric survey of females between the ages of 16 and 80 years from several states of India, in order to produce a sizing system suitable for clothing manufacture and retailing. These data are used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. The factor analysis technique is used to filter the control body dimensions from a large number of variables, and decision tree-based data mining is used to cluster the data. A standard and structured sizing system can facilitate pattern grading and garment production. Moreover, it can improve buying ratios and upgrade size allocations to retail segments.
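
A minimal sketch of the two analysis stages, factor analysis to extract control dimensions and tree-based grouping into sizes, is shown below with synthetic measurements; the variable set and group count are illustrative, not the survey's:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
body = rng.normal(size=(3000, 8))   # e.g. stature, bust, waist, hip, ... (synthetic)

# Stage 1: factor analysis to find the control body dimensions
fa = FactorAnalysis(n_components=2).fit(body)
scores = fa.transform(body)

# Stage 2: cluster into size groups, then fit a decision tree so a new
# customer can be assigned a size from a few key measurements
sizes = KMeans(n_clusters=6, n_init=10, random_state=1).fit_predict(scores)
tree = DecisionTreeClassifier(max_depth=4).fit(body, sizes)
print(tree.predict(body[:5]))       # size labels for the first five subjects
```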

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 124
23829 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief operations. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in understanding the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and subsequently the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from UAVs (unmanned aerial vehicles) or satellite imagery, each of which comes with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to obtain immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 368
23828 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems, particularly through increased usage of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach providing such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, starting from input data acquisition, through ontology concept definition, and finally ontology concept population. First, a core facility ontology was developed, representing the generic facility infrastructure and comprising the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.
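
A minimal sketch of the three steps (core ontology, extension, population) using rdflib is given below; the class and property names are illustrative, not the authors' actual schema:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

FAC = Namespace("http://example.org/facility#")
g = Graph()
g.bind("fac", FAC)

# Core facility ontology: generic concepts shared by every facility
g.add((FAC.Facility, RDF.type, RDFS.Class))
g.add((FAC.EnergySystem, RDF.type, RDFS.Class))
g.add((FAC.hasSystem, RDF.type, RDF.Property))

# Extension step: specialize the core for a concrete infrastructure type
g.add((FAC.Airport, RDFS.subClassOf, FAC.Facility))

# Population step: instance data for a specific facility
g.add((FAC.Malpensa, RDF.type, FAC.Airport))
g.add((FAC.MalpensaHVAC, RDF.type, FAC.EnergySystem))
g.add((FAC.Malpensa, FAC.hasSystem, FAC.MalpensaHVAC))

print(g.serialize(format="turtle"))
```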

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 437
23827 The Use of PD and Tanδ Characteristics as Diagnostic Technique for the Insulation Integrity of XLPE Insulated Cable Joints

Authors: Mazen Al-Bulaihed, Nissar Wani, Abdulrahman Al-Arainy, Yasin Khan

Abstract:

Partial discharge (PD) measurements are widely used for diagnostic purposes in electrical equipment in power systems. The main purpose of these measurements is to prevent large power failures, as cables are prone to aging, which usually results in embrittlement, cracking, and eventual failure of the insulating and sheathing materials, exposing the conductor and risking a potential short circuit, a likely cause of electrical fire. Many distribution networks rely heavily on medium voltage (MV) power cables, and the presence of joints in these networks is a vital part of serving consumer demand for electricity continuously. Such measurements become even more important as the extent of this dependence increases. Moreover, partial discharges in joints and terminations are known to be difficult to track and are the most common points of failure in large power systems. This paper discusses diagnostic techniques applied to four samples of XLPE-insulated cable joints, each containing a different type of defect. Experiments were carried out by measuring PD and tanδ at very low frequency under applied high voltage. The results show the importance of combining PD and tanδ for effective cable assessment.

Keywords: partial discharge, tan delta, very low frequency, XLPE cable

Procedia PDF Downloads 147
23826 Detectability Analysis of Typical Aerial Targets from Space-Based Platforms

Authors: Yin Zhang, Kai Qiao, Xiyang Zhi, Jinnan Gong, Jianming Hu

Abstract:

In order to achieve effective detection of aerial targets over long distances from space-based platforms, the mechanism of interaction between the radiation characteristics of aerial targets and the complex scene environment, including sunlight conditions, underlying surfaces, and the atmosphere, is analyzed. A large simulated database of space-based radiance images is constructed, considering several typical aerial targets, target working modes (flight velocity and altitude), illumination and observation angles, background types (cloud, ocean, and urban areas), and sensor spectra ranging from visible to thermal infrared. Target detectability is characterized by the signal-to-clutter ratio (SCR) extracted from the images. Trends in target detectability are discussed for different detection bands and instantaneous fields of view (IFOV). Furthermore, optimal center wavelengths and widths of the detection bands are suggested, and minimum IFOV requirements are proposed. The research can provide theoretical support and scientific guidance for the design of space-based detection systems and on-board information processing algorithms.
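
For reference, SCR is assumed here in its common form, the target-to-background mean difference normalized by the background standard deviation, sketched below on a synthetic radiance image:

```python
import numpy as np

def scr(image, target_mask):
    """Signal-to-clutter ratio: |mean(target) - mean(background)| / std(background)."""
    target = image[target_mask]
    background = image[~target_mask]
    return abs(target.mean() - background.mean()) / background.std()

img = np.random.default_rng(7).normal(100.0, 5.0, (64, 64))  # clutter background
mask = np.zeros_like(img, dtype=bool)
mask[30:34, 30:34] = True
img[mask] += 25.0                                            # embedded target signal
print("SCR:", scr(img, mask))
```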

Keywords: space-based detection, aerial targets, detectability analysis, scene environment

Procedia PDF Downloads 137
23825 Telomere Length Genetics: Biomarker of Early Age Metabolic Activities and Oxidative Impact in Broiler Chicken (Gallus gallus domesticus)

Authors: Kazeem Ajasa Badmus, Zulkifli Idrus, Goh Yong Meng, Kamalludin Mamat-Hamidi

Abstract:

This study aimed to evaluate the role played by early age in performance, organ weights, meat quality traits, and telomere length integrity. One hundred male Cobb 500® broiler chickens were grouped into ten replicates of ten chickens each. Growth performance, telomere length, organ weights, and meat quality traits were determined on days 14, 28, and 42 of the experiment. There were significant (p < 0.05) differences in chicken growth performance across ages. Telomere lengths of blood, muscle, liver, and heart on day 14 were significantly (p < 0.05) shorter than those obtained on days 28 and 42. Weights of organs on day 14 were significantly (p < 0.05) higher than those obtained on days 28 and 42. In this study, birds slaughtered on day 14 presented the highest (p < 0.05) pH, drip loss, redness, and yellowness, but lower (p < 0.05) cooking loss, shear force, and lightness. There was a significant association between age, telomere length, and meat quality traits. It is therefore concluded that telomere length attrition is associated with early-age metabolic activities and could be used to measure chicks' welfare.

Keywords: age, telomere length, organ weights, meat quality

Procedia PDF Downloads 82
23824 Non-Singular Gravitational Collapse of a Homogeneous Scalar Field in Deformed Phase Space

Authors: Amir Hadi Ziaie

Abstract:

In the present work, we revisit the collapse process of a spherically symmetric, homogeneous scalar field (in an FRW background) minimally coupled to gravity when phase-space deformations are taken into account. Such a deformation is introduced mathematically as a particular type of noncommutativity between the canonical momenta of the scale factor and of the scalar field. In the absence of such deformation, the collapse culminates in a spacetime singularity. However, when the phase space is deformed, we find that the singularity is removed by a non-singular bounce, beyond which the collapsing cloud re-expands to infinity. More precisely, for negative values of the deformation parameter, we identify the appearance of a negative pressure, which decelerates the collapse and finally avoids the formation of a singularity. While in the undeformed case the horizon curve monotonically decreases to finally cover the singularity, in the deformed case the horizon has a minimum value that depends on the deformation parameter and the initial configuration of the collapse. Such a setting predicts a threshold mass for black hole formation in stellar collapse and manifests the role of noncommutative geometry in physics, especially in stellar collapse and supernova explosions.
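
As an illustration only (the abstract does not give the paper's exact parametrization), the simplest deformation of this type keeps all canonical brackets and introduces a constant parameter θ between the two momenta:

```latex
\{a,\,p_a\} = \{\phi,\,p_\phi\} = 1, \qquad
\{a,\,\phi\} = 0, \qquad
\{p_a,\,p_\phi\} = \theta ,
```

so that θ → 0 recovers the canonical phase space and, with it, the singular collapse.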

Keywords: gravitational collapse, non-commutative geometry, spacetime singularity, black hole physics

Procedia PDF Downloads 333
23823 Conceptualizing Personalized Learning: Review of Literature 2007-2017

Authors: Ruthanne Tobin

Abstract:

As our data-driven, cloud-based, knowledge-centric lives become ever more global, mobile, and digital, educational systems everywhere are struggling to keep pace. Schools need to prepare students to become critical-thinking, tech-savvy, life-long learners who are engaged and adaptable enough to find their unique calling in a post-industrial world of work. Recognizing that no nation can afford poor achievement or high dropout rates without jeopardizing its social and economic future, the thirty-two nations of the OECD are launching initiatives to redesign schools, generally under the banner of Personalized Learning or 21st Century Learning. Their intention is to transform education by situating students as co-enquirers and co-contributors with their teachers of what, when, and how learning happens for each individual. In this focused review of the 2007-2017 literature on personalized learning, the author sought answers to two main questions: “What are the theoretical frameworks that guide personalized learning?” and “What is the conceptual understanding of the model?” Ultimately, the review reveals that, although the research area is overly theorized and under-substantiated, it does provide a significant body of knowledge about this potentially transformative educational restructuring. For example, it addresses the following questions: a) What components comprise a PL model? b) How are teachers facilitating agency (voice & choice) in their students? c) What kinds of systems, processes and procedures are being used to guide the innovation? d) How is learning organized, monitored and assessed? e) What role do inquiry based models play? f) How do teachers integrate the three types of knowledge: Content, pedagogical and technological? g) Which kinds of forces enable, and which impede, personalizing learning? h) What is the nature of the collaboration among teachers? i) How do teachers co-regulate differentiated tasks? One finding of the review shows that while technology can dramatically expand access to information, expectations of its impact on teaching and learning are often disappointing unless the technologies are paired with excellent pedagogies in order to address students’ needs, interests and aspirations. This literature review fills a significant gap in this emerging field of research, as it serves to increase conceptual clarity that has hampered both the theorizing and the classroom implementation of a personalized learning model.

Keywords: curriculum change, educational innovation, personalized learning, school reform

Procedia PDF Downloads 210
23822 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD, and these models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predicted chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models demonstrate the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. They are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
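
A minimal sketch of the non-laboratory model variant is shown below, with synthetic data standing in for the Taiwan cohort; the label construction is artificial and only the feature list follows the abstract (Random Forest shown, XGBoost analogous):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([
    rng.integers(20, 80, n),     # age
    rng.integers(0, 2, n),       # sex
    rng.normal(25, 4, n),        # body mass index (BMI)
    rng.normal(85, 12, n),       # waist circumference
])
# Synthetic CKD label loosely tied to age and BMI, for demonstration only
y = (0.03 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(size=n) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)
rf = RandomForestClassifier(n_estimators=300, random_state=2).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```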

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 91
23821 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness of fit. The entire datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid unnecessary memory lapses. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model which can be trained on new data, adapt itself to it, and make accurate predictions. This work also throws some light on the use of the SVM methodology for text classifiers on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
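
A minimal sketch of an SVM classifier on accident-style records is shown below; the fields and the synthetic severity label are illustrative stand-ins for the official data, which cannot be reproduced here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.integers(0, 24, n),      # hour of day
    rng.integers(0, 2, n),       # urban (0) / rural (1)
    rng.integers(20, 70, n),     # speed limit
    rng.integers(1, 5, n),       # number of vehicles involved
])
# Synthetic "serious accident" label loosely tied to speed limit and area
severity = (0.02 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=3)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```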

Keywords: support vector machines (SVM), machine learning (ML), Department for Transport (DfT)

Procedia PDF Downloads 260
23820 A Relational Data Base for Radiation Therapy

Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez

Abstract:

As far as we know, no commercial solution is yet available that allows managing, in an open way configurable to user needs, the huge amount of data generated in a modern radiation oncology department. Currently available information management systems are mainly focused on record-and-verify and clinical data, and only to a small extent on physical data, which results in a partial and limited use of the actually available information. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both the information generated during patient planning and treatment and information of general interest for the whole department (e.g., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus to obtain quantitative evaluations of our treatments, which would allow us to improve our workflows and protocols. To this end, we have implemented a relational database which allows us to use all the available information in a practical and efficient way. As always, we only use license-free software.
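
A minimal sketch of such a schema via SQLite is given below; the table and column names are illustrative, not the department's actual data model:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE treatment_plan (
    plan_id       INTEGER PRIMARY KEY,
    patient_id    INTEGER NOT NULL REFERENCES patient(patient_id),
    technique     TEXT,              -- e.g. VMAT, IMRT
    total_dose_gy REAL,
    fractions     INTEGER
);
CREATE TABLE qa_measurement (
    qa_id           INTEGER PRIMARY KEY,
    plan_id         INTEGER NOT NULL REFERENCES treatment_plan(plan_id),
    gamma_pass_rate REAL,            -- physical QA result
    measured_on     TEXT
);
""")

# A quantitative evaluation then becomes a simple join:
rows = con.execute("""
    SELECT p.name, t.technique, AVG(q.gamma_pass_rate)
    FROM patient p
    JOIN treatment_plan t ON t.patient_id = p.patient_id
    JOIN qa_measurement q ON q.plan_id = t.plan_id
    GROUP BY p.patient_id, t.technique
""").fetchall()
```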

Keywords: information management system, radiation oncology, medical physics, free software

Procedia PDF Downloads 226
23819 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University

Authors: Komol Phaisarn, Natcha Wattanaprapa

Abstract:

This is survey research with the objective of studying the safety of the data storage devices used by graduate students of academic year 2013 at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA (confidentiality, integrity, availability) principle. A sample of 81 was drawn from the population by the purposive sampling method. The results show that most graduate students of academic year 2013 at Suan Sunandha Rajabhat University use USB flash drives ("handy drives") to store their data and that the safety level of the devices is good.

Keywords: security, safety, storage devices, graduate students

Procedia PDF Downloads 342
23818 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

Data grid is a technology whose emergence has brought new challenges, such as the heterogeneity and availability of various, geographically distributed resources, fast data access, minimizing latency, and fault tolerance. Researchers interested in this technology address problems arising in related systems, such as task scheduling, load balancing, and replication. The latter is an effective solution for achieving good performance in terms of data access and grid resource use, and better data availability at reasonable cost. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model which depends on several factors, such as bandwidth, data size, and storage nodes.
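
A minimal sketch of a placement rule under such a cost model is shown below; the cost weights and node parameters are illustrative assumptions:

```python
# Response cost driven by bandwidth, data size, and read/write mix,
# subject to a storage-capacity constraint at each candidate node.
def response_cost(size_mb, bandwidth_mbps, reads, writes, write_sync_cost):
    transfer = size_mb * 8 / bandwidth_mbps           # seconds per access
    return reads * transfer + writes * (transfer + write_sync_cost)

nodes = {
    "site_A": {"bandwidth_mbps": 100, "free_storage_mb": 5000},
    "site_B": {"bandwidth_mbps": 1000, "free_storage_mb": 800},
}
size_mb, reads, writes = 500, 120, 10

candidates = {
    name: response_cost(size_mb, n["bandwidth_mbps"], reads, writes, 2.0)
    for name, n in nodes.items()
    if n["free_storage_mb"] >= size_mb                # enough space for the replica
}
best = min(candidates, key=candidates.get)
print("place replica at:", best, candidates)
```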

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 261
23817 Prompt Design for Code Generation in Data Analysis Using Large Language Models

Authors: Lu Song Ma Li Zhi

Abstract:

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
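
A minimal sketch of such a prompt template is shown below; `call_llm` is a hypothetical stand-in for whatever model API is used, and the constraints illustrate the precision and output-contract requirements described above:

```python
PROMPT_TEMPLATE = """You are a data-analysis assistant.
Dataset: a pandas DataFrame `df` with columns {columns}.
Task: {request}
Constraints:
- Return only runnable Python code, no explanations.
- Use pandas and matplotlib only.
- Raise ValueError on missing columns instead of failing silently.
"""

def build_prompt(columns, request):
    return PROMPT_TEMPLATE.format(columns=columns, request=request)

prompt = build_prompt(
    columns=["region", "month", "sales"],
    request="Plot monthly total sales per region as a line chart.",
)
# code = call_llm(prompt)   # hypothetical model call
# Executing generated code would follow review/sandboxing in practice,
# and failed runs can be fed back into the prompt for adjustment.
```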

Keywords: large language models, prompt design, data analysis, code generation

Procedia PDF Downloads 12
23816 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

Authors: N. Samarinas, C. Evangelides, C. Vrekos

Abstract:

The aim of this paper is the comparison of three different methods of producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall depth. The three methods were used to express these data as fuzzy relations. Each fuzzy tolerance relation is then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station with another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
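
A minimal sketch of the cosine amplitude method followed by max-min composition is given below, with random stand-ins for the seven stations' monthly series:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((7, 12))                       # 7 stations x 12 monthly averages

# Cosine amplitude: fuzzy tolerance relation between stations
norm = np.sqrt((X**2).sum(axis=1))
R = np.abs(X @ X.T) / np.outer(norm, norm)

# Max-min composition, iterated until transitive -> fuzzy equivalence relation
def maxmin(A, B):
    # (A o B)[i, j] = max_k min(A[i, k], B[k, j])
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

while True:
    R2 = maxmin(R, R)
    if np.allclose(R2, R):
        break
    R = R2

# A lambda-cut at a chosen confidence level groups similar stations
lam = 0.95
groups = (R >= lam).astype(int)
print(groups)
```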

Keywords: classification, fuzzy logic, tolerance relations, rainfall data

Procedia PDF Downloads 305
23815 Customer Satisfaction and Effective HRM Policies: Customer and Employee Satisfaction

Authors: S. Anastasiou, C. Nathanailides

Abstract:

The purpose of this study is to examine the possible link between employee and customer satisfaction. The service provided by employees helps to build a good relationship with customers and can help increase their loyalty. Published data on job satisfaction and indicators of customer service were gathered from relevant published works, which included data from five different countries. The reviewed data indicate a significant correlation between indicators of customer and employee satisfaction in the banking sector (Pearson correlation R² = 0.52, p < 0.05). The reviewed data thus provide some practical evidence linking these two parameters.

Keywords: job satisfaction, job performance, customer service, banks, human resources management

Procedia PDF Downloads 312
23814 Adsorption and Corrosion Inhibition of New Synthesized Thiophene Schiff Base on Mild Steel in HCL Solution

Authors: H. Elmsellem, A. Aouniti, S. Radi, A. Chetouani, B. Hammouti

Abstract:

The synthesis of new organic molecules offers various molecular structures containing heteroatoms and substituents for corrosion protection in the acid pickling of metals. The most commonly synthesized compounds are nitrogen heterocyclic compounds, which are known to be excellent complex- or chelate-forming substances with metals. The choice of inhibitor is based on two considerations: first, it can be synthesized conveniently from relatively cheap raw materials; second, it contains an electron cloud on the aromatic ring or electronegative atoms such as nitrogen and oxygen in relatively long-chain compounds. In the present study, (NE)-2-methyl-N-(thiophen-2-ylmethylidene)aniline (T) was synthesized and its inhibiting action on the corrosion of mild steel in 1 M hydrochloric acid was examined by different corrosion methods: weight loss, potentiodynamic polarization, and electrochemical impedance spectroscopy (EIS). The experimental results suggest that this compound is an efficient corrosion inhibitor and that the inhibition efficiency increases with inhibitor concentration. Adsorption of this compound on the mild steel surface obeys the Langmuir isotherm. The correlation between quantum chemical calculations and the inhibition efficiency of the investigated compound is discussed using the density functional theory (DFT) method.
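
For reference, the Langmuir fit mentioned above is commonly checked with the linearized isotherm, where the surface coverage θ is taken from the weight-loss inhibition efficiency (a standard treatment; the paper's exact expressions are not reproduced in the abstract):

```latex
\eta(\%) = \frac{w_0 - w}{w_0} \times 100, \qquad
\theta = \frac{\eta(\%)}{100}, \qquad
\frac{C}{\theta} = \frac{1}{K_{\mathrm{ads}}} + C ,
```

where w₀ and w are the corrosion rates without and with inhibitor, C is the inhibitor concentration, and K_ads the adsorption equilibrium constant; a linear C/θ versus C plot with slope close to unity indicates Langmuir adsorption.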

Keywords: mild steel, Schiff base, inhibition, corrosion, HCl, quantum chemical

Procedia PDF Downloads 317
23813 Generation of Automated Alarms for Plantwide Process Monitoring

Authors: Hyun-Woo Cho

Abstract:

Early detection of incipient abnormal operations is quite necessary in plant-wide process management in order to improve product quality and process safety, and generating warning signals or alarms for operating personnel plays an important role in process automation and intelligent plant health monitoring. Various methodologies have been developed and utilized in this area, such as expert systems, mathematical model-based approaches, and multivariate statistical approaches. This work presents a nonlinear empirical monitoring methodology based on real-time analysis of massive process data. Unfortunately, such big data include measurement noise and unwanted variations unrelated to true process behavior, so the elimination of these unnecessary patterns is executed in a data processing step to enhance detection speed and accuracy. The performance of the methodology was demonstrated using simulated process data. The case study showed that detection speed and performance improved significantly irrespective of the size and location of abnormal events.
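
One empirical scheme of this general kind (the abstract does not specify the exact nonlinear method) can be sketched as follows: denoise the records, fit PCA on normal operation, and raise an alarm when the squared prediction error (SPE) exceeds an empirical limit:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
normal = rng.normal(size=(500, 10))                 # normal-operation records

def denoise(X, window=5):                           # moving-average noise filter
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, X)

pca = PCA(n_components=3).fit(denoise(normal))

def spe(X):                                         # squared prediction error
    residual = X - pca.inverse_transform(pca.transform(X))
    return (residual**2).sum(axis=1)

limit = np.percentile(spe(denoise(normal)), 99)     # empirical control limit
new = rng.normal(size=(50, 10)) + 3.0               # shifted (faulty) data
alarms = spe(denoise(new)) > limit
print("alarm rate:", alarms.mean())
```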

Keywords: detection, monitoring, process data, noise

Procedia PDF Downloads 238
23812 Meanings and Concepts of Standardization in Systems Medicine

Authors: Imme Petersen, Wiebke Sick, Regine Kollek

Abstract:

In systems medicine, high-throughput technologies produce large amounts of data on different biological and pathological processes, including (disturbed) gene expression, metabolic pathways, and signaling. The large volume of data of different types, stored in separate databases and often located at different geographical sites, has posed new challenges for data handling and processing. Tools based on bioinformatics have been developed to resolve the resulting problems of systematizing, standardizing, and integrating the various data. However, the heterogeneity of data gathered at different levels of biological complexity is still a major challenge in data analysis. To build multilayer disease modules, large and heterogeneous data of disease-related information (e.g., genotype, phenotype, environmental factors) are correlated. Therefore, a great deal of attention in systems medicine has been put on data standardization, primarily to retrieve and combine large, heterogeneous datasets into standardized and integrated forms and structures. However, this data-centred concept of standardization in systems medicine is contrary to the debate in science and technology studies (STS) on standardization, which rather emphasizes the dynamics, contexts, and negotiations of standard operating procedures. Based on empirical work on research consortia in Germany that explore the molecular profiles of diseases to establish systems medicine approaches in the clinic, we trace how standardized data are processed and shaped by bioinformatics tools, how scientists using such data in research perceive such standard operating procedures, and which consequences for knowledge production (e.g., modeling) arise from them. Hence, different concepts and meanings of standardization are explored to gain a deeper insight into standard operating procedures, not only in systems medicine but also beyond.

Keywords: data, science and technology studies (STS), standardization, systems medicine

Procedia PDF Downloads 329
23811 Statistical Randomness Testing of Some Second Round Candidate Algorithms of CAESAR Competition

Authors: Fatih Sulak, Betül A. Özdemir, Beyza Bozdemir

Abstract:

In order to advance symmetric-key research, several competitions have been arranged by organizations like the National Institute of Standards and Technology (NIST) and the International Association for Cryptologic Research (IACR). In recent years, the importance of authenticated encryption has rapidly increased because of the necessity of simultaneously enabling integrity, confidentiality, and authenticity. Therefore, in January 2013, IACR announced the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR competition), which would select secure and efficient algorithms for authenticated encryption. Cryptographic algorithms are expected to behave like random mappings; hence, it is important to apply statistical randomness tests to their outputs. In this work, the statistical randomness tests in the NIST test suite, together with other recently designed randomness tests, are applied to six second-round algorithms of the CAESAR competition. It is observed that AEGIS achieves randomness after 3 rounds, the Ascon permutation function after 1 round, the Joltik encryption function after 9 rounds, the Morus state update function after 3 rounds, Pi-Cipher after 1 round, and Tiaoxin after 1 round.
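
The simplest member of the NIST suite, the monobit frequency test, illustrates what "achieving randomness" means here; the bitstring below is a stand-in for cipher output:

```python
import math

def monobit_p_value(bits):
    """NIST frequency (monobit) test: p = erfc(|S_n| / sqrt(2n))."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)    # +1 for 1-bits, -1 for 0-bits
    return math.erfc(abs(s) / math.sqrt(2 * n))

bits = [int(b) for b in bin(0xDEADBEEF)[2:]]      # stand-in for cipher output
random_enough = monobit_p_value(bits) >= 0.01     # NIST's usual significance level
print(monobit_p_value(bits), random_enough)
```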

Keywords: authenticated encryption, CAESAR competition, NIST test suite, statistical randomness tests

Procedia PDF Downloads 310
23810 Integrated On-Board Diagnostic-II and Direct Controller Area Network Access for Vehicle Monitoring System

Authors: Kavian Khosravinia, Mohd Khair Hassan, Ribhan Zafira Abdul Rahman, Syed Abdul Rahman Al-Haddad

Abstract:

The CAN (controller area network) bus is a multi-master, message-broadcast system. The messages sent on the CAN are used to communicate state information, referred to as signals, between different ECUs, which provides data consistency at every node of the system. OBD-II dongles based on the request-response method are the widespread solution among researchers for extracting sensor data from cars. Unfortunately, most past research does not consider the resolution and quantity of the input data extracted through OBD-II technology: the maximum feasible scan rate is only 9 queries per second, which provides 8 data points per second with the well-known ELM327 OBD-II dongle. This study aims to develop and design a programmable, latency-sensitive vehicle data acquisition system that improves the modularity and flexibility needed to extract exact, trustworthy, and fresh car sensor data at higher frequency rates. Furthermore, the researcher must break apart, thoroughly inspect, and observe the internal network of the vehicle, which may cause severe damage to the expensive ECUs of the vehicle due to intrinsic vulnerabilities of the CAN bus during initial research. The desired sensor data were collected from various vehicles utilizing a Raspberry Pi 3 as the computing and processing unit, using the OBD (request-response) and direct CAN methods at the same time. Two types of data were collected for this study: first, CAN bus frame data, comprising each line of hex data sent by an ECU; and second, OBD data, representing the limited data that can be requested from an ECU under standard conditions. The proposed system is a reconfigurable, human-readable, multi-task telematics device that can be fitted into any vehicle with minimum effort and minimum time lag in the data extraction process. A standard-operating-procedure experimental vehicle network test bench was also developed and can be used for future vehicle network testing experiments.
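
A minimal sketch of the OBD request-response path with python-can on a Linux SocketCAN interface is shown below; the channel name and the responding ECU ID (0x7E8) are assumptions, while the mode 01 / PID 0x0C (engine RPM) encoding follows the OBD-II standard:

```python
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Mode 01, PID 0x0C (engine RPM) on the broadcast diagnostic ID 0x7DF
req = can.Message(arbitration_id=0x7DF,
                  data=[0x02, 0x01, 0x0C, 0, 0, 0, 0, 0],
                  is_extended_id=False)
bus.send(req)

while True:
    msg = bus.recv(timeout=1.0)
    if msg is None:
        break                                   # no ECU answered in time
    # Positive response: 0x41, echoed PID, then two data bytes A and B
    if msg.arbitration_id == 0x7E8 and msg.data[1] == 0x41 and msg.data[2] == 0x0C:
        rpm = (msg.data[3] * 256 + msg.data[4]) / 4.0
        print("engine RPM:", rpm)
        break
```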

Keywords: CAN bus, OBD-II, vehicle data acquisition, connected cars, telemetry, Raspberry Pi3

Procedia PDF Downloads 188
23809 Big Data in Construction Project Management: The Colombian Northeast Case

Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez

Abstract:

In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made collection, analysis, traceability, and dissemination essential for project managers. In this sense, there are current trends toward emerging technologies that facilitate efficient decision-making in projects, such as machine learning, data analytics, data mining, and big data. The latter is the most interesting for this project. This research is part of the thematic line "construction methods and project management". Many authors note the relevance that the use of emerging technologies, such as big data, has acquired in recent years in project management in the construction sector, with the main focus being the optimization of time, scope, and budget and, in general, the mitigation of risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (big data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of big data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia.

Keywords: big data, building information modeling, technology, project management

Procedia PDF Downloads 120