Search results for: data sensitivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26513

25223 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano

Authors: Guo Wenyu, Qu Youli

Abstract:

A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time by basing the CSA on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina-Manzini implementation (FMI) and the Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on this data in terms of compression ratio, count time, and locate time, except on evenly distributed data such as the proteins data set. The experiments show that the distribution of φ matters more for the compression ratio than the alphabet size does: unevenly distributed φ values compress better, and the larger the number of hits, the longer the count and locate times.
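
As background on the encoding that PEF builds on, here is a small Python sketch of plain (non-partitioned) Elias-Fano for a monotone integer sequence; partitioning the sequence into chunks with per-chunk parameters is the refinement the PEF index adds. The function names and the tiny example are ours, for illustration only.

```python
import math

def ef_encode(xs, u):
    """Plain Elias-Fano encoding of a sorted list xs with universe size u.

    Each value is split into l low bits (stored verbatim) and high bits
    (stored in unary as gaps), giving roughly 2 + log2(u/n) bits per value.
    """
    n = len(xs)
    l = max(0, int(math.floor(math.log2(u / n))))  # low-bit width
    low = [x & ((1 << l) - 1) for x in xs]         # explicit low bits
    high = []                                      # unary-coded high parts
    prev = 0
    for x in xs:
        h = x >> l
        high.extend([0] * (h - prev) + [1])        # gap in zeros, then a 1
        prev = h
    return l, low, high

def ef_access(i, l, low, high):
    """Recover the i-th value: find the (i+1)-th 1-bit in 'high'."""
    ones = -1
    for pos, bit in enumerate(high):
        ones += bit
        if ones == i:
            return ((pos - i) << l) | low[i]  # pos - i = high part of xs[i]
    raise IndexError(i)

xs = [3, 4, 7, 13, 14, 15, 21, 43]
l, low, high = ef_encode(xs, 44)
assert [ef_access(i, l, low, high) for i in range(len(xs))] == xs
```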

Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA

Procedia PDF Downloads 252
25222 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases

Authors: Sandra Hartl, Meena Chavan

Abstract:

This paper presents the outcome of a four-year Ph.D. research project on cross-cultural conflict management in transnational business relationships. It investigates the important and complex problem of managing conflicts that arise across cultures in business relationships and identifies conflict resolution strategies, focusing on transnational relationships within a Chinese, German, and Middle Eastern framework. Unlike many papers on this issue, which are built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in the fact that the case data were gathered by interviewing top executives in management positions at large multinational corporations, using a qualitative case study approach. The paper makes a valuable contribution to the theory of cross-cultural conflict and, despite the sensitivity of the subject, presents real business data about breaches of contract between counterparties engaged in transnationally operating organizations. The overarching aim of the research is to identify the significance of the cultural and communication factors embedded in cross-cultural business conflicts. From a cultural perspective, it asks what factors lead to the conflicts in each of the cases, what their causes are, and what role culture plays in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews, which were conducted, recorded, transcribed, and then analyzed using the NVivo qualitative data analysis system, are outlined. The outcomes show that the factors leading to conflicts fall broadly under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety, and personal characteristics. The causes of conflict turn out to be multidimensional: irrespective of the conflict type (relationship-based, task-based, or due to individual personal differences), relationships are almost always an element of the conflict. Cultural differences, a critical factor in conflicts, result from different cultures placing different levels of importance on relationships, and communication issues, another cause of conflict, likewise reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national culture (country-specific characteristics), organizational culture, and individual culture of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for cross-cultural business conflicts with reference to communication, empathy, and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research will not only add to academic knowledge of cross-cultural conflict management across transnational businesses but will also add value to numerous cross-border business relationships worldwide. Above all, it identifies the influence of culture, communication, and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.

Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution

Procedia PDF Downloads 162
25221 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet

Authors: Wanjiku Karanja

Abstract:

Facebook has monopoly power in the social networking market. It has grown and entrenched this monopoly power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are magnified in Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, Diem’s digital identity component, would enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook would collect through Novi would further entrench its market power, and the attendant lock-in effects of the project would be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture, and its control of its users’ data value chains, on its market power. The inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains, addressing the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data portability and data interoperability, in mitigating Facebook’s anti-competitive practices. The study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexation of the consumer data value chain, and its control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market: their success depends on the establishment of robust technical standards and regulatory frameworks.

Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook

Procedia PDF Downloads 123
25220 Data Quality Enhancement with String Length Distribution

Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda

Abstract:

The volume of collectable manufacturing data is increasing rapidly, while large-scale product recalls are becoming a serious social problem. Under these circumstances, there is a growing need to prevent such recalls through defect analysis, such as root cause analysis and anomaly detection, that utilizes manufacturing data. However, traditional methods take too long to classify the strings in manufacturing data to meet the requirements of quick defect analysis. We therefore present the String Length Distribution Classification (SLDC) method to classify strings correctly in a short time. The method learns character features, especially the string length distribution, from fields such as Product IDs and Machine IDs in bills of materials (BOM) and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that string classification time can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
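
As a minimal sketch of the idea (our own illustration, not the authors' implementation), one can learn a length histogram per field type and classify a new string by the likelihood of its length under each histogram:

```python
from collections import Counter

def learn_length_distributions(labeled_strings):
    """labeled_strings: iterable of (string, field_type) pairs.
    Returns {field_type: {length: probability}}."""
    counts = {}
    for s, label in labeled_strings:
        counts.setdefault(label, Counter())[len(s)] += 1
    return {label: {n: c / sum(ctr.values()) for n, c in ctr.items()}
            for label, ctr in counts.items()}

def classify(s, dists, eps=1e-6):
    """Pick the field type under which the string's length is most probable."""
    return max(dists, key=lambda label: dists[label].get(len(s), eps))

train = [("P-000123", "product_id"), ("P-004567", "product_id"),
         ("M42", "machine_id"), ("M07", "machine_id")]
dists = learn_length_distributions(train)
print(classify("P-009999", dists))  # -> product_id (length 8 matches)
```

Comparing only length distributions makes classification a constant-time lookup per string, which is where a large speed-up over character-by-character comparison would come from.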

Keywords: string classification, data quality, feature selection, probability distribution, string length

Procedia PDF Downloads 318
25219 Detection of Bcl2 Polymorphism in Patients with Hepatocellular Carcinoma

Authors: Mohamed Abdel-Hamid, Olfat Gamil Shaker, Doha El-Sayed Ellakwa, Eman Fathy Abdel-Maksoud

Abstract:

Introduction: Despite advances in knowledge of the molecular virology of hepatitis C virus (HCV), the mechanisms of hepatocellular injury in HCV infection are not completely understood. HCV infection influences susceptibility to apoptosis, which can lead to an insufficient antiviral immune response and persistent viral infection. Aim of this study: to examine whether the BCL-2 gene polymorphism at codon 43 (+127G/A, Ala43Thr) has an impact on the development of hepatocellular carcinoma in Egyptian patients with chronic hepatitis C. Subjects and Methods: The study included three groups; group 1: 30 patients with hepatocellular carcinoma (HCC); group 2: 30 patients with HCV; group 3: 30 healthy subjects of matching age and socioeconomic status, taken as a control group. The BCL2 (Ala43Thr) polymorphism was evaluated by the PCR-RFLP technique for all patients and controls. Results: The 43Thr genotype was more frequent in HCC patients than in the control group, and the difference was statistically significant. This genotype of the BCL2 gene may inhibit programmed cell death, which leads to disturbances in tissue and cell homeostasis and reduced immune regulation, favoring viral replication and HCV persistence. Moreover, the virus employs a variety of mechanisms to block genes that participate in apoptosis. This suggests that HCV patients carrying the 43Thr genotype are more susceptible to HCC. Conclusion: The data suggest, for the first time, that the BCL2 polymorphism is associated with susceptibility to HCC in the Egyptian population and might be used as a molecular marker for evaluating HCC risk. The study also clearly demonstrates that chronic HCV infection exhibits a deregulation of apoptosis as the disease progresses, providing insight into the pathogenesis of chronic HCV infection that may contribute to therapy.

Keywords: BCL2 gene, Hepatitis C Virus, Hepatocellular carcinoma, sensitivity, specificity, apoptosis

Procedia PDF Downloads 508
25218 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data

Authors: Salam Khalifa, Naveed Ahmed

Abstract:

We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In subsequent steps, these feature points are used to match the 3D point clouds of consecutive frames independently of their resolution. A new motion-vector-based dynamic alignment method then reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results, and show that despite the limiting temporal and spatial noise associated with RGB-D data, it is possible to faithfully reconstruct a temporally coherent 3D animation.

Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation

Procedia PDF Downloads 373
25217 Gender Gap in Returns to Social Entrepreneurship

Authors: Saul Estrin, Ute Stephan, Suncica Vujic

Abstract:

Background and research question: Gender differences in pay are present at all organisational levels, including at the very top. One possible way for women to circumvent organizational norms and discrimination is to engage in entrepreneurship because, as CEOs of their own organizations, entrepreneurs largely determine their own pay. While commercial entrepreneurship plays an important role in job creation and economic growth, social entrepreneurship has come to prominence because of its promise of addressing societal challenges such as poverty, social exclusion, or environmental degradation through market-based rather than state-sponsored activities. This raises the research question of whether social entrepreneurship might be a form of entrepreneurship in which the pay of men and women is the same, or at least more similar; that is to say, one in which there is little or no gender pay gap. And if the gender pay gap persists at the top of social enterprises, what factors might explain these differences? Methodology: The Oaxaca-Blinder decomposition (OBD) is the standard approach to decomposing the gender pay gap based on the linear regression model. The OBD divides the gender pay gap into an ‘explained’ part due to differences in labour market characteristics (education, work experience, tenure, etc.) and an ‘unexplained’ part due to differences in the returns to those characteristics. The latter part is often interpreted as ‘discrimination’. There are two issues with this approach. (i) In many countries there is a notable convergence in labour market characteristics across genders; hence the OBD method is no longer revealing, since the largest portion of the gap remains ‘unexplained’. (ii) Adding covariates to a base model sequentially, either to test a particular coefficient’s ‘robustness’ or to account for the ‘effects’ on this coefficient of adding covariates, is problematic because the result is sequence-sensitive when the added covariates are correlated. Gelbach’s decomposition (GD) addresses the latter by using the omitted-variables-bias formula, which constructs a conditional decomposition and thus accounts for sequence sensitivity when added covariates are correlated. We use GD to decompose the gender differences in pay (annual and hourly salary), size of the organisation (revenues), effort (weekly hours of work), and sources of finance (fees and sales, grants and donations, microfinance and loans, and investors’ capital) between men and women leading social enterprises. Database: Our empirical work is made possible by a unique dataset collected using respondent-driven sampling (RDS) methods to address the problem that there is as yet no information on the underlying population of social entrepreneurs. The countries we focus on are the United Kingdom, Spain, Romania, and Hungary. Findings and recommendations: We confirm the existence of a gender pay gap between men and women leading social enterprises. This gap can be explained by differences in the accumulation of human capital, psychological and social factors, as well as cross-country differences. The results of this study contribute to a more rounded perspective, highlighting that although social entrepreneurship may be a highly satisfying occupation, it also perpetuates gender pay inequalities.
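
For reference, the standard two-fold form of the Oaxaca-Blinder decomposition of the mean log pay gap (a textbook identity, not an equation quoted from the paper) is:

```latex
\[
\underbrace{\bar{y}_m - \bar{y}_f}_{\text{raw gap}}
= \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained (characteristics)}}
+ \underbrace{\bar{X}_f'(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained (returns)}}
\]
```

where the ȳ are mean log earnings of men and women, the X̄ are mean characteristics, and the β̂ are group-specific regression coefficients. Gelbach's decomposition re-expresses the covariate contributions through the omitted-variables-bias formula so that they no longer depend on the order in which covariates are added.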

Keywords: Gelbach’s decomposition, gender gap, returns to social entrepreneurship, values and preferences

Procedia PDF Downloads 244
25216 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation

Authors: Kiwon Yeom

Abstract:

Change points are abrupt variations in a data sequence. Detecting them is useful for modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined as a discontinuity in one of the trajectory's derivatives. The paper presents a reliable method for detecting such discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered for both regular and irregular trajectory data from teleoperation. We examine the geometric detection algorithm and illustrate the use of the method on real data examples.
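
A minimal sketch of derivative-based change-point detection (our illustration of the general idea, not the paper's geometric algorithm; the threshold is an arbitrary, data-dependent choice): estimate the first derivative by finite differences and flag samples where it jumps.

```python
import numpy as np

def derivative_change_points(traj, dt=1.0, thresh=1.0):
    """Flag discontinuities in the first derivative of a 3D trajectory.

    traj: (N, 3) array of positions sampled at a fixed interval dt.
    thresh: jump size in velocity treated as a discontinuity (data-dependent).
    """
    vel = np.diff(traj, axis=0) / dt                     # finite-difference velocity
    jump = np.linalg.norm(np.diff(vel, axis=0), axis=1)  # |Δv| between steps
    return np.where(jump > thresh)[0] + 1                # back to sample indices

# A trajectory moving along x that abruptly turns along y at sample 50:
t = np.arange(100.0)[:, None]
traj = np.where(t < 50, t * [[1.0, 0.0, 0.0]],
                [[50.0, 0.0, 0.0]] + (t - 50) * [[0.0, 1.0, 0.0]])
print(derivative_change_points(traj))  # -> [50]
```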

Keywords: change point, discontinuity, teleoperation, abrupt variation

Procedia PDF Downloads 167
25215 Hydrothermal Synthesis of Mesoporous Carbon Nanospheres and Their Electrochemical Properties for Glucose Detection

Authors: Ali Akbar Kazemi Asl, Mansour Rahsepar

Abstract:

Mesoporous carbon nanospheres (MCNs) with a uniform particle size distribution (average 290 nm) and a large specific surface area (274.4 m²/g) were synthesized by a one-step hydrothermal method followed by calcination and then utilized as an enzyme-free glucose biosensor. The morphology, crystal structure, and porous nature of the synthesized nanospheres were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), and Brunauer-Emmett-Teller (BET) analysis, respectively. The electrochemical performance of the MCNs@GCE electrode for measuring glucose concentration in alkaline media was investigated by electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV), and chronoamperometry (CA). The MCNs@GCE electrode shows good sensing performance, including a rapid glucose oxidation response within 3.1 s, a wide linear range of 0.026-12 mM, a sensitivity of 212.34 μA.mM⁻¹.cm⁻², and a detection limit of 25.7 μM, with excellent selectivity.

Keywords: biosensor, electrochemical, glucose, mesoporous carbon, non-enzymatic

Procedia PDF Downloads 190
25214 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs

Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro

Abstract:

This work presents a statistical methodology for measuring and establishing constructs in latent semantic analysis. The approach combines the strengths of factor analysis for binary data with interpretations drawn from item response theory. More precisely, we propose first reducing dimensionality by applying principal component analysis to the linguistic data and then producing groups of axes from a clustering analysis of the semantic data. This allows the user to give meaning to the resulting clusters and to uncover the real latent structure present in the data. The methodology is applied to a set of real semantic data, yielding impressive results in coherence, speed, and precision.
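
A minimal sketch of the pipeline as we read it (PCA for dimension reduction, then clustering; the data, component counts, and cluster counts below are illustrative stand-ins):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# responses: binary item-response matrix, one row per respondent/document
rng = np.random.default_rng(0)
responses = (rng.random((200, 40)) > 0.5).astype(float)  # stand-in data

# Step 1: reduce dimensionality of the (linguistic) data.
pca = PCA(n_components=5)
scores = pca.fit_transform(responses)

# Step 2: cluster in the reduced space to form interpretable groups of axes.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

print(pca.explained_variance_ratio_.round(3), np.bincount(clusters))
```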

Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression

Procedia PDF Downloads 443
25213 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach

Authors: Dongkwon Han, Sangho Kim, Sunil Kwon

Abstract:

Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, a key challenge in unconventional gas is that the uncertainty and complexity of fluid flow require advanced approaches to production forecasting. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for training and testing the ANN model. Input data related to hydraulic fracturing, well completion, and shale gas productivity were selected, and the output was cumulative production. The performance of models using all data sets, clustering, and variable importance (VI) was compared using the mean absolute percentage error (MAPE): 44.22% for the ANN using all data sets; 10.08% (cluster 1), 5.26% (cluster 2), and 6.35% (cluster 3) for the clustered models; and 32.23% (ANN VI) and 23.19% (SVM VI) for the variable-importance models. The results showed that the ANN models trained on clustered data provide more accurate results than the ANN model using all data sets.
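
For reference, MAPE, the error metric used to compare the models, is straightforward to compute (a generic definition, not code from the study):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

print(round(mape([100, 200, 400], [110, 180, 400]), 2))  # -> 6.67
```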

Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance

Procedia PDF Downloads 196
25212 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year

Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans

Abstract:

Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; no concentration level has previously been defined below which health damage is fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US and other countries worldwide, and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults (≥ 30 years) and children (< 5 years), applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3μg/m3. We estimate global premature mortality from PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, owing to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US and 20% in the EU. Implementing the WHO AQG of 10μg/m3 would reduce global premature mortality by 54%, by 76% in China and by 59% in India; in the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates.
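
The attribution arithmetic behind such estimates follows the standard attributable-fraction relation (the generic epidemiological form, not an equation quoted from the paper): for a population P with baseline mortality rate y₀ and a concentration-response relative risk RR(c) at PM2.5 concentration c,

```latex
\[
\Delta \mathrm{Mort} \;=\; y_0 \cdot \frac{RR(c) - 1}{RR(c)} \cdot P
\]
```

summed over grid cells and disease categories, with RR(c) = 1 for concentrations below the assumed 7.3 μg/m³ threshold, so that no mortality is attributed to PM2.5 there.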

Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality

Procedia PDF Downloads 310
25211 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is transforming all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies of the potential of demand-side management to incorporate smart grid considerations into its operational business. In this article, a procedure model for case-specific, data-driven decision support for industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of a strategic decision problem: integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modeling, analysis, simulation, and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 435
25210 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization

Authors: K. Umbleja, M. Ichino, H. Yaguchi

Abstract:

In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree-structure information gathered during hierarchical clustering. The proposed method extends proximity-based coloring, which suffers from undesired side effects when the hierarchical tree is not balanced. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method to three commonly used datasets, and compare the results with proximity-based coloring. We found the proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
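
A minimal sketch of the underlying idea as we read it (our own illustration, not the authors' algorithm): order the objects by the dendrogram leaf order and space their hues in proportion to the dissimilarity between neighbors, so similar objects receive similar colors.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist, squareform

def dissimilarity_colors(X):
    """Assign each row of X a hue in [0, 1] from hierarchical clustering."""
    D = squareform(pdist(X))                        # pairwise dissimilarities
    order = leaves_list(linkage(pdist(X), method="average"))
    gaps = [D[order[i], order[i + 1]] for i in range(len(order) - 1)]
    pos = np.concatenate([[0.0], np.cumsum(gaps)])  # cumulative dissimilarity
    hues = pos / pos[-1]                            # normalize to [0, 1]
    out = np.empty(len(X))
    out[order] = hues                               # map back to original rows
    return out

X = np.vstack([np.random.default_rng(1).normal(m, 0.1, (5, 4)) for m in (0, 1, 5)])
print(dissimilarity_colors(X).round(2))  # similar rows get similar hues
```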

Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data

Procedia PDF Downloads 170
25209 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 29
25208 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining

Authors: Hina Kausher, Sangita Srivastava

Abstract:

In India, there is no standard, systematic sizing approach for producing ready-made garments; garment manufacturing companies use their own size tables, created by modifying international sizing charts. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Data on 3,000 subjects were collected through an anthropometric survey of females between the ages of 16 and 80 years from several states of India, in order to produce a sizing system suitable for clothing manufacture and retailing. The data were used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. The factor analysis technique was used to filter the control body dimensions from a large number of variables, and decision tree-based data mining was used to cluster the data. A standard, structured sizing system can facilitate pattern grading and garment production; moreover, it can improve buying ratios and upgrade size allocations to retail segments.
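
A minimal sketch of the mining step under our assumptions (control dimensions already reduced to height and bust girth; cluster labels stand in for size groups; the data below are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic control dimensions: height (cm) and bust girth (cm).
X = np.column_stack([rng.normal(158, 6, 300), rng.normal(88, 8, 300)])

# Group bodies into candidate size classes, then fit a shallow tree so each
# size is described by readable height/girth cut-offs for the size chart.
sizes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, sizes)
print(export_text(tree, feature_names=["height_cm", "bust_cm"]))
```

The printed tree reads directly as a size chart: each path of threshold comparisons defines one size group.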

Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments

Procedia PDF Downloads 133
25207 A Framework on Data and Remote Sensing for Humanitarian Logistics

Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini

Abstract:

Effective humanitarian logistics operations are a cornerstone of successful disaster relief. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data supports models for predictive damage and vulnerability assessment, which can give logisticians an understanding of the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and, consequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from unmanned aerial vehicles (UAVs) or satellite imagery, although these too come with certain limitations. This research addresses the need for a framework that combines data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow that combines data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with logistics workers in the field. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research is a new method for logisticians to obtain immediately accurate and reliable data to support data-driven decision making.

Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making

Procedia PDF Downloads 378
25206 Facility Data Model as Integration and Interoperability Platform

Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes

Abstract:

Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems, particularly through the increased use of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, from input data acquisition through the definition of ontology concepts to their population. First, a core facility ontology was developed, representing the generic facility infrastructure and comprising the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was then extended and populated. For the development of the full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.

Keywords: airport ontology, energy management, facility data model, ontology modeling

Procedia PDF Downloads 448
25205 The Modulatory Effect of Some Antioxidants on Animal Model of Metabolic Syndrome Induced by High Fructose Fed Diet

Authors: Hala M. Abdelkarem, Abeer H. Gafeer

Abstract:

The metabolic syndrome (MetS) is a constellation of risk factors. The main objective of this study is to compare the ameliorating effects of metformin, Lipitor, orlistat, lipoic acid, and carnitine on insulin, lipid profile, leptin, and adiponectin levels in a metabolic syndrome model of high-fructose-fed (HF) rats. Seventy male albino rats were divided into seven groups: G1, normal control; G2-G7, rats fed the HF diet for 8 weeks. After four weeks of HF feeding, G3, G4, G5, G6, and G7 were orally administered (200 mg/kg, once daily) metformin, Lipitor, orlistat, lipoic acid, and carnitine, respectively. After 8 weeks of feeding, a significant increase in blood glucose level was observed in HF-fed rats compared to normal rats; this increase was significantly reduced after administration of metformin and Lipitor. The raised serum insulin level in HF-fed rats was significantly decreased after administration of lipoic acid, carnitine, and metformin. Significantly higher concentrations of triglycerides (TG), total cholesterol, and low-density lipoprotein cholesterol (LDL-C) were observed in HF-fed rats, and these increases were significantly lowered after administration of all of the above drugs. There was a significant decrease in serum high-density lipoprotein cholesterol (HDL-C) in the HF group; administration of all drugs alleviated this reduction. The increased serum leptin level in the HF group was significantly decreased in the metformin and orlistat groups, whereas the reduced serum adiponectin level in HF-fed rats was raised in the Lipitor, carnitine, and orlistat groups. These data suggest a beneficial effect of metformin, Lipitor, orlistat, lipoic acid, and carnitine in reducing risk for people with decreased insulin sensitivity, increased oxidative stress, and hyperlipidemia, such as those with metabolic syndrome or type 2 diabetes.

Keywords: metabolic syndrome, diabetes, proinflammation, antioxidants

Procedia PDF Downloads 323
25204 Vibroacoustic Modulation with Chirp Signal

Authors: Dong Liu

Abstract:

By sending a high-frequency probe wave and a low-frequency pump wave into a specimen, the vibroacoustic modulation (VAM) method evaluates a defect's severity from the modulation index of the received signal. Many studies have experimentally demonstrated the significant sensitivity of the modulation index to tiny contact-type defects. However, it has also been found that the modulation index is highly affected by the frequencies of the probe and pump waves. The chirp signal has therefore been introduced into the VAM method, since it can assess multiple frequencies in a relatively short time and so enhance the robustness of the method. Consequently, the signal processing needs to be modified accordingly. Various studies have utilized different algorithms, or combinations of algorithms, for processing VAM signals under chirp excitation. These signal processing methods were compared and used to process VAM signals acquired from steel samples.
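
For context, a commonly used definition of the modulation index in VAM (a standard form from the literature; the paper may use a variant) compares the first sidebands of the received spectrum with the probe line:

```latex
\[
MI \;=\; \frac{A(f_{\mathrm{probe}} - f_{\mathrm{pump}}) + A(f_{\mathrm{probe}} + f_{\mathrm{pump}})}{A(f_{\mathrm{probe}})}
\]
```

where A(f) is the spectral amplitude of the received signal; nonlinear mixing at a contact-type defect raises the sidebands and hence MI. Under chirp excitation, A(f) is evaluated over the swept frequency range rather than at a single probe frequency, which is what makes the result less dependent on any one frequency choice.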

Keywords: vibroacoustic modulation, nonlinear acoustic modulation, nonlinear acoustic NDT&E, signal processing, structural health monitoring

Procedia PDF Downloads 99
25203 Study of Germs Responsible for Nosocomial Infections in the Hospital of Guelma

Authors: Wissem Abdaoui, Ilhem Mokhtari, Adel Gouri, Benouareth Djamel Eddine

Abstract:

Contracted in health facilities, hospital-acquired infections have been a major public health problem in recent years. The increase in nosocomial infections is partly related to diagnostic and therapeutic advances in medicine. The aim of our study was to isolate and identify some of the bacteria circulating in the hospital by taking different samples in two medical departments: Pulmonology and Infectious Diseases. Antibiotic susceptibility tests were performed on the bacterial isolates. The results showed a predominance of enterobacteria, followed by staphylococci of the two species Staphylococcus epidermidis and Staphylococcus saprophyticus. The antibiogram study showed that some of these bacteria are resistant to all of the tested antibiotics. The fight against nosocomial infections is difficult because it must act on several factors: quality of care, safety of the hospital environment, hygiene, and glove use are all areas requiring heightened vigilance and preventive measures.

Keywords: nosocomial infection, isolation, identification, sensitivity and resistance to antibiotics

Procedia PDF Downloads 380
25202 Material Detection by Phase Shift Cavity Ring-Down Spectroscopy

Authors: Rana Muhammad Armaghan Ayaz, Yigit Uysallı, Nima Bavili, Berna Morova, Alper Kiraz

Abstract:

Traditional optical methods for material detection and sensing, such as resonance wavelength shift and cavity ring-down spectroscopy, have disadvantages: for example, low resistance to laser noise and temperature fluctuations, and the difficulty of extracting the required information, such as the ring-down time in cavity ring-down spectroscopy. Phase-shift cavity ring-down spectroscopy is not only easy to use but is also capable of overcoming these problems. The technique compares the phase of the signal coming out of the cavity with that of a reference signal, and a material is detected from the phase difference between them. Using this technique, air, water, and isopropyl alcohol can be recognized easily. The methodology has far-reaching applications and can be used in air pollution detection, human breath analysis, and many more areas.
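
The relation the technique exploits is the standard phase-shift cavity ring-down expression (stated here for context): for light intensity-modulated at angular frequency ω = 2πf, the signal leaving a cavity with ring-down time τ is phase-shifted by φ with

```latex
\[
\tan\varphi \;=\; -\,\omega\,\tau
\]
```

so any material in the cavity that adds optical loss shortens τ and is read out directly as a change in φ, without fitting an exponential decay, which is what gives the method its robustness to noise.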

Keywords: materials, noise, phase shift, resonance wavelength, sensitivity, time domain approach

Procedia PDF Downloads 149
25201 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
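
A minimal sketch of the non-laboratory variant (our illustration with synthetic data; the real models were trained on the Taiwanese cohort using Cox-selected variables):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic non-laboratory features: age, sex, BMI, waist circumference.
X = np.column_stack([rng.uniform(30, 80, n), rng.integers(0, 2, n),
                     rng.normal(24, 4, n), rng.normal(85, 12, n)])
# Synthetic CKD label loosely driven by age and BMI, for illustration only.
risk = 0.03 * (X[:, 0] - 30) + 0.05 * (X[:, 2] - 24) + rng.normal(0, 1, n)
y = (risk > np.quantile(risk, 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]).round(3))
```

Because tree ensembles need no feature scaling and tolerate mixed feature types, the same skeleton works when laboratory or metabolic-index columns are appended or absent.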

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 105
25200 The Effect of Cinnamaldehyde on Escherichia coli Survival during Low Temperature Long Time Cooking

Authors: Fuji Astuti, Helen Onyeaka

Abstract:

The aim of this study was to investigate the combined effects of cinnamaldehyde (0.25 and 0.45% v/v) on the thermal resistance of pathogenic Escherichia coli during low-temperature, long-time (LT-LT) cooking below 60℃. Three different static temperatures (48, 53 and 50℃) were tested, and the number of viable cells was studied. The starting cell concentration was 10⁸ CFU/ml. In this experiment, heat treatment efficiency for safe reduction, indicated by the decimal logarithmic reduction of viable recovered cells, was monitored over 6 hours of heating. Thermal inactivation was measured by establishing death curves of the mean log surviving cells (log₁₀ CFU/ml) against designated time points (minutes) for each temperature tested. The findings showed that the addition of cinnamaldehyde elevated the thermal sensitivity of E. coli. However, the injured cells were found to adapt to all tested temperatures after a certain point in the cooking time, at which they grew to more than 10⁵ CFU/ml.
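
For context, the decimal reduction these death curves quantify is conventionally summarized by the decimal reduction time D (a standard microbiology definition, not a value reported in this study):

```latex
\[
D \;=\; \frac{t}{\log_{10} N_0 - \log_{10} N_t}
\]
```

that is, the heating time at a given temperature required to reduce the viable count tenfold, where N₀ and N_t are the counts (CFU/ml) before and after heating for time t.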

Keywords: cinnamaldehyde, decimal logarithm reduction, Escherichia coli, LT-LT cooking

Procedia PDF Downloads 358
25199 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines

Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma

Abstract:

Useful information has been extracted from road accident data for the United Kingdom (UK) using data analytics methods, with the aim of avoiding possible accidents in rural and urban areas. The analysis makes use of several methodologies, such as data integration, support vector machines (SVM), correlation machines, and multinomial goodness of fit. The entire datasets were imported from the UK traffic department with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn help avoid unnecessary memory lapses. Since the data are expected to grow continuously over time, this work primarily proposes a new framework model which can be trained on, and adapt itself to, new data and make accurate predictions. This work also throws some light on the use of SVM methodology for text classifiers on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
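
A minimal sketch of the SVM classification step (synthetic features standing in for the UK accident records, which we do not have; feature and label choices are illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Stand-in features: speed limit, hour of day, urban/rural flag, weather code.
X = np.column_stack([rng.choice([30, 40, 60, 70], n), rng.integers(0, 24, n),
                     rng.integers(0, 2, n), rng.integers(0, 5, n)])
# Stand-in severity label, loosely tied to speed limit for illustration.
y = (X[:, 0] + rng.normal(0, 15, n) > 55).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)).round(3))
```

Retraining on an enlarged dataset is the same `fit` call, which is the sense in which such a framework can adapt as new accident records arrive.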

Keywords: support vector machines (SVM), machine learning (ML), department of transportation (DFT)

Procedia PDF Downloads 274
25198 A Relational Data Base for Radiation Therapy

Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez

Abstract:

As far as we know, there is still no commercial solution available that would allow the huge amount of data generated in a modern radiation oncology department to be managed openly and configurably according to user needs. Currently available information management systems are mainly focused on record-and-verify and clinical data, and only to a small extent on physical data, which results in a partial and limited use of the available information. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both the information generated during patient planning and treatment and information of general interest for the whole department (i.e., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus obtain quantitative evaluations of our treatments, which would allow us to improve our workflows and protocols. To this end, we have implemented a relational database that allows us to use all the available information in a practical and efficient way. As always, we use only license-free software.
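
A minimal sketch of what such a schema could look like (the tables and fields are our own guesses for illustration, not the department's actual design, using the license-free SQLite engine):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway database for the example
con.executescript("""
CREATE TABLE patient (
    patient_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE plan (                  -- one treatment plan per course
    plan_id     INTEGER PRIMARY KEY,
    patient_id  INTEGER NOT NULL REFERENCES patient(patient_id),
    protocol    TEXT,                -- department treatment protocol used
    dose_gy     REAL,                -- total prescribed dose
    fractions   INTEGER
);
CREATE TABLE qa_measurement (        -- physical / quality-assurance data
    qa_id       INTEGER PRIMARY KEY,
    plan_id     INTEGER NOT NULL REFERENCES plan(plan_id),
    quantity    TEXT,                -- e.g. gamma pass rate
    value       REAL,
    measured_on TEXT
);
""")

# A quantitative evaluation then becomes a simple join:
con.execute("INSERT INTO patient VALUES (1, 'demo')")
con.execute("INSERT INTO plan VALUES (1, 1, 'breast_std', 50.0, 25)")
con.execute("INSERT INTO qa_measurement VALUES (1, 1, 'gamma_pass_pct', 98.7, '2024-01-01')")
for row in con.execute("""SELECT p.protocol, AVG(q.value)
                          FROM plan p JOIN qa_measurement q USING (plan_id)
                          GROUP BY p.protocol"""):
    print(row)
```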

Keywords: information management system, radiation oncology, medical physics, free software

Procedia PDF Downloads 241
25197 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University

Authors: Komol Phaisarn, Natcha Wattanaprapa

Abstract:

This is a survey research study whose objective was to examine the safety of the data storage devices of graduate students of the 2013 academic year at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA (confidentiality, integrity, availability) principle. A sample of 81 was drawn from the population by purposive sampling. The results show that most graduate students of the 2013 academic year at Suan Sunandha Rajabhat University use handy drives (USB flash drives) to store their data, and that the safety level of the devices is good.

Keywords: security, safety, storage devices, graduate students

Procedia PDF Downloads 353
25196 Simulation of a Cost Model for Response to Requests for Replication in a Data Grid Environment

Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah

Abstract:

The data grid is a technology that has brought with it a host of new challenges, such as the heterogeneity and availability of geographically distributed resources, fast data access, latency minimization, and fault tolerance. Researchers interested in this technology address the problems such systems raise in practice, such as task scheduling, load balancing, and replication. The latter is an effective solution for achieving good performance in terms of data access, use of grid resources, and better data availability at reasonable cost. In a system with replication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model which depends on several factors, such as bandwidth, data size, and storage nodes.
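
A minimal sketch of such a cost model (our own illustrative formulation, transfer time plus a storage term; the parameter names and constants are assumptions, not the paper's model):

```python
def access_cost(size_mb, bandwidth_mbps, reads, writes, n_replicas,
                storage_cost_per_mb=0.01):
    """Illustrative cost of serving one data item with n_replicas copies.

    Reads are spread across the replicas; every write must be propagated
    to all copies, as the coherence protocol requires.
    """
    transfer = size_mb * 8 / bandwidth_mbps        # seconds per transfer
    read_cost = reads * transfer / n_replicas      # reads share the load
    write_cost = writes * transfer * n_replicas    # update every copy
    storage = storage_cost_per_mb * size_mb * n_replicas
    return read_cost + write_cost + storage

# More replicas pay off for read-heavy workloads but hurt write-heavy ones:
for r in (1, 2, 4, 8):
    print(r, round(access_cost(100, 100, reads=500, writes=50, n_replicas=r), 1))
```

Minimizing this cost over candidate placements is then a search over the number and location of replicas for each data item.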

Keywords: response time, query, consistency, bandwidth, storage capacity, CERN

Procedia PDF Downloads 271
25195 Enabling Non-invasive Diagnosis of Thyroid Nodules with High Specificity and Sensitivity

Authors: Sai Maniveer Adapa, Sai Guptha Perla, Adithya Reddy P.

Abstract:

Thyroid nodules can often be diagnosed with ultrasound imaging, although differentiating between benign and malignant nodules can be challenging for medical professionals. This work suggests a novel approach to increase the precision of thyroid nodule identification by combining machine learning and deep learning. The approach first extracts information from the ultrasound images using a deep learning method known as a convolutional autoencoder. A support vector machine, a type of machine learning model, is then trained on these features. With an accuracy of 92.52%, the support vector machine can differentiate between benign and malignant nodules. This technique may decrease the need for unnecessary biopsies and increase the accuracy of thyroid nodule detection.
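
A minimal sketch of the pipeline (random arrays standing in for ultrasound images; the architecture, sizes, and training budget are illustrative choices, not the authors' configuration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import SVC

# Stand-in data: 64x64 grayscale "ultrasound" patches with binary labels.
rng = np.random.default_rng(0)
x = rng.random((200, 64, 64, 1)).astype("float32")
y = rng.integers(0, 2, 200)

# Convolutional autoencoder: compress each image to a small feature map.
inp = keras.Input(shape=(64, 64, 1))
h = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
h = layers.MaxPooling2D(2)(h)
h = layers.Conv2D(8, 3, activation="relu", padding="same")(h)
code = layers.MaxPooling2D(2)(h)                      # 16x16x8 bottleneck
h = layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same")(code)
h = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(h)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(h)

autoencoder = keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=2, batch_size=32, verbose=0)  # reconstruct inputs

# Use the trained encoder as a feature extractor, then train the SVM.
encoder = keras.Model(inp, code)
features = encoder.predict(x, verbose=0).reshape(len(x), -1)
svm = SVC(kernel="rbf").fit(features, y)
print("train accuracy:", svm.score(features, y))
```

Training the autoencoder needs no labels, so the labeled nodules are spent only on the small SVM, which is the appeal of this split when annotated medical images are scarce.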

Keywords: thyroid tumor diagnosis, ultrasound images, deep learning, machine learning, convolutional auto-encoder, support vector machine

Procedia PDF Downloads 58
25194 Prompt Design for Code Generation in Data Analysis Using Large Language Models

Authors: Lu Song Ma Li Zhi

Abstract:

With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
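
A minimal sketch of the kind of prompt template such a strategy describes (the template wording and the `call_llm` stub are our own placeholders, not the authors' prompts or any specific vendor API):

```python
PROMPT_TEMPLATE = """You are a data analyst. Write Python (pandas) code only.

Dataset description:
{schema}

Analysis request (natural language):
{request}

Requirements:
- Return a single runnable code block, no explanations.
- Name the final result `result`.
"""

def build_prompt(schema: str, request: str) -> str:
    """Fill the template with the dataset schema and the user's request."""
    return PROMPT_TEMPLATE.format(schema=schema, request=request)

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call (vendor-specific in practice)."""
    raise NotImplementedError

prompt = build_prompt(
    schema="sales.csv: columns date (YYYY-MM-DD), region (str), revenue (float)",
    request="Monthly total revenue per region for 2023, sorted descending.",
)
print(prompt)  # feed this to call_llm(prompt) and execute the returned code
```

Pinning down the schema, the expected output name, and the "code only" constraint in the template is what makes the generated code executable without per-request manual cleanup; the feedback-and-adjustment loop the abstract mentions would wrap this call, re-prompting with the error message when execution fails.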

Keywords: large language models, prompt design, data analysis, code generation

Procedia PDF Downloads 39