Search results for: cardio data analysis
41366 Biomechanical Performance of the Synovial Capsule of the Glenohumeral Joint with a BANKART Lesion through Finite Element Analysis
Authors: Duvert A. Puentes T., Javier A. Maldonado E., Ivan Quintero, Diego F. Villegas
Abstract:
Mechanical computation is a powerful tool for studying the performance of complex models; one example is the study of human body structure. This paper took advantage of different types of software to build a 3D model of the glenohumeral joint and apply a finite element analysis. The main objective was to study the change in the biomechanical properties of the joint when it presents an injury, specifically a Bankart lesion, which consists of the detachment of the anteroinferior labrum from the glenoid. The stress and strain distribution of the soft tissues was the focus of this study. First, a 3D model of a joint without any pathology was built as a control sample, using segmentation software for the bones, with the support of medical imagery, and a cadaveric model to represent the soft tissue. The joint was assembled to simulate a compression and external rotation test, using CAD to prepare the model in the adequate position. When the healthy model was finished, it was submitted to a finite element analysis, and the results were validated against experimental model data. A mesh sensitivity analysis was then performed on the validated model to obtain the best mesh size. Finally, the geometry of the 3D model was changed to imitate a Bankart lesion: the contact zone of the glenoid with the labrum was slightly separated, simulating a tissue detachment. With this new geometry, the finite element analysis was applied again, and the results were compared with the control sample created initially. The data gathered can be used to improve understanding of labrum tears. Nevertheless, it is important to remember that computational analyses are approximations and that the initial data were taken from an in vitro assay.
Keywords: biomechanics, computational model, finite elements, glenohumeral joint, Bankart lesion, labrum
Procedia PDF Downloads 161

41365 Financial Reports and Common Ownership: An Analysis of the Mechanisms Common Owners Use to Induce Anti-Competitive Behavior
Authors: Kevin Smith
Abstract:
Publicly traded companies in the US are legally obligated to host earnings calls that discuss their most recent financial reports. During these calls, investors are able to ask these companies questions about the reports and about the future direction of the company. This paper examines whether common institutional owners use these calls as a way to indirectly signal to companies in their portfolio not to take actions that could hurt the common owner's interests. This paper uses transcripts taken from the earnings calls of the six largest health insurance companies in the US from 2014 to 2019. These data are analyzed using text analysis and sentiment analysis to look for patterns in the statements made by common owners. The analysis found that common owners were more likely to recommend against direct price competition and instead redirect the insurance companies towards more passive actions, like investing in new technologies. This result indicates a mechanism that common owners use to reduce competition in the health insurance market.
Keywords: common ownership, text analysis, sentiment analysis, machine learning
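As an illustration of the sentiment-scoring step this abstract describes, here is a minimal sketch assuming NLTK's VADER analyzer; the call excerpts and the threshold are invented placeholders, not the study's actual pipeline.

```python
# Minimal sketch of sentiment-scoring earnings-call statements.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

# Hypothetical excerpts attributed to institutional investors.
statements = [
    "We would discourage aggressive premium cuts to win market share.",
    "We encourage continued investment in claims-processing technology.",
]

sia = SentimentIntensityAnalyzer()
for text in statements:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    stance = "discouraging" if scores["compound"] < 0 else "encouraging"
    print(f"{stance:>12}: {scores['compound']:+.2f}  {text}")
```

In practice such scores would be aggregated per speaker and per topic (e.g., pricing versus technology) before looking for the patterns the study reports.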
Procedia PDF Downloads 74

41364 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image misregistration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impact the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction) are analyzed to find the performance limits and the dependence of the final image on the imaging geometry. The ability of Doppler centroid estimation to enhance imaging accuracy for registration is also illustrated in this paper. The paper also brings out the processing of low-squint SAR data, along with the challenges and the performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
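To make the Doppler centroid estimation stage concrete, the sketch below applies the classic pulse-pair (correlation) estimator to synthetic azimuth-correlated clutter; the paper does not state which estimator it uses, so the method, PRF, and signal model here are all assumptions.

```python
import numpy as np

prf = 1000.0      # pulse repetition frequency, Hz (assumed)
fdc_true = 123.0  # simulated Doppler centroid, Hz
n_rg, n_az = 64, 4096

rng = np.random.default_rng(0)
t = np.arange(n_az) / prf
# Synthetic raw data: azimuth-correlated clutter riding a Doppler offset.
amp = rng.standard_normal((n_rg, 1)) + 1j * rng.standard_normal((n_rg, 1))
noise = 0.1 * (rng.standard_normal((n_rg, n_az))
               + 1j * rng.standard_normal((n_rg, n_az)))
raw = amp * np.exp(2j * np.pi * fdc_true * t) + noise

# Average pulse-pair correlation along azimuth; its phase gives the
# fractional-PRF Doppler centroid. The integer PRF ambiguity must be
# resolved separately, e.g. from the aircraft attitude data.
acc = np.sum(raw[:, 1:] * np.conj(raw[:, :-1]))
fdc_est = prf * np.angle(acc) / (2.0 * np.pi)
print(f"estimated Doppler centroid: {fdc_est:.1f} Hz (true {fdc_true} Hz)")
```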
Procedia PDF Downloads 218

41363 Automatic Lead Qualification with Opinion Mining in Customer Relationship Management Projects
Authors: Victor Radich, Tania Basso, Regina Moraes
Abstract:
Lead qualification is one of the main procedures in Customer Relationship Management (CRM) projects. Its main goal is to identify potential consumers who have the ideal characteristics to establish a profitable and long-term relationship with a certain organization. Social networks can be an important source of data for identifying and qualifying leads since interest in specific products or services can be identified from the users’ expressed feelings of (dis)satisfaction. In this context, this work proposes the use of machine learning techniques and sentiment analysis as an extra step in the lead qualification process in order to improve it. In addition to machine learning models, sentiment analysis or opinion mining can be used to understand the evaluation that the user makes of a particular service, product, or brand. The results obtained so far have shown that it is possible to extract data from social networks and combine the techniques for a more complete classification.
Keywords: lead qualification, sentiment analysis, opinion mining, machine learning, CRM, lead scoring
Procedia PDF Downloads 85

41362 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands not only require more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste, because of data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient, carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communication channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take Auckland City in New Zealand as an example, assuming that massive data generated by various sources geographically scattered throughout the Auckland region must be delivered to control centres located in the city centre. The numerical results show that our proposed approach, transferring large volumes of data via the existing daily mobility of vehicles, can provide up to 5 times lower delay than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 142

41361 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at ameliorating food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimize supply chains, and enhance food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models that harness remote sensing data, historical crop yield records, and meteorological data to forecast crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization, capitalizing on linear programming and network optimization techniques. These strategies intend to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. The study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security, and the proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity. The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
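As a toy illustration of the linear programming element of the supply-chain step, the sketch below solves a small transportation problem with SciPy; every quantity (farms, centers, costs, tonnages) is invented.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],       # farm 0 -> centers 0..2, cost/tonne
                 [5.0, 3.0, 7.0]])      # farm 1 -> centers 0..2
supply = np.array([120.0, 150.0])       # tonnes available per farm
demand = np.array([80.0, 100.0, 90.0])  # tonnes required per center

n_f, n_c = cost.shape
c = cost.ravel()  # decision vars: x[i*n_c + j] = tonnes from farm i to center j

# Supply rows: sum_j x[i, j] <= supply[i]
A_ub = np.zeros((n_f, n_f * n_c))
for i in range(n_f):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1.0
# Demand rows: sum_i x[i, j] == demand[j]
A_eq = np.zeros((n_c, n_f * n_c))
for j in range(n_c):
    A_eq[j, j::n_c] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None), method="highs")
print(res.x.reshape(n_f, n_c))
print("minimum transport cost:", res.fun)
```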
Procedia PDF Downloads 62

41360 Forecasting Cancer Cases in Algeria Using Double Exponential Smoothing Method
Authors: Messis A., Adjebli A., Ayeche R., Talbi M., Tighilet K., Louardiane M.
Abstract:
Cancers are the second leading cause of death worldwide, and the prevalence and incidence of cancers are increasing with aging and population growth. This study aims to predict and model the evolution of breast, colorectal, lung, bladder and prostate cancers over the period 2014-2019. Data were analyzed using time series analysis with the double exponential smoothing method to forecast the future pattern. To describe and fit the appropriate models, Minitab statistical software version 17 was used. Between 2014 and 2019, the overall trend in the raw number of newly registered cancer cases was increasing over time. Our forecast model is validated by its good prediction for 2020; data were not yet available for 2021 and 2022. Time series analysis showed that double exponential smoothing is an efficient tool for modeling future data on the raw number of new cancer cases.
Keywords: cancer, time series, prediction, double exponential smoothing
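The named technique is standard enough to show in full. Below is a minimal implementation of double exponential smoothing (Holt's linear method); the yearly counts and smoothing weights are invented stand-ins, since the study fitted its models in Minitab 17.

```python
def double_exponential_smoothing(y, alpha, beta, horizon):
    """Return in-sample one-step fits plus `horizon` out-of-sample forecasts."""
    level, trend = y[0], y[1] - y[0]
    fits = [level + trend]
    for value in y[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fits.append(level + trend)
    forecasts = [level + (h + 1) * trend for h in range(horizon)]
    return fits, forecasts

# Hypothetical yearly counts of new cases, 2014-2019.
cases = [1180, 1250, 1345, 1410, 1495, 1570]
fits, forecasts = double_exponential_smoothing(cases, alpha=0.5, beta=0.3,
                                               horizon=3)
print("forecasts for 2020-2022:", [round(f) for f in forecasts])
```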
Procedia PDF Downloads 88

41359 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process: the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions’ decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using the part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment-related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model, for the whole data set and the new data subsets, that automatically detects their sentiment. The significance of this paper is in comparing the performance of the above four algorithms using the part-of-speech feature against the performance of the same algorithms using n-gram features. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task; on the second task, however, the models that used the part-of-speech feature underperformed in comparison with models that used unigrams and bigrams.
Keywords: assessment, part of speech, sentiment analysis, student feedback
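A sketch of the part-of-speech feature idea is shown below: each comment is replaced by its PoS-tag sequence, vectorized, and fed to one of the named classifiers (a linear SVM here). The comments and labels are invented; the study's feedback corpus is not public.

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_sequence(text):
    """Replace each token with its PoS tag, e.g. 'the exam' -> 'DT NN'."""
    tags = nltk.pos_tag(nltk.word_tokenize(text))
    return " ".join(tag for _, tag in tags)

comments = [
    "The exam questions were far too long.",           # assessment-related
    "Coursework deadlines clashed with other units.",  # assessment-related
    "The lab rooms were cold and crowded.",            # non-assessment
    "More seating is needed in the library.",          # non-assessment
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit([pos_sequence(c) for c in comments], labels)
print(model.predict([pos_sequence("The test was graded unfairly.")]))
```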
Procedia PDF Downloads 142

41358 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Earlier viewed as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between the operational data system and the information technology system is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenge of critical equipment performance, life cycle, integrity, and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, or systems, approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analysis in the oil and gas industry is redefining the dynamics of this sector. The paper is supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as the application for our analysis is that SAS provides an analytics-based framework to improve the uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, we can predict maintenance problems before they happen and determine root causes in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 360

41357 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 75

41356 The Comparison Study of Methanol and Water Extract of Chuanxiong Rhizoma: A Fingerprint Analysis
Authors: Li Chun Zhao, Zhi Chao Hu, Xi Qiang Liu, Man Lai Lee, Chak Shing Yeung, Man Fei Xu, Yuen Yee Kwan, Alan H. M. Ho, Nickie W. K. Chan, Bin Deng, Zhong Zhen Zhao, Min Xu
Abstract:
Background: Chuanxiong Rhizoma (Chuanxiong, CX) is one of the most frequently used herbs in Chinese medicine because of its wide therapeutic effects, such as vasorelaxation and anti-inflammation. Aim: The purposes of this study are (1) to perform non-targeted / targeted analyses of CX methanol extract and water extract, and compare the present data with previously published LC-MS and GC-MS fingerprints; and (2) to examine the difference between CX methanol extract and water extract, for preliminarily evaluating whether current compound markers of methanol extract from crude CX materials could be suitable for quality control of CX water extract. Method: CX methanol extract was prepared according to the Hong Kong Chinese Materia Medica Standards. CX water extract was prepared by boiling with pure water three times (one hour each). UHPLC-Q-TOF-MS/MS fingerprint analysis was performed on a C18 column (1.7 µm, 2.1 × 100 mm) with an Agilent 1290 Infinity system. Experimental data were analyzed with Agilent MassHunter software. A database was established based on 13 published LC-MS and GC-MS CX fingerprint analyses. In total, 18 targeted compounds in the database were selected as markers to compare the present data with previous data; these markers were also used to compare CX methanol extract and water extract. Result: (1) Non-targeted analysis indicated that 133 compounds were identified in CX methanol extract, while 325 compounds were identified in CX water extract, more than double that of the methanol extract. (2) Targeted analysis further indicated that 9 of the 18 targeted compounds were identified in CX methanol extract and 12 of the 18 in CX water extract, showing a lower loss rate for the water extract compared with the methanol extract. (3) Comparing CX methanol extract and water extract, Senkyunolide A (+1578%), ferulic acid (+529%) and Senkyunolide H (+169%) were significantly higher in the water extract. (4) Other bioactive compounds, such as tetramethylpyrazine, were only found in CX water extract. Conclusion: Many new compounds in both CX methanol and water extracts were found using UHPLC-Q-TOF-MS/MS analysis when compared with previously published reports. A new standard reference, including non-targeted compound profiling and targeted markers, intended especially for quality control of CX water extract (herbal decoction), should be established in the future. (This project was supported by Hong Kong Baptist University (FRG2/14-15/109) & Natural Science Foundation of Guangdong Province (2014A030313414).)
Keywords: Chuanxiong Rhizoma, fingerprint analysis, targeted analysis, quality control
Procedia PDF Downloads 495

41355 The Concentration Analysis of CO2 Using ALOHA Code for Kuosheng Nuclear Power Plant
Authors: W. S. Hsu, Y. Chiang, H. C. Chen, J. R. Wang, S. W. Chen, J. H. Yang, C. Shih
Abstract:
Not only radioactive materials but also the ordinary chemicals stored in a power plant can pose a risk to nearby residents. In this research, the ALOHA code was used to perform the concentration analysis under CO2 storage burst or leakage conditions for the Kuosheng nuclear power plant (NPP). The Final Safety Analysis Report (FSAR) and related data were used in this study. Additionally, the analysis results of the ALOHA code were compared with the R.G. 1.78 failure criteria in order to confirm control room habitability. The comparison shows that the ALOHA result for the burst case was 0.923 g/m³, below the criterion; however, the ALOHA result for the leakage case was 11.3 g/m³.
Keywords: BWR, ALOHA, habitability, Kuosheng
Procedia PDF Downloads 359

41354 Analysis of the Secondary Stationary Flow Around an Oscillating Circular Cylinder
Authors: Artem Nuriev, Olga Zaitseva
Abstract:
This paper is devoted to the study of a viscous incompressible flow around a circular cylinder performing harmonic oscillations, especially the steady streaming phenomenon. The research methodology is based on the asymptotic expansion method combined with computational bifurcation analysis. The present study identifies several regimes of secondary streaming with different flow structures. The results of the research are in good agreement with experimental and numerical simulation data.
Keywords: oscillating cylinder, secondary streaming, flow regimes, asymptotic and bifurcation analysis
Procedia PDF Downloads 435

41353 The Effect of Artificial Intelligence on the Production of Agricultural Lands and Labor
Authors: Ibrahim Makram Ibrahim Salib
Abstract:
Agriculture plays an essential role in providing food for the world's population. It also offers numerous benefits to countries, including non-food products, transportation, and environmental balance. Precision agriculture, which employs advanced tools to monitor variability and manage inputs, can help achieve these benefits. The increasing demand for food security puts pressure on decision-makers to ensure sufficient food production worldwide. To support sustainable agriculture, unmanned aerial vehicles (UAVs) can be utilized to manage farms and increase yields. This paper aims to provide an understanding of UAV usage and its applications in agriculture. The objective is to review the various applications of UAVs in agriculture. Based on a comprehensive review of existing research, it was found that different sensors provide varying analyses for agriculture applications. Therefore, the purpose of the project must be determined before using UAV technology for better data quality and analysis. In conclusion, identifying a suitable sensor and UAV is crucial to gather accurate data and precise analysis when using UAVs in agriculture.
Keywords: agriculture land, agriculture land loss, Kabul city, urban land expansion, urbanization, agriculture yield growth, agriculture yield prediction, explorative data analysis, predictive models, regression models, drone, precision agriculture, farmer income
Procedia PDF Downloads 74

41352 A Study on the Conspicuous Consumption, Involvement and Physical and Mental Health of Pet Owners
Authors: Chi-Yueh Hsu, Hsuan-Liang Hsu, Hsiu-Hui Chiang
Abstract:
This study explores the relationship between conspicuous consumption, leisure involvement, and physical and mental health, and examines how well conspicuous consumption and leisure involvement predict physical and mental health. Data were collected by purposive sampling; the research subjects were dog walkers in Taiwan. A total of 300 questionnaires were issued, and after removing invalid questionnaires, 246 valid samples were collected, an effective rate of 82%. The data were analyzed by correlation analysis and multiple stepwise regression analysis. The results showed a significant correlation between conspicuous consumption and leisure involvement, and that the conspicuous consumption and leisure involvement of dog walkers have a significant impact on physical and mental health; in particular, the self-expression, attractiveness, and centrality dimensions of leisure involvement have a significant impact on physical and mental health.
Keywords: walking dog, attractiveness, self-expression, multiple stepwise regression analysis
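As a sketch of the multiple stepwise regression reported here, the function below performs forward selection with OLS, admitting the predictor with the smallest p-value at each step; the data are randomly generated stand-ins for the questionnaire scales.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 246  # matches the study's number of valid samples
X = pd.DataFrame({
    "self_expression": rng.normal(size=n),
    "attractiveness": rng.normal(size=n),
    "centrality": rng.normal(size=n),
    "conspicuous_consumption": rng.normal(size=n),
})
# Simulated physical-and-mental-health score.
y = 0.5 * X["self_expression"] + 0.3 * X["attractiveness"] + rng.normal(size=n)

def forward_stepwise(X, y, alpha=0.05):
    """Add the candidate with the smallest p-value until none pass alpha."""
    selected = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        if not remaining:
            break
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return selected

print("selected predictors:", forward_stepwise(X, y))
```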
Procedia PDF Downloads 261

41351 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human error, but quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
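For illustration only, here is a sketch of the two calculations the spreadsheets automate, written as generic complexometric-titration formulas with invented bench values; it is not a transcription of the exact USP monograph equations, which should be consulted for real QC work.

```python
MW_ZN = 65.38            # g/mol, zinc
MW_ZNSO4_7H2O = 287.54   # g/mol, zinc sulphate heptahydrate (assumed form)

def edta_molarity(zinc_mass_g, titre_ml):
    """Standardize EDTA against a weighed zinc standard (1:1 complex)."""
    moles_zn = zinc_mass_g / MW_ZN
    return moles_zn / (titre_ml / 1000.0)

def znso4_per_tablet_mg(titre_ml, edta_molar, tablets_in_sample):
    """Assay: moles of EDTA consumed equal moles of Zn2+ in the sample."""
    moles_zn = edta_molar * titre_ml / 1000.0
    return moles_zn * MW_ZNSO4_7H2O * 1000.0 / tablets_in_sample

m = edta_molarity(zinc_mass_g=0.1635, titre_ml=25.02)  # about 0.1 M
mg = znso4_per_tablet_mg(titre_ml=19.85, edta_molar=m, tablets_in_sample=1)
print(f"EDTA molarity: {m:.4f} M; ZnSO4.7H2O per tablet: {mg:.1f} mg")
```

Replicate-validity checks, like the ones the authors built into their spreadsheets, would wrap these functions with tolerance tests on duplicate titres.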
Procedia PDF Downloads 169

41350 Geothermal Energy Evaluation of Lower Benue Trough Using Spectral Analysis of Aeromagnetic Data
Authors: Stella C. Okenu, Stephen O. Adikwu, Martins E. Okoro
Abstract:
The geothermal energy resource potential of the Lower Benue Trough (LBT) in Nigeria was evaluated in this study using spectral analysis of high-resolution aeromagnetic (HRAM) data. The reduced-to-the-equator aeromagnetic data was divided into sixteen (16) overlapping blocks, and each of the blocks was analyzed to obtain the radial averaged power spectrum, which enabled the computation of the top and centroid depths to magnetic sources. The values were then used to assess the Curie Point Depth (CPD), geothermal gradients, and heat flow variations in the study area. Results showed that CPD varies from 7.03 to 18.23 km, with an average of 12.26 km; geothermal gradient values vary between 31.82 and 82.50°C/km, with an average of 51.21°C/km, while heat flow variations range from 79.54 to 206.26 mW/m², with an average of 128.02 mW/m². Shallow CPD zones that run from the eastern through the western and southwestern parts of the study area correspond to zones of high geothermal gradient values and high subsurface heat flow distributions. These areas signify zones associated with anomalous subsurface thermal conditions and are therefore recommended for detailed geothermal energy exploration studies.
Keywords: geothermal energy, curie-point depth, geothermal gradient, heat flow, aeromagnetic data, LBT
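The depth-to-heat-flow chain in such studies follows well-known spectral-method relations (e.g., Tanaka et al., 1999), sketched below; the 580 °C Curie temperature of magnetite and the thermal conductivity are conventional assumptions, and the block depths are illustrative rather than the paper's values.

```python
CURIE_TEMP_C = 580.0  # Curie temperature of magnetite, deg C (assumed)
CONDUCTIVITY = 2.5    # thermal conductivity, W/(m.K) (assumed)

def curie_point_depth(z_top_km, z_centroid_km):
    """Basal depth of the magnetic source: Zb = 2*Zc - Zt."""
    return 2.0 * z_centroid_km - z_top_km

def geothermal_gradient(z_b_km):
    """Assume a linear gradient from the surface down to the CPD, deg C/km."""
    return CURIE_TEMP_C / z_b_km

def heat_flow_mw_m2(z_b_km):
    """Fourier's law q = k * dT/dz; k in W/(m.K) times deg C/km gives mW/m^2."""
    return CONDUCTIVITY * geothermal_gradient(z_b_km)

for z_t, z_c in [(0.9, 4.0), (1.2, 7.5)]:  # illustrative block depths, km
    z_b = curie_point_depth(z_t, z_c)
    print(f"Zb = {z_b:5.2f} km, gradient = {geothermal_gradient(z_b):5.1f} C/km, "
          f"heat flow = {heat_flow_mw_m2(z_b):6.1f} mW/m^2")
```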
Procedia PDF Downloads 76

41349 Geospatial Data Complexity in Electronic Airport Layout Plan
Authors: Shyam Parhi
Abstract:
The Airports GIS program collects airport data, validates and verifies it, and stores it in a specific database. Airports GIS allows authorized users to submit changes to airport data. The verified data are used to develop several engineering applications. One of these applications is the electronic Airport Layout Plan (eALP), whose primary aim is to move from the paper to the digital form of the ALP. The first phase of development of the eALP was completed recently, and it was tested for a few pilot-program airports across different regions. We conducted a gap analysis and noticed that a lot of development work is needed to fine-tune at least six mandatory sheets of the eALP. It is important to note that a significant amount of programming is needed to move from out-of-the-box ArcGIS to a much-customized ArcGIS, which will be discussed. The ArcGIS viewer's capability to display essential features, like a runway or taxiway and the perpendicular distance between them, will be discussed. An enterprise-level workflow that incorporates the coordination process among different lines of business will be highlighted.
Keywords: geospatial data, geology, geographic information systems, aviation
Procedia PDF Downloads 416

41348 Assessing the Danger Factors Correlated With Dental Fear: An Observational Study
Authors: Mimoza Canga, Irene Malagnino, Giulia Malagnino, Alketa Qafmolla, Ruzhdie Qafmolla, Vito Antonio Malagnino
Abstract:
The goal of the present study was to analyze the risk factors associated with dental fear. This observational study was conducted during the period February 2020 - April 2022 in Albania. The sample was composed of 200 participants, of whom 40% were males and 60% were females. The participants' ages ranged from 35 to 75 years old, divided into four age groups: 35-45, 46-55, 56-65, and 66-75 years old. Statistical analysis was performed using IBM SPSS Statistics 23.0. Data were scrutinized by the post hoc LSD test in analysis of variance (ANOVA); P ≤ 0.05 values were considered significant, and data analysis included the 95% confidence interval (CI). The most represented age range in the sample was 56 to 65 years old (35.6% of the patients). Regarding the fear that the dentist may be infected with COVID-19, 50% of the patients had high dental fear, 12.2% had low dental fear, and 37.8% had extreme dental fear. Data collected from the current study also indicated that a large proportion of patients, 49.5%, had high dental fear that the dentist might not have respected quarantine due to COVID-19, compared with 37.2% who had low dental fear and 13.3% who had extreme dental fear. The present study confirmed that, regarding poor hygiene practices of the dentist, which have been associated with the transmission of COVID-19 infection, 22.2% of participants had extreme fear, 57.8% had high fear, and 20% had low fear. The present study also showed that 50% of the patients stated that another factor causing extreme fear was pain felt after interventions in the oral cavity. Strong associations were observed between dental fear and pain (95% CI: 0.24-0.52, P ˂ .0001). The results also confirmed strong associations between dental fear and the fear that the dentist may be infected with COVID-19 (95% CI: 0.46-0.70, P ˂ .0001). Similarly, the analysis demonstrated a statistically significant correlation between dental fear and the poor hygiene practices of the dentist (95% CI: 0.82-1.02, P ˂ .0001). On the basis of our statistical analysis, the dentist not respecting quarantine due to COVID-19 also had a significant impact on dental fear (P ˂ .0001). This study identifies important risk factors that significantly increase dental fear.
Keywords: COVID-19, dental fear, pain, past dreadful experiences
Procedia PDF Downloads 140

41347 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools for analyzing such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain, with much attention given to systems that manage the phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies, and serves to demonstrate the applicability of micro-service architectures to the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e., importing a GWAS dataset) and for the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
Procedia PDF Downloads 270

41346 Uncertainty and Optimization Analysis Using PETREL RE
Authors: Ankur Sachan
Abstract:
The ability to make quick yet intelligent, value-added decisions to develop new fields has always been of great significance. In situations where the capital expenses and subsurface risk are high, carefully analyzing the inherent uncertainties in the reservoir, and how they impact the predicted hydrocarbon accumulation and production, becomes a daunting task. The problem is compounded in offshore environments, especially in the presence of heavy oils and disconnected sands, where the margin for error is small. Uncertainty refers to the degree to which the data set may be in error or stray from the predicted values. Understanding and quantifying the uncertainties in the reservoir model is important when estimating reserves. Uncertainty parameters can be geophysical, geological, petrophysical, etc., and their identification is necessary to carry out the uncertainty analysis. With so many uncertainties working at different scales, it becomes essential to have a consistent and efficient way of incorporating them into the analysis. Ranking the uncertainties based on their impact on reserves helps to prioritize and guide future data gathering and uncertainty reduction efforts. Assigning probabilistic ranges to key uncertainties also enables the computation of probabilistic reserves. With this in mind, this paper, with the help of the uncertainty and optimization process in Petrel RE, shows how the most influential uncertainties can be determined efficiently and how much impact they have on the reservoir model, thus helping to determine a cost-effective and accurate model of the reservoir.
Keywords: uncertainty, reservoir model, parameters, optimization analysis
Procedia PDF Downloads 651

41345 Comparative Analysis of the Computer Methods' Usage for Calculation of Hydrocarbon Reserves in the Baltic Sea
Authors: Pavel Shcherban, Vlad Golovanov
Abstract:
Nowadays, the depletion of onshore hydrocarbon deposits in the Kaliningrad region is leading to active geological exploration and development of oil and natural gas reserves in the southeastern part of the Baltic Sea. LLC 'Lukoil-Kaliningradmorneft' is implementing a comprehensive program for the development of the region's shelf in 2014-2023. Due to the heterogeneity of reservoir rocks in the various open fields, as well as ambiguous conclusions on the contours of deposits, additional geological prospecting and refinement of the recoverable oil reserves are carried out. The key element is the use of an effective computer reserve-modeling technique at the first stage of processing the received data. The following step uses this information for cluster analysis, which makes it possible to optimize the field development approaches. The article analyzes the effectiveness of various methods for calculating reserves and for computer modelling of offshore hydrocarbon fields. Cluster analysis makes it possible to measure the influence of the obtained data on the development of a technical and economic model for mining the deposits. A relationship is observed between the accuracy of the calculation of recoverable reserves and the need to modernize the existing mining infrastructure, as well as the optimization of the scheme for opening and developing oil deposits.
Keywords: cluster analysis, computer modelling of deposits, correction of the feasibility study, offshore hydrocarbon fields
Procedia PDF Downloads 166

41344 Application of Association Rule Using Apriori Algorithm for Analysis of Industrial Accidents in 2013-2014 in Indonesia
Authors: Triano Nurhikmat
Abstract:
Along with the progress of science and technology, industrialization in Indonesia has taken place very rapidly. This has accelerated the industrialization of Indonesian society, with the establishment of diverse companies and workplaces. Industrial development is tied to worker activity, and these work activities carry the possibility of an accident involving either the workers or a construction project. The causes of industrial accidents include electrical damage, faulty work procedures, and technical errors. The association rule method is one of the main techniques in data mining and is the form most commonly used in finding patterns in a data collection. This research seeks to determine the association relationships among the incidences of industrial accidents. Using association rule analysis, patterns were obtained from two-iteration itemsets (2-large itemsets) relating the factors of industrial accidents: the rule associating West Jakarta with industrial accidents caused by electrical damage had a support value of 0.2 and a confidence value of 1, and the reverse pattern had a support value of 0.2 and a confidence of 0.75.
Keywords: association rule, data mining, industrial accidents, rules
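A self-contained sketch of the Apriori idea for the 2-itemset rules the abstract reports is given below; the accident records are invented stand-ins shaped like the study's (region, cause) items.

```python
from collections import Counter
from itertools import combinations

# Each transaction: the attributes recorded for one industrial accident.
accidents = [
    {"West Jakarta", "electrical damage"},
    {"West Jakarta", "work procedure fault"},
    {"East Jakarta", "technical error"},
    {"West Jakarta", "electrical damage"},
    {"North Jakarta", "work procedure fault"},
]
n = len(accidents)
min_support, min_conf = 0.2, 0.75

# Pass 1: frequent single items (the Apriori pruning step).
item_counts = Counter(item for t in accidents for item in t)
frequent1 = {i for i, c in item_counts.items() if c / n >= min_support}

# Pass 2: count candidate pairs built only from frequent single items.
pair_counts = Counter()
for t in accidents:
    for pair in combinations(sorted(t & frequent1), 2):
        pair_counts[pair] += 1

for (a, b), c in pair_counts.items():
    support = c / n
    if support < min_support:
        continue
    for ante, cons in [(a, b), (b, a)]:
        confidence = c / item_counts[ante]
        if confidence >= min_conf:
            print(f"{ante} -> {cons}: support {support:.2f}, "
                  f"confidence {confidence:.2f}")
```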
Procedia PDF Downloads 299

41343 Analytical Modelling of Surface Roughness during Compacted Graphite Iron Milling Using Ceramic Inserts
Authors: Ş. Karabulut, A. Güllü, A. Güldaş, R. Gürbüz
Abstract:
This study investigates the effects of the lead angle and chip thickness variation on surface roughness during the machining of compacted graphite iron using ceramic cutting tools under dry cutting conditions. Analytical models were developed for predicting the surface roughness values of the specimens after the face milling process. Experimental data was collected and imported to the artificial neural network model. A multilayer perceptron model was used with the back propagation algorithm employing the input parameters of lead angle, cutting speed and feed rate in connection with chip thickness. Furthermore, analysis of variance was employed to determine the effects of the cutting parameters on surface roughness. Artificial neural network and regression analysis were used to predict surface roughness. The values thus predicted were compared with the collected experimental data, and the corresponding percentage error was computed. Analysis results revealed that the lead angle is the dominant factor affecting surface roughness. Experimental results indicated an improvement in the surface roughness value with decreasing lead angle value from 88° to 45°.
Keywords: CGI, milling, surface roughness, ANN, regression, modeling, analysis
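A sketch of the ANN part of the modelling is shown below: a multilayer perceptron trained by backpropagation on (lead angle, cutting speed, feed rate) inputs; the data points are invented, not the study's CGI milling measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# [lead angle (deg), cutting speed (m/min), feed (mm/tooth)] -> Ra (um)
X = np.array([[45, 150, 0.10], [45, 200, 0.15], [60, 150, 0.10],
              [60, 200, 0.15], [88, 150, 0.10], [88, 200, 0.15],
              [45, 175, 0.12], [88, 175, 0.12]], dtype=float)
ra = np.array([0.82, 0.95, 1.05, 1.21, 1.64, 1.88, 0.90, 1.75])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, ra)
pred = model.predict([[60, 175, 0.12]])
print(f"predicted Ra at 60 deg lead angle: {pred[0]:.2f} um")
```

A regression model fitted to the same points would provide the comparison baseline the abstract mentions.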
Procedia PDF Downloads 448

41342 Trend Analysis of Africa’s Entrepreneurial Framework Conditions
Authors: Sheng-Hung Chen, Grace Mmametena Mahlangu, Hui-Cheng Wang
Abstract:
This study aims to explore trends in the Entrepreneurial Framework Conditions (EFCs) in the five African regions. The Global Entrepreneurship Monitor (GEM) is the primary source of data: the data drawn were organized into a panel (2000-2021) and obtained from the National Expert Survey (NES) databases as harmonized by the GEM. The methodology used is descriptive, relying mainly on charts and tables, in line with the approach used by the GEM. The GEM draws its data from the NES, a survey administered to experts in each country; it collects entrepreneurship data specific to each country and provides information about entrepreneurial ecosystems and their impact on entrepreneurship. The secondary source is the literature review. This study focuses on the following GEM indicators, based on the GEM Report 2020/21: Financing for Entrepreneurs, Government Support and Policies, Taxes and Bureaucracy, Government Programs, Basic School Entrepreneurial Education and Training, Post-School Entrepreneurial Education and Training, R&D Transfer, Commercial and Professional Infrastructure, Internal Market Dynamics, Internal Market Openness, Physical and Service Infrastructure, and Cultural and Social Norms. The limitation of the study is the lack of updated data from some countries: countries have to fund their own regional studies, and African countries do not participate regularly due to a lack of resources.
Keywords: trend analysis, entrepreneurial framework conditions (EFCs), African region, government programs
Procedia PDF Downloads 71

41341 Analysis of Airborne Data Using Range Migration Algorithm for the Spotlight Mode of Synthetic Aperture Radar
Authors: Peter Joseph Basil Morris, Chhabi Nigam, S. Ramakrishnan, P. Radhakrishna
Abstract:
This paper brings out the analysis of airborne Synthetic Aperture Radar (SAR) data using the Range Migration Algorithm (RMA) for the spotlight mode of operation. Unlike the polar format algorithm (PFA), RMA mitigates space-variant defocusing and geometric distortion effects, since it does not assume that the illuminating wavefronts are planar. This facilitates the use of RMA for imaging scenarios involving severe differential range curvatures, enabling the imaging of larger scenes at fine resolution, at shorter ranges, and with low center frequencies. The RMA for the spotlight mode of SAR is analyzed in this paper using airborne data. Pre-processing operations, viz. range deskew and motion compensation to a line, are performed on the raw data before it is fed to the RMA component. The various stages of the RMA, viz. 2D matched filtering, along-track Fourier transform and Stolt interpolation, are analyzed to find the performance limits and the dependence of the final image on the imaging geometry. The ability of RMA to compensate for severe differential range curvatures in the two-dimensional spatial frequency domain is also illustrated in this paper.
Keywords: range migration algorithm, spotlight SAR, synthetic aperture radar, matched filtering, Stolt interpolation
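The Stolt interpolation stage can be sketched as a per-row resampling in the wavenumber domain, as below; the array sizes, wavenumber ranges, and the uniform stand-in data are placeholders, and a real RMA chain would apply this after the 2D matched filter.

```python
import numpy as np

n_az, n_rg = 256, 512
k_y = np.linspace(-50.0, 50.0, n_az)   # along-track wavenumber, rad/m
k_r = np.linspace(150.0, 250.0, n_rg)  # radial wavenumber, rad/m
data = np.ones((n_az, n_rg), dtype=complex)  # stand-in for filtered data

# Uniform output grid; the Stolt map is k_x = sqrt(k_r^2 - k_y^2).
k_x = np.linspace(np.sqrt(k_r[0] ** 2 - k_y.max() ** 2), k_r[-1], n_rg)

stolt = np.zeros((n_az, n_rg), dtype=complex)
for i, ky in enumerate(k_y):
    # Where each uniform k_x sample falls on this row's k_r axis.
    k_r_needed = np.sqrt(k_x ** 2 + ky ** 2)
    valid = (k_r_needed >= k_r[0]) & (k_r_needed <= k_r[-1])
    stolt[i, valid] = (np.interp(k_r_needed[valid], k_r, data[i].real)
                       + 1j * np.interp(k_r_needed[valid], k_r, data[i].imag))

image = np.fft.ifft2(stolt)  # a 2D inverse FFT then yields the focused image
```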
Procedia PDF Downloads 241

41340 The Establishment and Application of TRACE/FRAPTRAN Model for Kuosheng Nuclear Power Plant
Authors: S. W. Chen, W. K. Lin, J. R. Wang, C. Shih, H. T. Lin, H. C. Chang, W. Y. Li
Abstract:
The Kuosheng nuclear power plant (NPP) is a BWR/6-type NPP located on the northern coast of Taiwan. First, a Kuosheng NPP TRACE model was developed in this research, and startup test data were used to assess the system response of the model. Second, an overpressurization transient analysis of the Kuosheng NPP TRACE model was performed. In addition, in order to confirm the mechanical properties and integrity of the fuel rods, a FRAPTRAN analysis was also performed in this study.
Keywords: TRACE, safety analysis, BWR/6, FRAPTRAN
Procedia PDF Downloads 563

41339 The Impact of Data Science on Geography: A Review
Authors: Roberto Machado
Abstract:
We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.
Keywords: data science, geography, systematic review, optimization algorithms, supervised learning
Procedia PDF Downloads 29

41338 Spatially Random Sampling for Retail Food Risk Factors Study
Authors: Guilan Huang
Abstract:
In 2013 and 2014, the U.S. Food and Drug Administration (FDA) collected data from selected fast food restaurants and full service restaurants to track changes in the occurrence of foodborne illness risk factors. This paper discusses how we customized the spatial random sampling method in consideration of the financial position and availability of FDA resources, and how we enriched the restaurant data with location. Location information for restaurants provides the opportunity to quantitatively determine random sampling within non-government units (e.g., within 240 kilometers around each data collector). Spatial analysis could also optimize data collectors' work plans and resource allocation. A spatial analytics and processing platform helped us handle the spatial random sampling challenges. Our method fits the FDA's ability to pinpoint features of foodservice establishments, and it reduced both the time and expense of data collection.
Keywords: geospatial technology, restaurant, retail food risk factor study, spatially random sampling
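The radius-constrained sampling idea can be sketched as a great-circle filter followed by a simple random draw; the coordinates and sample size below are invented, with only the 240 km working radius taken from the abstract.

```python
import math
import random

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2.0 * math.asin(math.sqrt(a))

collector = (38.90, -77.04)  # hypothetical data-collector location
restaurants = [("A", 39.29, -76.61), ("B", 40.71, -74.01),
               ("C", 38.80, -77.05), ("D", 35.23, -80.84)]

in_range = [r for r in restaurants
            if haversine_km(*collector, r[1], r[2]) <= 240.0]
random.seed(7)
sample = random.sample(in_range, k=min(2, len(in_range)))
print("sampled establishments:", [name for name, *_ in sample])
```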
Procedia PDF Downloads 350

41337 Identification of CLV for Online Shoppers Using RFM Matrix: A Case Based on Features of B2C Architecture
Authors: Riktesh Srivastava
Abstract:
Online shopping has undergone an astonishing evolution in the last few years, and it is now apparent that the B2C architecture is becoming a progressively important channel even for traditional brick-and-mortar traders. In this competition, knowing customers and predicting their behavior are extremely important. More importantly, when any customer logs onto the B2C architecture, the traces of their buying patterns can be stored and used for future predictions. Such a prediction is called Customer Lifetime Value (CLV). Earlier, Net Present Value was used to estimate it; however, it ignores two important aspects of the B2C architecture: market risks and the big amount of customer data. Here, we use RFM (Recency, Frequency, and Monetary Value) to estimate the CLV, and as the terms exemplify, market risk is well covered. Big data analysis is also accommodated in RFM, which gives a real exploration of the big data and leads to a better estimation of future cash flow from customers. In the present paper, 6 factors (collected from varied sources) are used to determine what attracts customers to the B2C architecture. For these 6 factors, RFM is computed for 3 years (2013, 2014 and 2015). CLV and revenue are the two parameters defined using the RFM analysis, which gives a clear picture of the future predictions.
Keywords: CLV, RFM, revenue, recency, frequency, monetary value
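The RFM computation itself is mechanical enough to sketch; the order log below and the three-level scoring are invented stand-ins for the paper's 2013-2015 shopper data.

```python
import pandas as pd

orders = pd.DataFrame({
    "customer": ["ann", "ann", "bob", "bob", "bob", "eve"],
    "date": pd.to_datetime(["2015-01-10", "2015-11-03", "2015-06-21",
                            "2015-09-02", "2015-12-18", "2014-03-30"]),
    "amount": [120.0, 80.0, 45.0, 60.0, 150.0, 30.0],
})

snapshot = pd.Timestamp("2016-01-01")  # "today" for recency purposes
rfm = orders.groupby("customer").agg(
    recency_days=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)
# Rank-based 1-3 scores (3 = best); real studies often use quintiles.
rfm["R"] = pd.qcut(rfm["recency_days"].rank(method="first"), 3, labels=[3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3])
rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 3, labels=[1, 2, 3])
print(rfm)
```

Segments such as high-R, high-F, high-M customers would then feed the CLV and revenue estimates the paper defines.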
Procedia PDF Downloads 220