Search results for: linked data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26408

22358 Digital Twin for Retail Store Security

Authors: Rishi Agarwal

Abstract:

Digital twins are emerging as a powerful technology used to imitate and monitor physical objects digitally in real time across sectors. A digital twin does not only operate in the digital space; it also actuates responses in the physical space based on digital processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins for enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices, such as manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must manage safety without human intervention. To address this, the paper proposes implementing an intelligent digital twin framework. The framework collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines different aspects of the technology through both narrow and wide lenses to support decision makers in deciding whether to adopt it. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors and how data sharing can create a safer world for both retail store customers and owners.

Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety

Procedia PDF Downloads 75
22357 Determinant Factor Analysis of Foreign Direct Investment in Asean-6 Countries Period 2004-2012

Authors: Eleonora Sofilda, Ria Amalia, Muhammad Zilal Hamzah

Abstract:

Foreign direct investment is one of the sources of financing or capital that is important for a country, especially for developing countries. This investment also makes a great contribution to development through the transfer of assets, improved management, and the transfer of technology that enhances a country's economy. At the same time, an interesting phenomenon is currently emerging in ASEAN countries, where some big producers are relocating their basic production among those countries. This research aims to analyze the factors that affect capital inflows of foreign direct investment into six ASEAN countries (Indonesia, Malaysia, Singapore, Thailand, the Philippines, and Vietnam) in the period 2004-2012. The study uses panel data analysis to determine the factors that affect foreign direct investment in the six ASEAN countries. The factors examined are gross domestic product (GDP), global competitiveness (GCI), interest rate, exchange rate, and trade openness (TO). The results of the panel data analysis show that three independent variables (GCI, GDP, and TO) have a significant effect on FDI in the six ASEAN countries.

Keywords: foreign direct investment, the gross domestic product, global competitiveness, interest rate, exchange rate, trade openness, panel data analysis

Procedia PDF Downloads 477
22356 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study

Authors: Jakkireddy Sravani

Abstract:

Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized raised ICP, regulation and maintenance of intracerebral perfusion, and avoidance of secondary systemic ischemic insults. We aimed to compare the effects of a combination of ketamine and propofol with propofol alone when used as an induction and maintenance anesthetic agent during supratentorial tumor surgery. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022 dated 15/10/2022), CTRI/2023/01/049298 registration, and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study. The patients were randomized into two groups. One group received a combination of ketamine and propofol, and the other group received propofol for induction and maintenance of anesthesia. Intraoperative hemodynamic stability and quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet program was used to code and record the data. Data analysis was done using IBM Corp SPSS v23. The independent sample "t" test was applied to continuous data when two groups were compared, the chi-square test to categorical data, and the Wilcoxon test to data that were not normally distributed. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine and propofol combination group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364). Conclusion: The ketamine and propofol combination for induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone.

Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol

Procedia PDF Downloads 33
22355 Assessment of Dimensions and Gully Recovery With GPS Receiver and RPA (Drone)

Authors: Mariana Roberta Ribeiro, Isabela de Cássia Caramello, Roberto Saverio Souza Costa

Abstract:

Currently, one of the most important environmental problems is soil degradation. This degradation is the result of inadequate agricultural practices, with water erosion as the main agent. Where runoff water concentrates at certain points, erosion can reach a more advanced stage: gullies. In view of this, the objective of this work was to evaluate which methodology is most suitable for the purpose of elaborating a project for the recovery of a gully, relating work time, data reliability, and final cost. The work was carried out on a rural road in Monte Alto - SP, where 0.30 hectares of area are under the influence of a gully. For the evaluation, an aerophotogrammetric survey with RPA was used, with georeferenced control points surveyed with a GNSS L1/L2 receiver. To assess the importance of the georeferenced points, altimetric data obtained with the support points were compared with altimetric data obtained using only the aircraft's internal GPS. Another method used was a survey by conventional topography, in which coordinates were collected with a total station and an L1/L2 geodetic GPS receiver. Statistical analysis was performed using analysis of variance (ANOVA) with the F test (p<0.05), and the means between treatments were compared using the Tukey test (p<0.05). The results showed that the surveys carried out by aerial photogrammetry and by conventional topography showed no significant difference for the analyzed parameters. Considering the data presented, it is possible to conclude that, when comparing the parameters of accuracy, final volume of the gully, and cost for the purpose of elaborating a gully recovery project, the aerial photogrammetric survey and conventional topography methodologies do not differ significantly. However, when working time, use of labor, and project detail are compared, the aerial photogrammetric survey proves to be more viable.

Keywords: drones, erosion, soil conservation, technology in agriculture

Procedia PDF Downloads 119
22354 Hydrocarbon Source Rocks of the Maragh Low

Authors: Elhadi Nasr, Ibrahim Ramadan

Abstract:

Biostratigraphical analyses of well sections from the Maragh Low in the Eastern Sirt Basin have allowed high-resolution correlations to be undertaken. Full integration of this data with available palaeoenvironmental, lithological, gravity, seismic, aeromagnetic, igneous, radiometric and wireline log information, together with a geochemical analysis of source rock quality and distribution, has led to a more detailed understanding of the geological and structural history of this area. Below the Sirt Unconformity, two superimposed rifting cycles have been identified. The oldest is represented by the Amal Group of sediments and is of Late Carboniferous, Kasimovian / Gzelian to Middle Triassic, Anisian age. Unconformably overlying it is a younger rift cycle, which is represented by the Sarir Group of sediments and is of Early Cretaceous, late Neocomian to Aptian age. Overlying the Sirt Unconformity is the marine Late Cretaceous section. An assessment of pyrolysis results and a palynofacies analysis has allowed hydrocarbon source facies and quality to be determined. There are a number of hydrocarbon source rock horizons in the Maragh Low; these are sometimes vertically stacked, and they are of fair to excellent quality. The oldest identified source rock is the Triassic Shale; this unit is unconformably overlain by sandstones belonging to the Sarir Group and conformably overlies a Triassic Siltstone unit. Palynological dating of the Triassic Shale unit indicates a Middle Triassic, Anisian age. The Triassic Shale is interpreted to have been deposited in a lacustrine palaeoenvironment. This is particularly evidenced by the dark, fine-grained, organic-rich nature of the sediment and is supported by palynofacies analysis and by the recovery of fish fossils. Geochemical analysis of the Triassic Shale indicates total organic carbon varying between 1.37% and 3.53%. S2 pyrolysate yields vary between 2.15 mg/g and 6.61 mg/g, and hydrogen indices vary between 156.91 and 278.91. The source quality of the Triassic Shale varies from fair to very good / rich. Linked to its thermal maturity, it is now a very good source for light oil and gas. It was once a very good to rich oil source. The Early Barremian Shale was also deposited in a lacustrine palaeoenvironment. Recovered palynomorphs indicate an Early Cretaceous, late Neocomian to early Barremian age. The Early Barremian Shale is conformably underlain and overlain by sandstone units belonging to the Sarir Group of sediments, which are also of Early Cretaceous age. Geochemical analysis of the Early Barremian Shale indicates that it is a good oil source and was originally very good. Total organic carbon varies between 3.59% and 7%. S2 varies between 6.30 mg/g and 10.39 mg/g, and the hydrogen indices vary between 148.4 and 175.5. A Late Barremian Shale unit has also been identified in the central Maragh Low. Geochemical analyses indicate that total organic carbon varies between 1.05% and 2.38%, S2 pyrolysate between 1.6 and 5.34 mg/g, and the hydrogen index between 152.4 and 224.4. It is a good oil source rock which is now mature. In addition to the non-marine hydrocarbon source rocks below the Sirt Unconformity, three formations in the overlying Late Cretaceous section also provide hydrocarbon-quality source rocks. Interbedded shales within the Rachmat Formation of Late Cretaceous, early Campanian age have total organic carbon ranging between 0.7% and 1.47%, S2 pyrolysate varying between 1.37 and 4.00 mg/g, and hydrogen indices varying between 195.7 and 272.1. The indication is that this unit would provide a fair gas source to a good oil source. Geochemical analyses of the overlying Tagrifet Limestone indicate that total organic carbon varies between 0.26% and 1.01%, S2 pyrolysate varies between 1.21 and 2.16 mg/g, and hydrogen indices vary between 195.7 and 465.4. For the overlying Sirt Shale Formation of Late Cretaceous, late Campanian age, total organic carbon varies between 1.04% and 1.51%, S2 pyrolysate varies between 4.65 mg/g and 6.99 mg/g, and the hydrogen indices vary between 151 and 462.9. The study has proven that both the Sirt Shale Formation and the Tagrifet Limestone are good to very good and rich sources for oil in the Maragh Low. High-resolution biostratigraphical interpretations have been integrated and calibrated with thermal maturity determinations (Vitrinite Reflectance (%Ro), Spore Colour Index (SCI), and Tmax (ºC)) and the determined present-day geothermal gradient of 25 ºC/km for the Maragh Low. Interpretation of the generated basin modelling profiles allows a detailed prediction of the timing of maturation of these source horizons and leads to a determination of the amounts of missing section at major unconformities. From the results, the top of the oil window (0.72% Ro) is picked as high as 10,700’ and the base of the oil window (1.35% Ro), assuming a linear trend and by projection, is picked as low as 18,000’ in the Maragh Low. For the Triassic Shale, the early phase of oil generation was in the Late Palaeocene / Early to Middle Eocene, and the main phase of oil generation was in the Middle to Late Eocene. The Early Barremian Shale reached the main phase of oil generation in the Early Oligocene, with late generation being reached in the Middle Miocene. For the Rakb Group section (Rachmat Formation, Tagrifet Limestone and Sirt Shale Formation), the early phase of oil generation started in the Late Eocene, with the main phase of generation between the Early Oligocene and the Early Miocene. From studying the maturity profiles and from regional considerations, it can be predicted that up to 500’ of sediment may have been deposited and eroded at the Sirt Unconformity in the central Maragh Low, while up to 2000’ of sediment may have been deposited and then eroded to the south of the trough.
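
For readers unfamiliar with the Rock-Eval parameters quoted above, the hydrogen index values reported here are consistent with the standard relation between the S2 pyrolysate yield and total organic carbon (a standard definition, not a formula stated by the authors):

\[
\mathrm{HI} = \frac{S_2\ (\mathrm{mg\ HC/g\ rock})}{\mathrm{TOC}\ (\mathrm{wt\%})} \times 100,
\qquad \text{e.g. } \frac{2.15}{1.37}\times 100 \approx 157\ \mathrm{mg\ HC/g\ TOC}
\]

for the lean end of the Triassic Shale data quoted above.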

Keywords: geochemical analysis, source rocks, wells, Eastern Sirt Basin, Maragh Low

Procedia PDF Downloads 410
22353 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria

Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa

Abstract:

In Nigeria, the Federal Government is trying to make sure that everyone has access to enough food that is nutritionally adequate and safe. However, in the southwest of Nigeria, notably in Ondo State, valuable minerals such as oil and gas, bitumen, kaolin, limestone, talc, columbite, tin, gold, coal, and phosphate are abundant. As a result of this mineral presence, some regions of Ondo State are now linked to large quantities of natural radioactivity. In this work, the baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing for the prediction of probable radiological health impacts. To this effect, maize (Zea mays), yam (Dioscorea alata), and cassava (Manihot esculenta) tubers were collected from farmlands in the State because they make up the majority of the diet's nutritional needs. Ondo State was divided into eight zones in order to provide comprehensive coverage of the research region. The maize (Zea mays), yam (Dioscorea alata), and cassava (Manihot esculenta) samples were dried at room temperature until they reached a constant weight. They were pulverized and homogenized, and 250 g of each was packed in a 1-liter Marinelli beaker and kept for 28 days to achieve secular equilibrium. The activity concentrations of Radium-226 (Ra-226), Thorium-232 (Th-232), and Potassium-40 (K-40) in the food samples were determined using gamma-ray spectrometry. First, the Hyper Pure Germanium detector was calibrated using standard radioactive sources. The gamma counting, which lasted 36000 s for each sample, was carried out at the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentrations of Ra-226, Th-232, and K-40 for yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg, and 48.84 ± 3.14 Bq/kg, respectively. The content of the radionuclides in maize gave mean values of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232, and 41.11 ± 2.16 Bq/kg for K-40. The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232, and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values are higher than the annual dose guideline of 0.35 µSv/y for the general public. Therefore, the values obtained in this work show that there is radiological contamination of some foodstuffs consumed in some parts of Ondo State. We recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these constitute important contributors to the internal exposure of man through ingestion, inhalation, or wounds on the body.
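
In general, committed effective doses from ingestion are computed with the conventional relation below (a sketch of the standard approach only; the annual intake values and dose coefficients used by the authors are not given in this abstract):

\[
E = \sum_i C_i \cdot I \cdot e_{\mathrm{ing},i}
\]

where \(C_i\) is the activity concentration (Bq/kg) of radionuclide \(i\) in the foodstuff, \(I\) is the annual consumption of that foodstuff (kg/y), and \(e_{\mathrm{ing},i}\) is the ingestion dose coefficient (Sv/Bq) for that radionuclide.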

Keywords: contamination, environment, radioactivity, radionuclides

Procedia PDF Downloads 108
22352 Novel GPU Approach in Predicting the Directional Trend of the S&P500

Authors: A. J. Regan, F. J. Lidgey, M. Betteridge, P. Georgiou, C. Toumazou, K. Hayatleh, J. R. Dibble

Abstract:

Our goal is the development of an algorithm capable of predicting the directional trend of the Standard and Poor's 500 index (S&P 500). Extensive research has been published attempting to predict different financial markets using historical data, testing on an in-sample and trend basis, with many authors employing excessively complex mathematical techniques. In reviewing and evaluating these in-sample methodologies, it became evident that this approach was unable to achieve sufficiently reliable prediction performance for commercial exploitation. For these reasons, we moved to an out-of-sample strategy based on linear regression analysis of an extensive set of financial data correlated with historical closing prices of the S&P 500. We are pleased to report a directional trend accuracy of greater than 55% for tomorrow (t+1) in predicting the S&P 500.
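
As an illustration only (not the authors' algorithm, predictors, or data), a minimal sketch of out-of-sample directional prediction with walk-forward ordinary least squares might look like this, assuming NumPy arrays of daily closing prices and predictor series:

import numpy as np

def directional_accuracy(close, features, train_len=500):
    """Walk-forward OLS: fit on past data only, predict the sign of the next-day return."""
    returns = np.diff(close) / close[:-1]            # daily returns, length T-1
    X = features[:-1]                                # predictors known on day t, aligned with return t -> t+1
    y = returns
    hits, total = 0, 0
    for t in range(train_len, len(y)):
        # fit OLS on the window [0, t) -- strictly out-of-sample for day t
        Xw = np.column_stack([np.ones(t), X[:t]])
        beta, *_ = np.linalg.lstsq(Xw, y[:t], rcond=None)
        pred = np.array([1.0, *X[t]]) @ beta         # forecast of tomorrow's return
        hits += int(np.sign(pred) == np.sign(y[t]))
        total += 1
    return hits / total

# toy usage with purely synthetic data (no claim about real accuracy)
rng = np.random.default_rng(0)
feat = rng.normal(size=(1500, 3))
close = 100 * np.cumprod(1 + 0.0005 * feat[:, 0] + 0.01 * rng.normal(size=1500))
print(directional_accuracy(close, feat))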

Keywords: financial algorithm, GPU, S&P 500, stock market prediction

Procedia PDF Downloads 352
22351 Impact of Grade Sensitivity on Learning Motivation and Academic Performance

Authors: Salwa Aftab, Sehrish Riaz

Abstract:

The objective of this study was to examine the impact of grade sensitivity on the learning motivation and academic performance of students, to assess the degree of difference that exists among students regarding the causes of their learning motivation, and to gain knowledge about this matter, since it has not been adequately researched. Data collection was primarily done through the academic sector of Pakistan and depended solely upon the responses given by students. A sample size of 208 university students was selected. Both paper and online surveys were used to collect data from respondents. The results of the study revealed that grade sensitivity has a positive relationship with the learning motivation of students and their academic performance. These findings were obtained through systematic correlation and regression analysis.

Keywords: academic performance, correlation, grade sensitivity, learning motivation, regression

Procedia PDF Downloads 406
22350 Optimization of Electric Vehicle (EV) Charging Station Allocation Based on Multiple Data - Taking Nanjing (China) as an Example

Authors: Yue Huang, Yiheng Feng

Abstract:

Due to the global pressure on climate and energy, many countries are vigorously promoting electric vehicles and building public charging facilities. Faced with the supply-demand gap of existing electric vehicle charging stations and unreasonable space usage in China, this paper takes the central city of Nanjing as an example, establishes a site selection model through multivariate data integration, conducts multiple linear regression analysis in SPSS, presents quantitative site selection results, and provides optimization models and suggestions for charging station layout planning.

Keywords: electric vehicle, charging station, allocation optimization, urban mobility, urban infrastructure, Nanjing

Procedia PDF Downloads 97
22349 New Evaluation of the Richness of Cactus (Opuntia) in Active Biomolecules and their Use in Agri-Food, Cosmetic, and Pharmaceutical

Authors: Lazhar Zourgui

Abstract:

Opuntia species are used as local medicinal interventions for chronic diseases and as food sources, mainly because they possess nutritional properties and biological activities. Opuntia ficus-indica (L.) Mill, commonly known as prickly pear or nopal cactus, is the most economically valuable plant in the Cactaceae family worldwide. It is a tropical or subtropical plant native to tropical and subtropical America, which can grow in arid and semi-arid climates. It belongs to the dicotyledonous angiosperm family Cactaceae, of which about 1,500 species of cacti are known. The Opuntia plant is distributed throughout the world and has great economic potential. There are differences in the phytochemical composition of Opuntia species between wild and domesticated species and within the same species. It is an interesting source of plant bioactive compounds. Bioactive compounds are compounds with nutritional benefits and are generally classified into phenolic compounds, non-phenolic compounds, and pigments. Opuntia species are able to grow in almost all climates, for example, arid, temperate, and tropical climates, and their bioactive compound profiles change depending on the species, cultivar, and climatic conditions. Therefore, there is an opportunity for the discovery of new compounds from different Opuntia cultivars. The health benefits of prickly pear are widely demonstrated: there is ample evidence of the health benefits of consuming prickly pear due to its nutrient and vitamin content and its antioxidant properties arising from its bioactive compounds. In addition, prickly pear is used in the treatment of hyperglycemia and high cholesterol levels, and its consumption is linked to a lower incidence of coronary heart disease and certain types of cancer. It may be effective in insulin-independent type 2 diabetes mellitus. Opuntia ficus-indica seed oil has shown potent antioxidant and prophylactic effects. Industrial applications of these bioactive compounds are increasing. In addition to their application in the pharmaceutical industries, bioactive compounds are used in the food industry for the production of nutraceuticals and new food formulations (juices, drinks, jams, sweeteners). In this lecture, I will comprehensively review the phytochemical, nutritional, and bioactive compound composition of the different aerial and underground parts of Opuntia species. The biological activities and applications of Opuntia compounds are also discussed.

Keywords: medicinal plants, cactus, Opuntia, active biomolecules, biological activities

Procedia PDF Downloads 111
22348 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon due to the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification. In addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of the bands: the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting the appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset. All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically when only the first few features are used. The classification accuracies with respect to feature subsets of 10 features, 20 features, 50 features, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results were obtained. These results illustrate that our proposed method can efficiently find feature subsets to improve hyperspectral image classification. One can apply the proposed method to determine the suitable feature subset first, according to specific purposes; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
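
A minimal sketch of the kind of per-feature-bandwidth kernel and feature ranking described above (illustrative only; the paper's genetic-algorithm tuning and its exact separability objective are not reproduced, and the bandwidth values below are placeholders):

import numpy as np

def generalized_rbf_kernel(X, Y, bandwidths):
    """k(x, y) = exp(-sum_d (x_d - y_d)^2 / (2 * sigma_d^2)), with one bandwidth per feature."""
    diff = X[:, None, :] - Y[None, :, :]                   # shape (n, m, d)
    scaled = (diff ** 2) / (2.0 * np.asarray(bandwidths) ** 2)
    return np.exp(-scaled.sum(axis=-1))

def rank_features(bandwidths):
    """Smaller bandwidth -> larger weight 1/sigma_d -> more important band."""
    weights = 1.0 / np.asarray(bandwidths)
    return np.argsort(-weights)                            # band indices, most important first

# toy usage: 5 samples, 4 spectral bands, hand-picked bandwidths
X = np.random.default_rng(1).normal(size=(5, 4))
sigmas = np.array([0.5, 2.0, 1.0, 4.0])
K = generalized_rbf_kernel(X, X, sigmas)
print(K.shape, rank_features(sigmas))                      # (5, 5) [0 2 1 3]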

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 265
22347 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective

Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou

Abstract:

The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and to link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analyzed results, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that consider homogeneity in the number of population and households. Compared to the outcomes of the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and the township unit, respectively, on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research are strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow is developed that consists of the tasks of processing death certificates, geocoding street addresses, quality assurance of geocoded results, automatic calculation of statistical measures, standardized encoding of measures, and geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making.
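
For clarity, the all-cause age-standardized death rates (ASDR) quoted above are conventionally computed by direct standardization (a standard epidemiological definition; the specific standard population used by the authors is not restated here):

\[
\mathrm{ASDR} = \left( \sum_a \frac{d_a}{n_a}\, w_a \right) \times 100{,}000,
\qquad \sum_a w_a = 1,
\]

where \(d_a\) and \(n_a\) are the deaths and population in age group \(a\) of the study unit, and \(w_a\) is the corresponding weight of the standard population.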

Keywords: mortality map, spatial patterns, statistical area, variation

Procedia PDF Downloads 262
22346 Multi Object Tracking for Predictive Collision Avoidance

Authors: Bruk Gebregziabher

Abstract:

The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multi-object tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using Lidar sensor data, with an emphasis on the ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by Lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. The experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This work contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments.
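
A minimal sketch of one ensemble Kalman filter predict/update cycle for a single tracked object with a constant-velocity model (illustrative only; the paper's multi-object association pipeline and Lidar processing are not shown, and all parameter values are placeholders):

import numpy as np

def enkf_step(ensemble, z, dt=0.1, q=0.05, r=0.2, rng=np.random.default_rng(0)):
    """One predict/update cycle. State: [x, y, vx, vy]; measurement z: [x, y]."""
    n, _ = ensemble.shape
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1.0]])
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0.0]])
    # forecast: propagate each ensemble member with process noise
    ens_f = ensemble @ F.T + rng.normal(0, q, size=(n, 4))
    # sample covariance of the forecast ensemble
    P = np.cov(ens_f, rowvar=False)
    R = (r ** 2) * np.eye(2)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    # analysis: update each member against a perturbed observation
    z_pert = z + rng.normal(0, r, size=(n, 2))
    return ens_f + (z_pert - ens_f @ H.T) @ K.T

# toy usage: 100-member ensemble around an initial guess, one Lidar-like position fix
ens = np.random.default_rng(1).normal([0, 0, 1, 0], 0.5, size=(100, 4))
ens = enkf_step(ens, z=np.array([0.12, 0.01]))
print(ens.mean(axis=0))                                    # estimated [x, y, vx, vy]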

Keywords: autonomous mobile robots, multi-object tracking, predictive collision avoidance, ensemble Kalman filter, lidar sensors

Procedia PDF Downloads 87
22345 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 77
22344 Human Absorbed Dose Estimation of a New In-111 Imaging Agent Based on Rat Data

Authors: H. Yousefnia, S. Zolghadri

Abstract:

The measurement of organ radiation exposure dose is one of the most important initial steps in developing a new radiopharmaceutical. In this study, dosimetric studies of a novel agent for SPECT imaging of bone metastases, the 111In-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraethylene phosphonic acid (111In-DOTMP) complex, were carried out to estimate the dose in human organs based on data derived from rats. The radiolabeled complex was prepared with high radiochemical purity under optimal conditions. Biodistribution of the complex was investigated in male Syrian rats at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose estimation of the complex was made based on data derived from the rats using the radiation absorbed dose assessment resource (RADAR) method. The 111In-DOTMP complex was prepared with a high radiochemical purity of >99% (ITLC). The total body effective absorbed dose for 111In-DOTMP was 0.061 mSv/MBq. This value is comparable to other clinically used 111In complexes. The results show that the dose to the critical organs is satisfactory and within the acceptable range for diagnostic nuclear medicine procedures. Generally, 111In-DOTMP has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastases in the near future.
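
In schematic form, RADAR-style organ dose estimates follow the standard relation below (a general sketch; the residence times and dose factors actually derived for 111In-DOTMP are not restated here):

\[
D_{\mathrm{target}} = \sum_{\mathrm{source}} \tilde{N}_{\mathrm{source}} \times \mathrm{DF}(\mathrm{source} \rightarrow \mathrm{target}),
\qquad \tilde{N}_{\mathrm{source}} = \int_0^\infty A_{\mathrm{source}}(t)\, dt,
\]

where \(\tilde{N}\) is the total number of disintegrations in a source organ (obtained from the time-activity data of the biodistribution study) and DF is the dose factor from that source organ to the target organ.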

Keywords: In-111, DOTMP, internal dosimetry, RADAR

Procedia PDF Downloads 409
22343 The Relations between Seismic Results and Groundwater near the Gokpinar Damp Area, Denizli, Turkey

Authors: Mahmud Gungor, Ali Aydin, Erdal Akyol, Suat Tasdelen

Abstract:

Understanding the geotechnical characteristics of near-surface material and the effects of groundwater is a very important problem in site studies. To show the relations between seismic data and groundwater, we selected an area of about 25 km² as the study area and present a detailed survey of seismic data and groundwater depths of the Gokpinar Damp area. Seismic wave velocities (Vp and Vs) are very important parameters that reflect soil properties. The seismic records were acquired using the multichannel analysis of surface waves method in the area of the Gokpinar Damp. Sixty sites in this area have been investigated with survey lines about 60 m in length. The MASW (multichannel analysis of surface waves) method has been used to generate one-dimensional shear wave velocity profiles at these locations. These shear wave velocities are used to estimate the equivalent shear wave velocity in the study area at every 2 and 5 m interval up to a depth of 45 m. Levels of the equivalent shear wave velocity of the soil were used to classify the study area. The results of the study should be considered as components of urban planning and building design of the Gokpinar Damp area, Denizli, and the application and use of these results should be required and enforced by municipal authorities.

Keywords: seismic data, Gokpinar Damp, urban planning, Denizli

Procedia PDF Downloads 292
22342 Efficient Chiller Plant Control Using Modern Reinforcement Learning

Authors: Jingwei Du

Abstract:

The need to optimize air conditioning systems in existing buildings calls for control methods designed with energy efficiency as a primary goal. The majority of current control methods fall into two categories: empirical and model-based. To be effective, the former relies heavily on engineering expertise and the latter requires extensive historical data. Reinforcement Learning (RL), on the other hand, is a model-free approach that explores the environment to obtain an optimal control strategy, often referred to as a "policy". This research adopts Proximal Policy Optimization (PPO) to improve chiller plant control and enable the RL agent to collaborate with experienced engineers. It exploits the fact that, while the industry lacks historical data, abundant operational data is available and allows the agent to learn and evolve safely under human supervision. Thanks to the development of language models, renewed interest in RL has led to modern, online, policy-based RL algorithms such as PPO. This research took inspiration from "alignment", a process that utilizes human feedback to fine-tune a pretrained model in case of unsafe content. The methodology can be summarized in three steps. First, an initial policy model is generated based on minimal prior knowledge. Next, the prepared PPO agent is deployed so that feedback from both the critic model and human experts can be collected for future fine-tuning. Finally, the agent learns and adapts itself to the specific chiller plant, updates the policy model, and is ready for the next iteration. Besides the proposed approach, this study also used traditional RL methods to optimize the same simulated chiller plants for comparison, and it turns out that the proposed method is both safe and effective and needs little to no historical data to start up.
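
As an illustration of the clipped surrogate objective at the core of PPO (a generic sketch, not the chiller-plant agent itself; batch values and hyperparameters are placeholders):

import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate policy loss: maximize min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    ratio = np.exp(logp_new - logp_old)                      # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))          # negated -> loss to minimize

# toy usage on a small batch of logged transitions
rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.1, size=64)
logp_new = logp_old + rng.normal(0.0, 0.05, size=64)
adv = rng.normal(0.0, 1.0, size=64)
print(ppo_clip_loss(logp_new, logp_old, adv))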

Keywords: chiller plant, control methods, energy efficiency, proximal policy optimization, reinforcement learning

Procedia PDF Downloads 34
22341 Modelling of Pervaporation Separation of Butanol from Aqueous Solutions Using Polydimethylsiloxane Mixed Matrix Membranes

Authors: Arian Ebneyamini, Hoda Azimi, Jules Thibaults, F. Handan Tezel

Abstract:

In this study, a modification of the Hennepe model for pervaporation separation of butanol from aqueous solutions using polydimethylsiloxane (PDMS) mixed matrix membranes is introduced and validated with experimental data. The model was compared to the original Hennepe model and to a few other models that are applicable to membrane gas separation processes, such as the Maxwell, Lewis-Nielsen, and Pal models. Theoretical modifications for non-ideal interface morphology have been proposed to predict the permeability in the case of interface voids, interface rigidification, and pore blockage. The model was in good agreement with the experimental data.
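
For reference, the Maxwell model mentioned among the compared permeation models predicts the effective permeability of a mixed matrix membrane as follows (standard form of the Maxwell relation for dilute fillers; the modified Hennepe model proposed in the paper is not reproduced here):

\[
P_{\mathrm{eff}} = P_c \,
\frac{P_d + 2P_c - 2\phi\,(P_c - P_d)}{P_d + 2P_c + \phi\,(P_c - P_d)},
\]

where \(P_c\) and \(P_d\) are the permeabilities of the continuous (PDMS) and dispersed (filler) phases and \(\phi\) is the filler volume fraction.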

Keywords: butanol, PDMS, modeling, pervaporation, mixed matrix membranes

Procedia PDF Downloads 224
22340 The Impact of a Living Wage on the UK Hotel Sector

Authors: Andreas Walmsley, Shobana Partington, Rebecca Armstrong, Harold Goodwin

Abstract:

In the UK, more than 1 in 5 workers earn less than a living wage. The hospitality sector is particularly affected: it has been claimed that two-thirds of its workers earn less than the living wage. The UK Government is set to introduce (April 2016) a national living wage (NLW), which is therefore likely to have a significant impact on the hospitality sector. To date, limited data exist that focus on how hotels are tackling the issue, what stakeholder perceptions are of the change in legislation, and how the NLW may affect working patterns in the sector. This study draws on interviews with a range of key stakeholders, such as hotel HR and general managers as well as industry representatives, to explore these issues within the broader context of responsible tourism. Data collection is still ongoing and is scheduled to be completed by the end of June 2016.

Keywords: hospitality, living wage, responsible tourism, tourism employment

Procedia PDF Downloads 390
22339 Soil Degradati̇on Mapping Using Geographic Information System, Remote Sensing and Laboratory Analysis in the Oum Er Rbia High Basin, Middle Atlas, Morocco

Authors: Aafaf El Jazouli, Ahmed Barakat, Rida Khellouk

Abstract:

Mapping of soil degradation is derived from field observations, laboratory measurements, and remote sensing data, integrated with quantitative methods that map the spatial characteristics of soil properties at different spatial and temporal scales to provide up-to-date information on the field. Since soil salinity, texture, and organic matter play a vital role in assessing topsoil characteristics and soil quality, remote sensing can be considered an effective method for studying these properties. The main objective of this research is to assess soil degradation by combining remote sensing data and laboratory analysis. To achieve this goal, soil samples were taken at 50 locations in the upper basin of the Oum Er Rbia in the Middle Atlas in Morocco. These samples were dried, sieved to 2 mm, and analyzed in the laboratory. Landsat 8 OLI imagery was analyzed using physical or empirical methods to derive soil properties. In addition, remote sensing can serve as a supporting data source. Deterministic (spline and inverse distance weighting) and probabilistic (ordinary kriging and universal kriging) interpolation methods were used to produce maps of each grain size class and soil property using GIS software. As a result, a correlation was found between soil texture and soil organic matter content. This approach, developed in ongoing research, will improve the prospects for the use of remote sensing data for mapping soil degradation in arid and semi-arid environments.
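
A minimal sketch of the inverse distance weighting (IDW) interpolator mentioned above (illustrative only; the study's GIS implementation, power parameter, and kriging variants are not reproduced, and the sample values are made up):

import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: z(q) = sum(w_i * z_i) / sum(w_i), with w_i = 1 / d_i^p."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** power                 # avoid division by zero at sample points
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# toy usage: organic matter (%) at 5 sampled points, interpolated to 2 query points
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
om = np.array([1.2, 2.3, 1.8, 2.9, 2.0])
queries = np.array([[0.25, 0.25], [0.9, 0.9]])
print(idw(pts, om, queries))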

Keywords: soil degradation, GIS, interpolation methods (spline, IDW, kriging), Landsat 8 OLI, Oum Er Rbia high basin

Procedia PDF Downloads 171
22338 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling activity events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for this field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical, and drilling data, and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach and its ability to produce such signatures and predict this NPT event.
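
A minimal sketch of the real-time correlation-and-threshold step described above (purely illustrative; the actual flattening and stacking of offset-well data, the CFS feature selection, and the WITSML interface are not shown, and all names, values, and thresholds are hypothetical):

import numpy as np

def sticking_probability(live_window, signature):
    """Correlate the latest aggregated surface-data window with a pre-determined stuck-pipe signature."""
    r = np.corrcoef(live_window, signature)[0, 1]          # Pearson correlation in [-1, 1]
    return max(r, 0.0)                                     # treat only positive correlation as risk

def check_alert(live_window, signature, threshold=0.75):
    p = sticking_probability(live_window, signature)
    return p, p >= threshold                               # (probability, alert flag)

# toy usage: a synthetic hook-load-like trace compared against a hypothetical formation signature
rng = np.random.default_rng(0)
signature = np.sin(np.linspace(0, 3, 120))                 # pre-determined pilot for one formation
live = signature + 0.2 * rng.normal(size=120)              # current drilling window
print(check_alert(live, signature))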

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 237
22337 Ochratoxin-A in Traditional Meat Products from Croatian Households

Authors: Jelka Pleadin, Nina Kudumija, Ana Vulic, Manuela Zadravec, Tina Lesic, Mario Skrivanko, Irena Perkovic, Nada Vahcic

Abstract:

Products of animal origin, such as meat and meat products, can contribute to human mycotoxin intake, either as a result of indirect transfer from farm animals exposed to naturally contaminated grains and feed (carry-over effects) or through direct contamination with moulds or naturally contaminated spice mixtures used in meat production. Ochratoxin A (OTA) is the mycotoxin considered to be of the utmost importance from the public health standpoint in connection with meat products. The aim of this study was to investigate the occurrence of OTA in different traditional meat products circulating on Croatian markets during 2018, produced by a large number of households situated in the eastern and northern Croatian regions using a variety of technologies. Concentrations of OTA were determined in traditional meat products (n = 70), including dry fermented sausages (Slavonian kulen, Slavonian sausage, Istrian sausage and domestic sausage; n = 28), dry-cured meat products (pancetta, pork rack and ham; n = 22), and cooked sausages (liver sausages, black pudding sausages and pate; n = 20). OTA was analyzed using a quantitative screening immunoassay (ELISA) and confirmed in positive samples (above the limit of detection) by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. Whereas no bacon samples contaminated with OTA were found, OTA levels in dry fermented sausages ranged from 0.22 to 2.17 µg/kg and in dry-cured meat products from 0.47 to 5.35 µg/kg, with 9% of samples positive in total. Besides possible primary contamination of these products arising from improper manufacturing and/or storage conditions, the observed OTA contamination could also be a consequence of secondary contamination resulting from contaminated feed the animals were fed on. OTA levels obtained in cooked sausages ranged from 0.32 to 4.12 µg/kg (5% of samples positive) and could probably be linked to the contaminated raw materials (liver, kidney and spices) used in sausage production. The results show an occasional OTA contamination of traditional meat products, indicating that, to avoid such contamination, these products should be produced and processed in households under standardized and well-controlled conditions. Further investigations should be performed in order to identify mycotoxin-producing moulds on the surface of the products and to define preventative measures that can reduce the contamination of traditional meat products during their production in households and the storage period.

Keywords: Croatian households, ochratoxin-A, traditional cooked sausages, traditional dry-cured meat products

Procedia PDF Downloads 197
22336 Climate Crises: Consumers and Designers Attitude Towards Sustainability of Fast Fashion Products in Nigeria

Authors: Oluwambe Akinmoye

Abstract:

The textile industry in Nigeria has grown rapidly, fueled by rising demand for fast fashion driven by celebrity culture, fashion TV, and the Internet. However, this growth has come at a cost, with the industry contributing to environmental degradation, waste management crises, economic imbalances, and social injustices. This paper examines the attitudes of consumers and designers toward sustainability in the Nigerian textile and fast fashion industry. The study adopts a mixed-methods research design. Both qualitative and quantitative data were drawn from fast fashion consumers and designers. The sample of consumers and designers was determined using random and purposive sampling techniques. Data were elicited from the consumers and designers using questionnaires and focus group discussions, respectively, coupled with comprehensive literature reviews. The collected data were analyzed using descriptive statistics, content analysis, and thematic analysis. Findings indicate that the various strata of Nigerian society pay little attention to fast fashion sustainability. Conversely, designers have started to innovate and adopt sustainable practices by sourcing eco-friendly materials, yet they face significant barriers. The study emphasizes the need for a shift in the industry's approach to sustainability, with greater emphasis on circular economy principles, sustainable materials, and fair labour practices.

Keywords: fast fashion, textiles, sustainability, climate crises, consumers, designers

Procedia PDF Downloads 12
22335 Dark Gravity Confronted with Supernovae, Baryonic Oscillations and Cosmic Microwave Background Data

Authors: Frederic Henry-Couannier

Abstract:

Dark Gravity is a natural extension of general relativity in the presence of a flat, non-dynamical background. Matter and radiation fields from its dark sector, as soon as their gravity dominates over the gravity of the fields on our side, produce a constant-acceleration law for the scale factor. After a brief reminder of the foundations of the Dark Gravity theory, a confrontation with the main cosmological probes is carried out. We show that, remarkably, the sudden transition between the usual matter-dominated decelerated expansion law a(t) ∝ t²/³ and the accelerated expansion law a(t) ∝ t² predicted by the theory should be able to fit not only the main cosmological probes (SN, BAO, CMB, and the age of the oldest stars) but also direct H₀ measurements, with only two free parameters: H₀ and the transition redshift.
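
For completeness, the two expansion regimes quoted above correspond to the following Hubble rates and deceleration parameters (elementary consequences of a(t) ∝ tⁿ, not additional claims from the paper):

\[
a(t) \propto t^{2/3} \;\Rightarrow\; H \equiv \frac{\dot a}{a} = \frac{2}{3t},\quad q \equiv -\frac{\ddot a\, a}{\dot a^{2}} = \tfrac{1}{2};
\qquad
a(t) \propto t^{2} \;\Rightarrow\; H = \frac{2}{t},\quad q = -\tfrac{1}{2},\ \ \ddot a = \mathrm{const}.
\]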

Keywords: anti-gravity, negative energies, time reversal, field discontinuities, dark energy theory

Procedia PDF Downloads 63
22334 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis. They can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares the data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and then the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers, which were deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typology and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. In particular, the identification of the minimum and maximum wind spectrum helps define the choice and performance of wind energy capturing devices that could be installed there, taking into account the location on the roof of a building, the type of wind, the altimetry of the device in relation to the roof levels, and the potential nuisances generated. The method allows the characteristics of wind turbines to be identified in order to maximize their performance in an urban site with turbulent wind.

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 107
22333 Determining a Suitable Maintenance Measure for Gentelligent Components Using Case-Based Reasoning

Authors: Maximilian Winkens, Peter Nyhuis

Abstract:

Components with sensory properties, such as the gentelligent components developed at the Collaborative Research Center 653, offer a new angle on the full utilization of the remaining service life in the case of preventive maintenance. The developed methodology of component-status-driven maintenance analyses the stress data obtained during the component's useful life and, on the basis of this knowledge, assesses the type of maintenance called for in each case. The procedure is derived from the case-based reasoning method and is elucidated in detail. The method's functionality is demonstrated with real-life data obtained during test runs of a racing car prototype.

Keywords: gentelligent component, preventive maintenance, case-based reasoning, sensory

Procedia PDF Downloads 366
22332 Assessing the Incapacity of Indonesian Aviators' Medical Conditions in 2016-2017

Authors: Ferdi Afian, Inne Yuliawati

Abstract:

Background: The shift in causes of death from infectious to non-communicable diseases is also seen in the aviation community in Indonesia. Non-communicable diseases are influenced by several internal risk factors, such as age, lifestyle changes, and the presence of other diseases. These risk factors increase the incidence of heart disease, resulting in the incapacitation of Indonesian aviators, which disrupts flight safety. Method: The study was conducted using secondary data retrieved from medical records at the Indonesian Aviation Health Center for 2016-2017. The subjects in this study were all cases of medical incapacitation among Indonesian aviators. Results: There were 15 cases of Indonesian aviators who experienced medical incapacitation related to heart and lung diseases in 2016-2017. Based on the secondary data contained in the flight medical records at the Aviation Health Center, several factors were found to be related to aviator incapacitation and the resulting inability to carry out flight duties. Conclusion: Medical incapacitation of Indonesian aviators was most strongly associated with a high Body Mass Index (86%) and less strongly with high blood uric acid (26%) and hyperglycemia (26%).

Keywords: incapacity, aviators, flight, Indonesia

Procedia PDF Downloads 139
22331 Neuroblastoma in Children and the Potential Involvement of Viruses in Its Pathogenesis

Authors: Ugo Rovigatti

Abstract:

For at least 40 years, neuroblastoma (NBL) has epitomized our understanding of cancer cellular and molecular biology and its potential applications to novel therapeutic strategies. This includes the discovery of the very first oncogene aberrations and of tumorigenesis suppression by differentiation in the 1980s; the potential role of suppressor genes in the 1990s; the relevance of immunotherapy in the first decade of the millennium; and the discovery of additional mutations by NGS technology in its second decade. Similar discoveries were achieved in the majority of human cancers, and similar therapeutic interventions followed the NBL discoveries. Unfortunately, targeted therapies suggested by specific mutations (such as MYCN amplification, MNA, present in a quarter to a fifth of cases) have not elicited therapeutic successes in aggressive NBL, where the prognosis is still dismal. The reasons appear to be linked to tumor heterogeneity, which is particularly evident in NBL but is also a clear hallmark of aggressive human cancers generally. The new avenue of cancer immunotherapy (CIT) has provided new hope for cancer patients, but its cellular and molecular targets remain unknown. CIT is emblematic of high-risk disease (HR-NBL), since passive anti-GD2 immunotherapy still provides better survival. We recently critically reviewed and evaluated the literature depicting the genomic landscapes of HR-NBL, coming to the qualified conclusion that among hundreds of affected genes, potential targets, or chromosomal sites, none correlated with anti-GD2 sensitivity. A better explanation is provided by the Micro-Foci inducing Virus (MFV) model, which predicts that infection of neuroblasts with MFV, an RNA virus isolated from a cancer cluster (space-time association) of HR-NBL cases, elicits the appearance of MNA and additional genomic aberrations through mechanisms resembling chromothripsis. Neuroblasts infected with low titers of MFV amplified MYCN up to 100-fold and became highly transformed and malignant, causing neuroblastoma in young rat pups of the SD and Fisher-344 strains and larger tumor masses in nu/nu mice. An association with GD2 was discovered, since this glycosphingolipid is also the receptor for the MFV family of viruses (dsRNA viruses). It is concluded that a dsRNA virus, MFV, appears to provide better explanatory mechanisms for the genesis of i) specific genomic aberrations such as MNA; ii) extensive tumor heterogeneity and chromothripsis; and iii) the effects of passive immunotherapy with anti-GD2 monoclonals, and that this and similar models should be further investigated in both pediatric and adult cancers.

Keywords: neuroblastoma, MYCN, amplification, viruses, GD2

Procedia PDF Downloads 102
22330 Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series

Authors: Mohammad H. Fattahi

Abstract:

Chaotic analysis was performed on river flow time series before and after applying wavelet-based de-noising techniques in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used; the gauging stations are located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of the Gaussian kernel algorithm. This step was found to be crucial in preventing vital information such as memory, correlation, and trend from being removed from the time series along with the noise during the de-noising process.
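
As a minimal illustration of wavelet-based de-noising of a hydrological time series, the sketch below uses PyWavelets; the wavelet family, decomposition level, and universal-threshold rule are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(series, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 1-D time series.

    The noise scale is estimated from the finest detail level via the
    median absolute deviation, and the universal threshold
    sigma * sqrt(2 * log(n)) is applied (illustrative choice).
    """
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(len(series)))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: len(series)]

# Synthetic monthly runoff series: seasonal signal plus noise (38 years)
rng = np.random.default_rng(0)
months = np.arange(38 * 12)
runoff = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
clean = wavelet_denoise(runoff)
print(runoff[:5].round(1), clean[:5].round(1))
```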

Keywords: chaotic behavior, wavelet, noise reduction, river flow

Procedia PDF Downloads 472
22329 An Enhanced MEIT Approach for Itemset Mining Using Levelwise Pruning

Authors: Tanvi P. Patel, Warish D. Patel

Abstract:

Association rule mining forms the core of data mining and is one of its best-known methodologies. The objective of mining is to find interesting correlations, frequent patterns, associations, or causal structures among sets of items in transaction databases or other data repositories. Hence, association rule mining is essential for mining patterns and then generating rules from the patterns obtained. For efficient targeted query processing, frequent pattern discovery, and itemset mining, an efficient way to store itemsets is the itemset tree structure named the Memory Efficient Itemset Tree (MEIT). The MEIT is efficient for storing itemsets but takes more time than the traditional itemset tree (IT). The proposed strategy generates maximal frequent itemsets from the memory efficient itemset tree using levelwise pruning: first, pre-pruning of items based on a minimum support count is carried out, followed by itemset tree reconstruction. By working with maximal frequent itemsets, fewer patterns are generated and the tree size is reduced compared to the MEIT. The enhanced memory efficient IT approach proposed here therefore helps to reduce main memory overhead as well as processing time.
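
A minimal sketch of the levelwise pruning idea described above, pre-pruning items below a minimum support count and then keeping only maximal frequent itemsets, is shown below; it uses a flat Apriori-style level search over transactions rather than the paper's itemset tree, purely for illustration.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Levelwise (Apriori-style) search for frequent itemsets."""
    # Pre-pruning: drop items whose individual support is below the threshold.
    item_counts = {}
    for t in transactions:
        for item in t:
            item_counts[item] = item_counts.get(item, 0) + 1
    items = sorted(i for i, c in item_counts.items() if c >= min_support)

    frequent, k = [], 1
    while True:
        # Count support of every size-k candidate built from the pruned items.
        level = []
        for candidate in combinations(items, k):
            support = sum(1 for t in transactions if set(candidate) <= t)
            if support >= min_support:
                level.append(frozenset(candidate))
        if not level:
            break
        frequent.extend(level)
        k += 1
    return frequent

def maximal_only(frequent):
    """Keep itemsets that have no frequent proper superset (maximal frequent itemsets)."""
    return [s for s in frequent if not any(s < other for other in frequent)]

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(transactions, min_support=3)
print(maximal_only(freq))
```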

Keywords: association rule mining, itemset mining, itemset tree, MEIT, maximal frequent pattern

Procedia PDF Downloads 375