Search results for: predictive mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2021

1211 Simon Says: What Should I Study?

Authors: Fonteyne Lot

Abstract:

SIMON (Study capacities and Interest Monitor) is a freely accessible online self-assessment tool that allows secondary education pupils to evaluate their interests and capacities in order to choose a post-secondary major that maximally suits their potential. The tool consists of two broad domains that correspond with two general questions pupils ask: 'What study fields interest me?' and 'Am I capable of succeeding in this field of study?'. The first question is addressed by a RIASEC-type interest inventory that links personal interests to post-secondary majors. Pupils are provided with a personal profile and an overview of majors with their degree of congruence. The output is dynamic: respondents can manipulate their scores, and they can compare their results to the profile of all fields of study. That way, they are stimulated to explore the broad range of majors. To answer whether pupils are capable of succeeding in a preferred major, a battery of tests is provided. This battery comprises a range of factors that are predictive of academic success. Traditional predictors such as (educational) background and cognitive variables (mathematical and verbal skills) are included. Moreover, non-cognitive predictors of academic success (such as motivation, test anxiety, academic self-efficacy, and study skills) are assessed. These non-cognitive factors are generally not included in admission decisions, although research shows they are incrementally predictive of success and are less discriminating. These tests inform pupils of potential causes of success and failure. More importantly, pupils receive their personal chances of success per major. These differential probabilities are validated through the underlying research on the academic success of students. For example, the research has shown that we can identify 22% of the failing students in psychology and educational sciences. In this group, our prediction is 95% accurate.
SIMON leads more students to a suitable major, which in turn improves student success and retention. Apart from these benefits, the instrument grants insight into risk factors of academic failure. It also supports and fosters the development of evidence-based remedial interventions and therefore allows a more efficient use of resources.

Keywords: academic success, online self-assessment, student retention, vocational choice

Procedia PDF Downloads 398
1210 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not carry height information. Hence, exploring solutions that integrate LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model to predict and validate the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment.
The validation results show that for 2018 the mean absolute error (MAE) is 2.93 m for the RFR model and 1.71 m for the CNN model, while for 2021 the MAE is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research to predict large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
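
The per-pixel comparison against the LiDAR-derived reference reduces to a mean absolute error over the two height maps; a minimal sketch of that validation metric (the height values below are hypothetical, not the study's data):

```python
def mean_absolute_error(predicted, reference):
    """MAE between predicted canopy heights and LiDAR-derived reference heights (metres)."""
    if len(predicted) != len(reference):
        raise ValueError("height arrays must have the same length")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Hypothetical per-pixel canopy heights (m) for a handful of 10 m cells
pred = [12.1, 8.4, 15.0, 9.7]
lidar = [10.0, 9.0, 16.5, 9.2]
mae = mean_absolute_error(pred, lidar)
```

In practice the same reduction is applied over every valid pixel of the 2018 and 2021 CHM rasters.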

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 86
1209 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies these imbalanced data, is the overlap of instances between the two classes. It is commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
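
The two OSBNR steps can be sketched in miniature; this is only an illustration, with a naive k-means stand-in for the cluster analysis and an assumed overlap radius, not the paper's actual procedure:

```python
from math import dist

def osbnr_sketch(majority, minority, k=2, radius=1.0, iters=10):
    """Illustrative One Side Behavioral Noise Reduction:
    step 1 clusters the minority class (naive k-means stand-in),
    step 2 drops majority instances that overlap a minority cluster."""
    # -- step 1: cluster the minority class into `k` behavior clusters --
    centroids = list(minority[:k])
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in minority:
            groups[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        centroids = [
            tuple(sum(axis) / len(g) for axis in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    # -- step 2: majority points inside a cluster's radius are behavioral noise --
    return [p for p in majority if all(dist(p, c) > radius for c in centroids)]
```

Majority instances sitting inside a minority behavior cluster are eliminated; all minority instances are kept, hence "one side" noise reduction.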

Keywords: machine learning, imbalanced data, data mining, big data

Procedia PDF Downloads 129
1208 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, with early patent indicators as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
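
SHAP's additive attributions are exact Shapley values; a self-contained illustration on a toy, hypothetical "patent impact" score (not the paper's LightGBM model, where SHAP computes the same quantity efficiently):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution: phi_i is the weighted average of the
    gain from adding feature i over all subsets of the other features."""
    n = len(x)

    def f_masked(subset):
        # features in `subset` take their observed value, the rest the baseline
        return f([x[j] if j in subset else baseline[j] for j in range(n)])

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for r in range(n):
            for s in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi += weight * (f_masked(set(s) | {i}) - f_masked(set(s)))
        phis.append(phi)
    return phis

# Toy, hypothetical score: novelty (z[0]) weighted twice the claim count (z[1])
score = lambda z: 2 * z[0] + z[1]
phi = shapley_values(score, x=[3, 5], baseline=[0, 0])
```

By additivity, the attributions sum to `score(x) - score(baseline)`, which is what makes per-feature contributions to a single prediction interpretable.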

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 39
1207 Clinical Value of 18F-FDG-PET Compared with CT Scan in the Detection of Nodal and Distant Metastasis in Urothelial Carcinoma or Bladder Cancer

Authors: Mohammed Al-Zubaidi, Katherine Ong, Pravin Viswambaram, Steve McCombie, Oliver Oey, Jeremy Ong, Richard Gauci, Ronny Low, Dickon Hayne

Abstract:

Objective: Lymph node involvement along with distant metastasis in a patient with invasive bladder cancer determines disease survival; therefore, it is an essential determinant of therapeutic management and outcome. This retrospective study aims to determine the accuracy of FDG-PET in detecting lymphatic involvement and distant metastatic urothelial cancer compared with conventional CT staging. Method: A retrospective review of 76 patients with UC or BC who underwent surgery or confirmatory biopsy and were staged with both CT and 18F-FDG-PET (up to 8 weeks apart) between 2015 and 2020. Fifty-seven patients (75%) had formal pelvic LN dissection or biopsy of suspicious metastasis. 18F-FDG-PET reports for positive sites were qualitative, depending on SUVmax. On the other hand, LNs enlarged by RECIST 1.1 criteria (>10 mm) and other qualitative findings suggesting metastasis were considered positive on CT. Histopathological findings from surgical specimens or image-guided biopsies were considered the gold standard against which imaging reports were compared. 18F-FDG-avid or enlarged pelvic LNs with surgically proven nodal metastasis were considered true positives. Performance characteristics of 18F-FDG-PET and CT, including sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were calculated. Results: Pelvic LN involvement was confirmed histologically in 10/57 (17.5%) patients. Sensitivity, specificity, PPV, and NPV of CT for detecting pelvic LN metastases were 41.17% (95% CI: 18-67%), 100% (95% CI: 90-100%), 100% (95% CI: 59-100%), and 78.26% (95% CI: 64-89%), respectively. Sensitivity, specificity, PPV, and NPV of 18F-FDG-PET for detecting pelvic LN metastases were 62.5% (95% CI: 35-85%), 83.78% (95% CI: 68-94%), 62.5% (95% CI: 35-85%), and 83.78% (95% CI: 68-94%), respectively. Pre-operative staging with 18F-FDG-PET identified distant metastatic disease in 9/76 (11.8%) patients that was occult on CT.
This retrospective study suggests that 18F-FDG-PET may be more sensitive than CT for detecting pelvic LN metastases. 7/76 (9.2%) patients avoided cystectomy due to 18F-FDG-PET-diagnosed metastases that were not reported on CT. Conclusion: 18F-FDG-PET is more sensitive than CT for pelvic LN metastases and may be used as the standard modality for bladder cancer staging, as it may change treatment by detecting lymph node metastases that are occult on CT. Further research involving randomised controlled trials comparing the diagnostic yield of 18F-FDG-PET and CT in detecting nodal and distant metastasis in UC or BC is warranted to confirm our findings.
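
The four performance characteristics reduce to ratios over the 2x2 confusion table against the histopathological gold standard; a minimal sketch (the counts below are hypothetical, not the study's data):

```python
def performance(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table,
    with histopathology as the gold standard."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased patients
        "specificity": tn / (tn + fp),   # true negatives among disease-free patients
        "ppv": tp / (tp + fp),           # P(disease | test positive)
        "npv": tn / (tn + fn),           # P(no disease | test negative)
    }

# Hypothetical counts for illustration only
m = performance(tp=7, fp=0, fn=10, tn=40)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of nodal disease in the cohort.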

Keywords: FDG PET, CT scan, urothelial cancer, bladder cancer

Procedia PDF Downloads 118
1206 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, and thus their interactions and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites.
For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, and one making use of an embedded semi-analytical approach. The goal is the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 350
1205 Hydrogeophysical Investigations And Mapping of Ingress Channels Along The Blesbokspruit Stream In The East Rand Basin Of The Witwatersrand, South Africa

Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola

Abstract:

Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and catapulted the city to becoming the business and wealth capital of the country. But with the gradual depletion of resources, a stoppage in the extraction of underground water from mines, and other factors relating to the survival of the mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy metal residue, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain, and mitigate the adverse environmental impacts of polluted mine water flowing into local streams, affecting local ecosystems and livelihoods downstream. As part of mitigation efforts, the diagnosis and monitoring of sites with polluted groundwater or surface water has become important. Geophysical surveys, in particular resistivity and magnetic surveys, were selected as some of the most suitable techniques for the investigation of local ingress points along one of the major streams cutting through the Witwatersrand basin, namely the Blesbokspruit, which is found in the eastern part of the basin. The aim of the surveys was to provide information that could assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered an in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model: a low-resistivity overburden (10 to 200 Ω.m), underlain by a moderate-resistivity weathered layer (>300 Ω.m), which sits on a more resistive crystalline bedrock (>500 Ω.m).
Two locations of potential ingress channels were mapped across the two traverses at the site. The magnetic survey conducted at the site mapped a major NE-SW trending regional lineament with a strong magnetic signature, which was modelled to a depth beyond 100 m, with the potential to act as a conduit for the dispersion of stream water away from the stream, as it shares a similar orientation with the potential ingress channels mapped using the resistivity method.

Keywords: electrical resistivity, magnetic survey, Blesbokspruit, ingress

Procedia PDF Downloads 61
1204 A Comparative Study: Comparison of Two Different Fluorescent Stains -Auramine and Rhodamine- with Ehrlich-Ziehl-Neelsen, Kinyoun Staining, and Culture in the Determination of Acid Resistant Bacilli

Authors: Recep Keşli, Hayriye Tokay, Cengiz Demir, İsmail Ceyhan

Abstract:

Objective: In many countries, tuberculosis (TB) is still one of the most important diseases, and it is among the top 10 causes of death worldwide. The early diagnosis of active tuberculosis still depends on the presence of acid resistant bacilli (ARB) in stained smears. In this study, we aimed to investigate the diagnostic performance of Ehrlich-Ziehl-Neelsen (EZN), Kinyoun, and two different fluorescent stains. Methods: The specimens were obtained from patients who presented to the Chest Diseases Departments of Ankara Atatürk Chest Diseases and Thoracic Surgery Training and Research Hospital and of Afyon Kocatepe University, ANS Research and Practice Hospital. The study was carried out in the Medical Microbiology Laboratory, School of Medicine, Afyon Kocatepe University. All non-sterile specimens were homogenized and decontaminated according to the EUCAST instructions. Samples were inoculated onto Löwenstein-Jensen agar (bioMérieux, Marcy l'Étoile, France) and then incubated at 37˚C for 40 days. Four smears were prepared from each specimen. Slides were stained with commercial EZN (BD, Sparks, USA), Kinyoun (SALUBRIS, Istanbul, Turkey), Auramine (SALUBRIS, Istanbul, Turkey), and Rhodamine (SALUBRIS, Istanbul, Turkey) kits. While the EZN- and Kinyoun-stained slides were examined by light microscopy, the Auramine and Rhodamine slides were examined by fluorescence microscopy. Results: A total of 158 respiratory system samples (sputum, bronchoalveolar lavage fluid, etc.) were enrolled in the study. Of the samples processed, 102 were found to be culture positive. The sensitivity, specificity, positive predictive, and negative predictive values were 100%, 67.5%, 73.5%, and 100% for EZN; 100%, 70.9%, 77.4%, and 100% for Kinyoun; 100%, 77.8%, 84.3%, and 100% for Auramine; and 100%, 80%, 86.3%, and 100% for Rhodamine, respectively.
Conclusions: According to our study, the Auramine and Rhodamine staining methods showed the best diagnostic performance among the four investigated staining methods. In conclusion, the fluorochrome staining method may be accepted as the most reliable, rapid, and useful method for the diagnosis of mycobacterial infections.

Keywords: acid resistant bacilli (ARB), auramine, Ehrlich-Ziehl-Neelsen (EZN), Kinyoun, Rhodamine

Procedia PDF Downloads 273
1203 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with the high accuracy required in multiple applications. In the education domain, diagrams can be of varied characteristics, viz. line-based, i.e., geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform based processing, and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures, and they suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
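
Once a segmentation mask is predicted, each diagram can be recovered as a connected component of the mask; a hedged, pure-Python sketch of that post-processing step (the U-Net itself and the mask values are assumed, not the paper's implementation):

```python
from collections import deque

def extract_regions(mask):
    """Connected components of a binary segmentation mask; returns one
    bounding box (top, left, bottom, right) per detected diagram region."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                seen[r][c] = True
                queue = deque([(r, c)])
                top, left, bottom, right = r, c, r, c
                while queue:  # breadth-first flood fill of one region
                    y, x = queue.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

# A tiny hypothetical mask: one 2x2 diagram and one 1x2 diagram
mask = [[1, 1, 0, 0, 0],
        [1, 1, 0, 0, 1],
        [0, 0, 0, 0, 1]]
boxes = extract_regions(mask)
```

Because the mask itself, not a rectangular detection box, defines each region, irregularly shaped diagrams survive this extraction.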

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 132
1202 Modeling of Crack Growth in Railway Axles under Static Loading

Authors: Zellagui Redouane, Bellaouar Ahmed, Lachi Mohammed

Abstract:

Railway axles are essential parts of a train bogie, and their failure creates serious problems for railway transport; in service, these parts show premature deterioration. The aim is to present a predictive model that identifies the probable causes of this premature deterioration. The results are employed to predict fatigue crack growth in the railway axle, and we also present the variation of the stress intensity factor at different positions along the elliptical crack tip. The axle is modeled in SolidWorks and imported into ANSYS.
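
For orientation, the variation of the mode-I stress intensity factor along an elliptical surface crack front is often estimated with a Newman-Raju-style closed form; a sketch of that textbook approximation (valid for a/c ≤ 1), not the paper's finite element model:

```python
from math import pi, sqrt, sin, cos

def k_elliptical(sigma, a, c, phi):
    """Approximate mode-I stress intensity factor around an elliptical
    surface crack under remote tension sigma:
        K_I = sigma * sqrt(pi * a / Q) * f(phi)
    with a = minor semi-axis (depth), c = major semi-axis, and phi the
    parametric angle along the crack front (phi = pi/2 at the deepest point)."""
    q = 1 + 1.464 * (a / c) ** 1.65                       # shape factor Q
    f = (sin(phi) ** 2 + (a / c) ** 2 * cos(phi) ** 2) ** 0.25
    return sigma * sqrt(pi * a / q) * f
```

For a semi-elliptical crack with a < c, K_I peaks at the deepest point of the front, which is why the crack tends to grow toward a more circular shape.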

Keywords: crack growth, static load, railway axle, lifetime

Procedia PDF Downloads 359
1201 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate and effective. The network trust architecture has evolved from trust-untrust to Zero Trust. With Zero Trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols, and security protocols are the cause of major attacks. With the explosion of cyber security threats, such as viruses, worms, rootkits, malware, and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on standard TCP/IP protocols, routing protocols, and security protocols. It thereby forms the basis for the detection of attack classes, applying signature-based matching for known cyberattacks and data mining based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events.
The unsupervised learning algorithm applied to network audit data trails results in unknown-intrusion detection. Association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used to detect and protect against new network anomalies.
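
The association-rule step can be sketched in miniature; the event names and thresholds below are hypothetical, and this single/pair counting pass is a toy stand-in for a full Apriori-style miner, not the system's actual implementation:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.8):
    """Toy association-rule pass over audit-trail records: counts single
    events and event pairs, then emits rules (antecedent, consequent,
    confidence) whose support and confidence clear the thresholds."""
    n = len(transactions)
    single, pair = Counter(), Counter()
    for t in transactions:
        items = sorted(set(t))
        single.update(items)
        pair.update(combinations(items, 2))
    rules = []
    for (a, b), count in pair.items():
        if count / n >= min_support:             # pair is frequent enough
            for ante, cons in ((a, b), (b, a)):
                confidence = count / single[ante]
                if confidence >= min_confidence:
                    rules.append((ante, cons, round(confidence, 2)))
    return rules

# Hypothetical audit-trail events per connection record
records = [
    ["syn_flood", "port_scan"],
    ["syn_flood", "port_scan"],
    ["syn_flood"],
    ["port_scan", "syn_flood"],
]
rules = mine_rules(records)
```

Rules that survive both thresholds could then be translated into firewall policies, which is the prevention path the abstract describes.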

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 224
1200 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can impact the performance of plants; propose improvement solutions; help industrialists in their choice of location for a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to such data to be made explicit. Thus, the system must be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions.
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of predicting the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress, and on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
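
A sketch of how a 7-day level forecast could be turned into decree-style alerts; the threshold names and values are hypothetical illustrations, not taken from actual prefectoral decrees or the study's system:

```python
def stress_alerts(forecast, thresholds):
    """Compare a 7-day river-level forecast (metres) against decree-style
    low-water thresholds and emit the most severe applicable status per day."""
    # Evaluate the most severe (lowest) threshold first
    levels = sorted(thresholds.items(), key=lambda kv: kv[1])
    alerts = []
    for day, height in enumerate(forecast, start=1):
        status = "normal"
        for name, limit in levels:
            if height <= limit:      # river at or below this decree level
                status = name
                break
        alerts.append((day, status))
    return alerts

# Hypothetical forecast (m) and decree levels; lower water means worse status
alerts = stress_alerts(
    forecast=[1.9, 1.4, 0.9],
    thresholds={"vigilance": 2.0, "alert": 1.5, "crisis": 1.0},
)
```

In the described architecture, an alerting component such as Kapacitor would evaluate rules like these against the stored time series and notify decision-makers.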

Keywords: data mining, industry, machine Learning, shortage, water resources

Procedia PDF Downloads 120
1199 Predictive Factors of Exercise Behaviors of Junior High School Students in Chonburi Province

Authors: Tanida Julvanichpong

Abstract:

Exercise has been regarded as a necessary and important aspect of enhancing physical performance and psychological health. Body weight statistics of junior high school students in Chonburi Province exceed the standard, indicating a risk of obesity. To promote exercise among junior high school students in Chonburi Province, essential knowledge concerning the factors influencing exercise is needed. Therefore, this study aims to (1) determine the levels of exercise behavior, exercise behavior in the past, perceived barriers to exercise, perceived benefits of exercise, perceived self-efficacy to exercise, feelings associated with exercise behavior, influence of the family on exercise, influence of friends on exercise, and the perceived influence of the environment on exercise, and (2) examine the predictive ability of each of the above factors, together with personal factors (sex, educational level), for exercise behavior. Pender’s Health Promotion Model was used as a guide for the study. The sample included 652 students in junior high schools in Chonburi Province, selected by multi-stage random sampling. Data were collected using self-administered questionnaires and analyzed using descriptive statistics, Pearson’s product-moment correlation coefficient, eta, and stepwise multiple regression analysis. The research results showed that: 1. Perceived benefits of exercise, influence of teachers, influence of the environment, and feelings associated with exercise behavior were at a high level. Influence of the family on exercise, exercise behavior, exercise behavior in the past, perceived self-efficacy to exercise, and influence of friends were at a moderate level. Perceived barriers to exercise were at a low level. 2.
Exercise behavior was significantly positively related to perceived benefits of exercise, influence of the family on exercise, exercise behavior in the past, perceived self-efficacy to exercise, influence of friends, influence of teachers, influence of the environment, and feelings associated with exercise behavior (p < .01, respectively), and significantly negatively related to educational level and perceived barriers to exercise (p < .01, respectively). Exercise behavior was significantly related to sex (eta = 0.243, p = .000). 3. Exercise behavior in the past and influence of the family on exercise together explained 60.10 percent of the variance in exercise behavior in male students (p < .01). Exercise behavior in the past, perceived self-efficacy to exercise, perceived barriers to exercise, and educational level together explained 52.60 percent of the variance in exercise behavior in female students (p < .01).
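The correlational step above uses Pearson's product-moment coefficient; a minimal stdlib-Python sketch on hypothetical score lists (the variable names and values are illustrative, not data from the study) is:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: perceived-benefit scores vs. weekly exercise sessions.
benefits = [3.1, 4.0, 2.5, 4.8, 3.6]
sessions = [2, 4, 1, 5, 3]
r = pearson_r(benefits, sessions)
```

A stepwise regression would then add predictors like these one at a time, keeping each only if it significantly raises the explained variance.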

Keywords: predictive factors, exercise behaviors, junior high school, Chonburi Province

Procedia PDF Downloads 610
1198 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Effective shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution derived from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches for sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a function of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring becomes a dominant approach for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
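Because the pipeline classifies genus first and then species within the predicted genus, a rough estimate of end-to-end species accuracy is the product of the two stage accuracies. This is a back-of-the-envelope sketch assuming a species call can only be correct when the genus call is correct, an assumption the abstract itself does not state:

```python
def hierarchical_accuracy(genus_acc, species_within_genus_acc):
    """End-to-end accuracy of a two-stage classifier, assuming a species
    prediction can only be correct when the genus prediction is correct."""
    return genus_acc * species_within_genus_acc

# Figures from the abstract: 68% genus accuracy, 85% within-genus accuracy.
end_to_end = hierarchical_accuracy(0.68, 0.85)
```

Under that assumption, base species accuracy would be roughly 58%, consistent with the reported per-pipeline species accuracies of 69-70% after dataset boosting.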

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 111
1197 Customer Acquisition through Time-Aware Marketing Campaign Analysis in Banking Industry

Authors: Harneet Walia, Morteza Zihayat

Abstract:

Customer acquisition has become one of the critical issues for any business in the 21st century, and a healthy customer base is the essential asset of the banking business. Term deposits act as a major source of cheap funds for banks to invest and benefit from interest rate arbitrage. To attract customers, the marketing campaigns at most financial institutions consist of multiple outbound telephone calls, often with more than one contact per customer, which is a very time-consuming process. Therefore, customized direct marketing has become more critical than ever for attracting new clients. As customer acquisition becomes more difficult to achieve, an intelligent, refined contact list is necessary to sell a product smartly. The aim of this research is to increase the effectiveness of campaigns by predicting customers who will most likely subscribe to a fixed deposit and by suggesting the most suitable month to reach out to them. We design a Time-Aware Upsell Prediction Framework (TAUPF) using two different approaches, with the aim of finding the best approach and technique to build the prediction model. TAUPF is implemented using the Upsell Prediction Approach (UPA) and the Clustered Upsell Prediction Approach (CUPA). We also address the data imbalance problem by examining and comparing different methods of sampling (up-sampling and down-sampling). Our results show that building such a model is feasible and profitable for financial institutions. TAUPF can be used in any industry, such as telecom, automobile, or tourism, where either CUPA or UPA holds valid; in our case, CUPA proves more reliable. As shown in our research, one of the most important challenges is to define measures with enough predictive power, as subscription to a fixed deposit depends on highly ambiguous situations and cannot be easily isolated.
While we have shown the practicality of the time-aware upsell prediction model, in which financial institutions benefit from contacting customers in the predicted month, further research is needed to identify the specific time of day. In addition, a further empirical/pilot study on real live customers needs to be conducted to prove the effectiveness of the model in the real world.
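The up-sampling versus down-sampling comparison mentioned above can be sketched in plain Python. This is a minimal illustration with made-up labels, not the paper's implementation:

```python
import random

def upsample(records, label_key="subscribed"):
    """Duplicate minority-class records at random until classes are balanced."""
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = random.choices(minority, k=len(majority) - len(minority))
    return majority + minority + extra

def downsample(records, label_key="subscribed"):
    """Discard majority-class records at random until classes are balanced."""
    pos = [r for r in records if r[label_key]]
    neg = [r for r in records if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    return minority + random.sample(majority, len(minority))

# Hypothetical imbalanced campaign data: few subscribers, many non-subscribers.
data = [{"subscribed": True}] * 10 + [{"subscribed": False}] * 90
balanced_up = upsample(data)
balanced_down = downsample(data)
```

Up-sampling keeps all the information in the majority class at the cost of duplicated minority rows; down-sampling trains faster but discards data, which is presumably why both were compared.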

Keywords: customer acquisition, predictive analysis, targeted marketing, time-aware analysis

Procedia PDF Downloads 118
1196 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. Median survival after diagnosis is 3-11 months without surgical treatment and 13-20 months with surgical treatment, depending on the disease stage; 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid assessment of tumor resectability is therefore the most important aim for patients with pancreatic cancer. The aim of this work is to evaluate the contribution and define the role of ¹⁸F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females, median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, hybrid ¹⁸F-FDG PET/CT was performed as part of standard preoperative staging, in addition to standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy). All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to the TNM classification. Limitations of resectability were local advancement (T4) and presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed; of these, 72 were not indicated for a radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2% and specificity was 90.5%. A false negative finding was seen in 12 patients and a false positive finding in 4 cases; positive predictive value (PPV) was 97.2% and negative predictive value (NPV) 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, NPV 51.9%.
In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity of 97.8%, PPV of 96.9%, and NPV of 87.0%. PET/CT found distant metastases in 12 patients that were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in the preoperative staging of pancreatic cancer. It improves the selection of patients who can benefit from radical surgical procedures and decreases the number of incorrectly indicated operations.
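The primary-lesion figures can be reproduced from the counts reported in the abstract (153 confirmed carcinomas among 195 patients, 12 false negatives, 4 false positives); a quick Python check:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts reconstructed from the abstract: 153 carcinomas, 42 without carcinoma.
tp, fn = 153 - 12, 12   # 141 true positives, 12 false negatives among carcinomas
fp, tn = 4, 42 - 4      # 4 false positives, 38 true negatives among the rest
m = diagnostic_metrics(tp, fp, fn, tn)
```

Rounding `m` to three decimals recovers the reported 92.2% sensitivity, 90.5% specificity, 97.2% PPV, and 76.0% NPV.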

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 246
1195 The Curse of Natural Resources: An Empirical Analysis Applied to the Case of Copper Mining in Zambia

Authors: Chomba Kalunga

Abstract:

Many developing countries have a rich endowment of natural resources. Yet, amidst that wealth, living standards remain poor. At the same time, international copper prices have surged over the last twenty years. This paper presents findings on the causal economic impact of copper mining on living standards in Zambia, a sub-Saharan African country endowed with vast copper deposits, using household data from 1996 to 2010 and exploiting an episode in which copper prices on the international market were rising. Using an instrumental variable approach and controlling for constituency-level and microeconomic factors, the results show a significant impact of copper production on living standards. After splitting the constituencies into those close to and far away from the nearest mine, the results document that constituencies close to the mines benefited significantly from the increase in copper production, compared to their counterparts, through increased levels of employment. Finally, the results are not consistent with the natural resource curse hypothesis; the findings show a positive causal relationship between the presence of natural resources and socioeconomic outcomes in less developed countries, particularly for constituencies close to the mines in Zambia. Some key policy implications follow from the findings. The finding that increased copper production led to an increase in employment suggests that, in Zambia's context, policies that promote local employment may be most beneficial to residents. Government policy can thus help improve living standards, and the government needs to work towards making this impact more substantial.

Keywords: copper prices, local development, mining, natural resources

Procedia PDF Downloads 208
1194 Passive Attenuation of Nitrogen Species at Northern Mine Sites

Authors: Patrick Mueller, Alan Martin, Justin Stockwell, Robert Goldblatt

Abstract:

Elevated concentrations of inorganic nitrogen (N) compounds (nitrate, nitrite, and ammonia) are a ubiquitous feature of mine-influenced drainages due to the leaching of blasting residues and the use of cyanide in the milling of gold ores. For many mines, the management of N is a focus for environmental protection; understanding the factors controlling the speciation and behavior of N is therefore central to effective decision making. In this paper, the passive attenuation of ammonia and nitrite is described for three northern water bodies (two lakes and a tailings pond) influenced by mining activities. In two of the water bodies, inorganic N compounds originate from explosives residues in mine water and waste rock. The third water body is a decommissioned tailings impoundment, with N compounds largely originating from the breakdown of cyanide compounds used in the processing of gold ores. Empirical observations from water quality monitoring indicate that nitrification (the oxidation of ammonia to nitrate) occurs in all three waterbodies, where enrichment of nitrate occurs commensurately with ammonia depletion. The N species conversions in these systems occurred more rapidly than chemical oxidation kinetics permit, indicating that microbially mediated conversion was occurring despite the cool water temperatures. While nitrification of ammonia and nitrite to nitrate was the primary process, in all three waterbodies nitrite was consistently present at approximately 0.5 to 2.0% of total N, even following ammonia depletion. The persistence of trace amounts of nitrite under these conditions suggests the co-occurrence of denitrification processes in the water column and/or underlying substrates. The implications for N management in mine waters are discussed.
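Ammonia depletion with commensurate nitrate enrichment is often summarized with first-order kinetics; a stdlib sketch with an entirely hypothetical initial concentration and rate constant (the paper reports monitoring observations, not a fitted rate model):

```python
import math

def first_order_decay(c0, k, t):
    """Concentration after time t (days) under first-order decay: c = c0 * exp(-k*t)."""
    return c0 * math.exp(-k * t)

# Hypothetical: 5.0 mg/L ammonia-N with a rate constant of 0.05 per day.
times = list(range(0, 61, 10))
ammonia = [first_order_decay(5.0, 0.05, t) for t in times]
# If nitrification is the only sink, nitrate-N produced mirrors ammonia-N lost (as N).
nitrate = [5.0 - c for c in ammonia]
```

The paper's point is precisely that observed conversion outpaced what abiotic rate constants of this kind would allow, implicating microbial mediation.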

Keywords: explosives, mining, nitrification, water

Procedia PDF Downloads 314
1193 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. To increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at the country level has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess these diffusion theories at a local scale. Moreover, because scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provides a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised into Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals.
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former describes an expansion from areas with high concentrations of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors operating at a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 345
1192 Early Predictive Signs for Kasai Procedure Success

Authors: Medan Isaeva, Anna Degtyareva

Abstract:

Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. 
By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who underwent the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure, enabling clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. As a single-center study, its results might not generalize to wider populations due to variations in surgical and postoperative practices. Also, potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered here were not explored, and these could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
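A multifactorial logistic regression of the kind described yields a per-patient success probability via the logistic function; a minimal sketch with entirely hypothetical coefficients (the abstract names the predictors but does not publish fitted values):

```python
import math

def success_probability(coeffs, intercept, features):
    """Logistic model: P(success) = 1 / (1 + exp(-(intercept + sum(coef * x))))."""
    z = intercept + sum(c * x for c, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for the three reported predictors:
# gallbladder detectable (binary, positive effect), post-op direct bilirubin
# (negative effect), post-op cholesterol (negative effect). Units illustrative.
coeffs = [1.2, -0.4, -0.3]
intercept = 0.5
p = success_probability(coeffs, intercept, [1, 2.0, 4.5])
```

The signs mirror the study's findings: a detectable gallbladder raises the predicted probability of success, while elevated post-operative direct bilirubin and cholesterol lower it.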

Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival

Procedia PDF Downloads 51
1191 An Architectural Model for APT Detection

Authors: Nam-Uk Kim, Sung-Hwan Kim, Tai-Myoung Chung

Abstract:

Typical security management systems are not suitable for detecting APT attacks because they cannot draw the big picture from the trivial events reported by individual security solutions. Although SIEM solutions include a security analysis engine for this purpose, their security analysis mechanisms still need to be verified in the academic field. This paper proposes an architectural model for APT detection; we will continue studying correlation analysis mechanisms in future work.

Keywords: advanced persistent threat, anomaly detection, data mining

Procedia PDF Downloads 523
1190 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. To allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high, on the order of several hours of compute time for a few seconds of real time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. To overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. These data are stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior at high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subject to convection and diffusion. We then analyze the concentration profiles, observing the mixing and transport of the gas tracers and gaining insight into their convective and diffusive patterns, and extend the approach towards heat and mass transfer modelling.
Finally, we run rCFD simulations and calibrate their numerical and physical parameters against conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 68
1189 Power Asymmetry and Major Corporate Social Responsibility Projects in Mhondoro-Ngezi District, Zimbabwe

Authors: A. T. Muruviwa

Abstract:

Empirical studies of the current CSR agenda have been dominated by literature from the Global North at the expense of the nations of the Global South where most TNC operations are located. Owing to the limitations of the current discourse, which is dominated by Western ideas such as voluntarism, philanthropy, the business case, and economic gains, scholars have been calling for a new CSR agenda that is South-centred and addresses the needs of developing nations. The development theme has dominated the recent literature as scholars concerned with the relationship between business and society have tried to understand its connection with CSR. Despite a plethora of literature on the roles of corporations in local communities and the impact of CSR initiatives, there is a lack of adequate empirical evidence to help us understand the nexus between CSR and development. For all the claims made about the positive and negative consequences of CSR, there is surprisingly little information about the outcomes it delivers. This study is a response to those claims about the developmental aspect of CSR in developing countries. It offers an empirical basis for assessing the major CSR projects fulfilled by a major mining company, Zimplats, in Mhondoro-Ngezi, Zimbabwe. The neo-liberal idea of capitalism and market domination has empowered TNCs to stamp their authority in developing countries. TNCs have made their mark in developing nations as they assert their global private authority, rivalling or implicitly challenging the state in many functions. This dominance of corporate power raises great concern over tendencies towards environmental, social, and human rights abuses, as well as the question of how to make TNCs increasingly accountable. The hegemonic power of TNCs in developing countries has had a tremendous impact on overall CSR practices.
While TNCs are key drivers of globalization, they may act responsibly mainly in their Global Northern home countries, where legal mechanisms combine with the fear of civil society activism associated with corporate scandals. Using a triangulated approach in which both qualitative and quantitative methods were employed, the study found that most CSR projects in Zimbabwe are dominated and directed by Zimplats because of the power it possesses. Most of the major CSR projects benefit the mining company, as they serve its business plans. What was deduced from the study is that the infrastructural development initiatives by Zimplats confirm that CSR is a tool to advance business obligations. This shows that although proponents of CSR might claim that business has a mandate for social obligations to society, we must not forget the dominant idea that the primary function of CSR is to enhance the firm's profitability.

Keywords: hegemonic power, projects, reciprocity, stakeholders

Procedia PDF Downloads 249
1188 Allele Mining for Rice Sheath Blight Resistance by Whole-Genome Association Mapping in a Tail-End Population

Authors: Naoki Yamamoto, Hidenobu Ozaki, Taiichiro Ookawa, Youming Liu, Kazunori Okada, Aiping Zheng

Abstract:

Rice sheath blight is one of the most destructive fungal diseases of rice. Rice sheath blight resistance is thought to be a polygenic trait. Host-pathogen interactions and secondary metabolites such as lignin and phytoalexins are likely involved in defense against R. solani. However, to our knowledge, it is still unknown how sheath blight resistance can be enhanced in rice breeding. To seek alternative genetic factors that contribute to sheath blight resistance, we mined relevant allelic variations from rice core collections created in Japan. Based on disease lesion length on detached leaf sheaths, we selected 30 varieties each from the top and bottom tails of the core collections to perform genome-wide association mapping. Re-sequencing reads for these varieties were used for calling single nucleotide polymorphisms among the 60 varieties to create a SNP panel, which contained 1,137,131 homozygous variant sites after filtering. Association mapping highlighted a locus on the long arm of chromosome 11 that is co-localized with three sheath blight QTLs, qShB11-2-TX, qShB11, and qSBR-11-2. Based on the localization of the trait-associated alleles, we identified an ankyrin repeat-containing protein gene (ANK-M) as an uncharacterized candidate factor for rice sheath blight resistance. Allelic distributions for ANK-M in the whole rice population supported the reliability of the trait-allele associations. Gene expression characteristics were checked to evaluate the functionality of ANK-M. Since an ANK-M homolog (OsPIANK1) in rice seems to be a basal defense regulator against rice blast and bacterial leaf blight, ANK-M may also play a role in the rice immune system.
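Tail-end association mapping of this kind reduces, per SNP, to testing whether allele counts differ between the resistant and susceptible tails; a stdlib sketch of a 2x2 chi-square statistic on made-up counts (not data from the study):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], e.g. ref/alt allele counts in two phenotype groups."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical SNP: (ref, alt) allele counts among 30 varieties per tail
# (60 alleles each, assuming homozygous sites as in the filtered panel).
resistant = (50, 10)
susceptible = (20, 40)
stat = chi_square_2x2(resistant[0], resistant[1], susceptible[0], susceptible[1])
```

In a real GWAS, the statistic would be computed for each of the ~1.1 million sites and compared against a multiple-testing-corrected threshold to flag loci such as the chromosome 11 hit.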

Keywords: allele mining, GWAS, QTL, rice sheath blight

Procedia PDF Downloads 75
1187 Groundwater Treatment of Thailand's Mae Moh Lignite Mine

Authors: A. Laksanayothin, W. Ariyawong

Abstract:

Mae Moh Lignite Mine is the largest open-pit mine in Thailand. The mine supplies about 16 million tons of coal per year to the power plant, which produces electricity accounting for about 10% of the nation's electric power generation. The mining area of Mae Moh Mine is about 28 km2. At present, the deepest area of the pit is about 280 m below ground level (+40 m MSL), and in the future the depth of the pit can reach 520 m below ground level (-200 m MSL). As the pit is quite large, its stability is seriously important. Furthermore, preliminary and extended drilling in 1989-1996 found a high-pressure aquifer under the pit. As a result, the pressure of the underground water has to be released in order to maintain mine pit stability. A later study by consulting experts found that 3-5 million m3 per year of underground water needs to be de-watered for safe mining. However, the quality of this discharged water must meet the standard. Therefore, a groundwater treatment facility has been implemented, aiming to reduce the naturally occurring arsenic (As) in the discharged water to below the standard limit of 10 ppb. The treatment system consists of a coagulation and filtration process. The main components include rapid mixing tanks, slow mixing tanks, a sedimentation tank, a thickener tank, and a sludge drying bed. The treatment process uses 40% FeCl3 as a coagulant. The FeCl3 adsorbs As(V), forming floc particles that separate from the water as precipitate. The sludge is then dried in the sand bed and disposed of in a secured landfill. Since 2011, the 12,000 m3/day treatment plant has been operated efficiently. The average removal efficiency of the process is about 95%.
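The quoted 95% removal efficiency is simply the fractional drop between influent and effluent concentrations; a quick check with a hypothetical influent level (the abstract states only the 10 ppb limit and the 95% figure, not the influent concentration):

```python
def removal_efficiency(c_in, c_out):
    """Fractional removal across a treatment process: (influent - effluent) / influent."""
    return (c_in - c_out) / c_in

# A hypothetical influent arsenic level of 200 ppb treated down to the
# 10 ppb discharge limit would correspond to exactly 95% removal.
eff = removal_efficiency(200.0, 10.0)
```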

Keywords: arsenic, coagulant, ferric chloride, groundwater, lignite, coal mine

Procedia PDF Downloads 308
1186 Predictive Factors of Prognosis in Acute Stroke Patients Receiving Traditional Chinese Medicine Therapy: A Retrospective Study

Authors: Shaoyi Lu

Abstract:

Background: Traditional Chinese medicine has been used to treat stroke, a major cause of morbidity and mortality. There is, however, no clear agreement about the optimal timing, population, efficacy, and predictive prognostic factors of traditional Chinese medicine as a supplemental therapy. Method: In this retrospective study, we collected data on stroke patients from the Stroke Registry in the Chang Gung Healthcare System (SRICHS). Stroke patients who received a traditional Chinese medicine consultation in the neurology ward of Keelung Chang Gung Memorial Hospital from Jan 2010 to Dec 2014 were enrolled. Clinical profiles, including neurologic deficit, activities of daily living, and other basic characteristics, were analyzed. After propensity score matching, we compared the NIHSS and Barthel index before and after hospitalization, performed subgroup analyses, and adjusted the results with multivariate regression. Results: In total, 115 stroke patients were enrolled, 23 in the experimental group and 92 in the control group. The most important prognostic predictors were the National Institutes of Health Stroke Scale and Barthel index scores immediately before hospitalization. Traditional Chinese medicine intervention had no statistically significant influence on the neurological deficit of acute stroke patients and a mildly negative influence on the daily activity performance of acute hemorrhagic stroke patients. Conclusion: The efficacy of traditional Chinese medicine as a supplemental therapy for acute stroke patients remains controversial; the reasons are likely complex and require further research to comprehend.
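The propensity score matching step in the design above (23 treated vs. 92 controls suggests roughly 1:4 matching) can be sketched as a greedy nearest-neighbour match on precomputed scores. The scores and IDs below are invented for illustration and are not the study's data.

```python
# Greedy 1:ratio nearest-neighbour propensity score matching without
# replacement, applied to made-up scores (not the SRICHS data).

def match_nearest(treated, controls, ratio=4):
    """Match each treated subject to its `ratio` closest controls by score."""
    available = dict(controls)  # control id -> propensity score
    matches = {}
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        # Pick the `ratio` remaining controls with the smallest score gap.
        picked = sorted(available, key=lambda cid: abs(available[cid] - score))[:ratio]
        matches[tid] = picked
        for cid in picked:
            del available[cid]  # matching without replacement
    return matches

treated = {"T1": 0.41, "T2": 0.57}
controls = {f"C{i}": s for i, s in enumerate(
    [0.10, 0.38, 0.40, 0.43, 0.45, 0.55, 0.56, 0.58, 0.60, 0.90])}
print(match_nearest(treated, controls))
```

Real analyses would first estimate the scores with a logistic regression on the clinical covariates; greedy matching is only one of several strategies (optimal and caliper matching are common alternatives).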

Keywords: traditional Chinese medicine, complementary and alternative medicine, stroke, acupuncture

Procedia PDF Downloads 357
1185 The Fundamental Research and Industrial Application on CO₂+O₂ in-situ Leaching Process in China

Authors: Lixin Zhao, Genmao Zhou

Abstract:

Traditional acid in-situ leaching (ISL) is not suitable for sandstone uranium deposits with low permeability and a high content of carbonate minerals, because calcium sulfate precipitates block the pores. Another factor affecting acid ISL is that pyrite in the ore rock reacts with the oxidizing reagent, producing large amounts of sulfate ions that can accelerate calcium sulfate precipitation and consume the oxidizing reagent. Owing to advantages such as lower chemical reagent consumption and less groundwater pollution, the CO₂+O₂ in-situ leaching method has become an important research area in uranium mining. China is the second country in the world to adopt CO₂+O₂ ISL in industrial uranium production, and the process has been successfully developed: the reaction principle, technical process, well field design and drilling engineering, uranium-bearing solution processing, etc. have been fully studied. At the current stage, several uranium mines use the CO₂+O₂ ISL method to extract uranium from the ore-bearing aquifers. This paper summarizes the industrial application and development potential of the CO₂+O₂ ISL method in China. By using CO₂+O₂ neutral leaching technology, the problem of calcium carbonate and calcium sulfate precipitation during uranium mining has been solved. By reasonably regulating the amounts of CO₂ and O₂, the relevant ions and hydro-chemical conditions can be kept within limits that avoid calcium sulfate and calcium carbonate precipitation. On this basis, the requirements of CO₂+O₂ uranium leaching are met to the maximum extent, which not only achieves effective leaching of uranium but also avoids carbonate and sulfate precipitation, enabling industrial development of sandstone-type uranium deposits.
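The reaction principle mentioned above is commonly summarized (in textbook form, not taken from this abstract) as oxidation of uraninite by dissolved O₂ followed by complexation of uranyl with the bicarbonate produced by injected CO₂:

```latex
\mathrm{UO_2 + \tfrac{1}{2}\,O_2 + 2\,HCO_3^- \longrightarrow UO_2(CO_3)_2^{2-} + H_2O}
```

Because leaching runs at near-neutral pH with controlled CO₂ partial pressure, Ca²⁺ stays below calcite and gypsum saturation, which is the mechanism behind the precipitation control described in the abstract.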

Keywords: CO₂+O₂ ISL, industrial production, well field layout, uranium processing

Procedia PDF Downloads 169
1184 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms within the Oak Ridge National Laboratory (ORNL) main campus. 
Our platform is developed using adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
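The sense-decide-actuate loop implied by capabilities (a) and (e) can be sketched as below. All names and the comfort-band logic are illustrative assumptions, not the authors' platform API; a real model predictive controller would solve a constrained optimization over a thermal model rather than this toy feasibility check.

```python
# Minimal sketch of a cyber-physical HVAC control step: take an IoT reading,
# run a toy predictive check over a temperature-drift horizon, and choose a
# setpoint. Names and numbers are hypothetical, not from the ORNL platform.

from dataclasses import dataclass

@dataclass
class Reading:
    zone: str
    temp_c: float    # current zone temperature
    occupied: bool

def predictive_setpoint(r: Reading, horizon_drifts):
    """Pick the lowest (least heating energy) setpoint predicted to keep the
    zone inside its comfort band for every drift in the horizon."""
    lo, hi = (20.0, 24.0) if r.occupied else (16.0, 28.0)
    candidates = [lo + 0.5 * i for i in range(int((hi - lo) / 0.5) + 1)]
    feasible = [s for s in candidates
                if all(lo <= s + d <= hi for d in horizon_drifts)]
    return min(feasible) if feasible else (lo + hi) / 2

r = Reading(zone="FRP-1", temp_c=21.3, occupied=True)
print(predictive_setpoint(r, horizon_drifts=[0.0, 0.4, 0.8]))
```

In a digital twin deployment, the same decision function would run against both the simulated twin and the physical building, which is what lets control algorithms be tuned offline before actuating real equipment.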

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 102
1183 Pregnant Women in Substance Abuse: Transition of Characteristics and Mining of Association from TEDS-A 2011 to 2018

Authors: Md Tareq Ferdous Khan, Shrabanti Mazumder, MB Rao

Abstract:

Background: Substance use during pregnancy is a longstanding public health problem that results in severe consequences for pregnant women and fetuses. Methods: Eight datasets (2011-2018) on pregnant women’s admissions were extracted from TEDS-A. Distributions of sociodemographic, substance abuse behavior, and clinical characteristics were constructed and compared over the years, with trends assessed by the Cochran-Armitage test. Market basket analysis was used to mine associations in polysubstance abuse. Results: Over the years, pregnant woman admissions as a percentage of total and of female admissions remained stable, with total annual admissions ranging from 1.54 million to about 2 million and a female share of 33.30% to 35.61%. Pregnant women aged 21-29, with 12 or more years of education, of white race, unemployed, or holding independent living status were among the most vulnerable. Concerns remain over the significant number of polysubstance users, young age at first use, frequency of daily use, and records of prior admissions (60%). Trends in abused primary substances show a significant rise in heroin (66%) and methamphetamine (46%) over the years, although the latest year shows a considerable downturn. On the other hand, significant decreasing patterns are evident for alcohol (43%), marijuana or hashish (24%), cocaine or crack (23%), other opiates or synthetics (36%), and benzodiazepines (29%). Basket analysis reveals patterns of substance co-occurrence that are consistent over the years. Conclusions: This comprehensive study can serve as a reference for identifying the most vulnerable groups based on their characteristics and for addressing the most hazardous substances based on their evidence of co-occurrence.
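The market basket step above amounts to counting co-occurring substances across admission records and keeping pairs above a support threshold. The sketch below uses invented transactions, not TEDS-A records, and only pairwise itemsets; full basket analysis (e.g., Apriori) also mines larger itemsets and confidence/lift.

```python
# Toy market-basket pass over polysubstance admission records: compute the
# support of each substance pair and keep those above a threshold.

from collections import Counter
from itertools import combinations

def pair_support(transactions, min_support=0.3):
    """Return {(item_a, item_b): support} for pairs meeting min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

admissions = [
    {"heroin", "cocaine"},
    {"heroin", "cocaine", "marijuana"},
    {"alcohol", "marijuana"},
    {"heroin", "cocaine"},
]
print(pair_support(admissions))  # {('cocaine', 'heroin'): 0.75}
```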

Keywords: basket analysis, pregnant women, substance abuse, trend analysis

Procedia PDF Downloads 194
1182 Prediction of Anticancer Potential of Curcumin Nanoparticles by Means of Quasi-QSAR Analysis Using Monte Carlo Method

Authors: Ruchika Goyal, Ashwani Kumar, Sandeep Jain

Abstract:

The experimental data for the anticancer potential of curcumin nanoparticles were compiled from eclectic sources. The optimal descriptors were examined using the Monte Carlo method as implemented in the CORAL SEA software. The statistical quality of the model is as follows: n = 14, R² = 0.6809, Q² = 0.5943, s = 0.175, MAE = 0.114, F = 26 (sub-training set); n = 5, R² = 0.9529, Q² = 0.7982, s = 0.086, MAE = 0.068, F = 61, Av Rm² = 0.7601, ∆R²m = 0.0840, k = 0.9856, and kk = 1.0146 (test set); and n = 5, R² = 0.6075 (validation set). These data can be used to build predictive QSAR models for anticancer activity.
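The fit statistics reported above (R², MAE) are standard regression diagnostics and can be computed for any QSAR model in a few lines. The observed/predicted values below are made up for illustration and have no relation to the curcumin data set.

```python
# Standard QSAR fit diagnostics: coefficient of determination (R²) and
# mean absolute error (MAE), computed on invented example values.

def r_squared(y, y_pred):
    """R² = 1 - SS_res / SS_tot."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_pred))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mae(y, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, y_pred)) / len(y)

y = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(y, y_pred), 4), round(mae(y, y_pred), 4))  # 0.98 0.15
```

Q², by contrast, is the cross-validated analogue of R² (each prediction made with the corresponding observation left out), which is why it is reported separately from R² in the abstract.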

Keywords: anticancer potential, curcumin, model, nanoparticles, optimal descriptors, QSAR

Procedia PDF Downloads 314