Search results for: predictive mining
1365 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career preparation is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which might lead to many other problems such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. This gap can be closed by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need for an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees (D-Tree), and other advanced algorithms are applied to user data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous features that are useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They can also offer an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
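The abstract names KNN among the algorithms used for career prediction. As a loose illustration (not the authors' system), the sketch below classifies a hypothetical applicant with a stdlib-only k-nearest-neighbours vote; all feature names, scores, and career labels are invented.

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Majority label among the k nearest training points (Euclidean)."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    top = [y for _, y in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical rows: [coding, communication, creativity] scores on a 0-10 scale.
train = [[9, 3, 4], [8, 4, 5], [2, 9, 6], [3, 8, 7], [4, 5, 9], [5, 4, 8]]
labels = ["engineer", "engineer", "counselor", "counselor", "designer", "designer"]

print(knn_predict(train, labels, [8, 3, 5]))  # -> engineer
```

In a real system the feature vector would come from the skill, personality, and preference inputs the abstract describes, and k would be tuned on held-out data.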
Procedia PDF Downloads 81
1364 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System
Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli
Abstract:
Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive (corrective) maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly and, therefore, the structure of these systems has become more complex. This situation has made it difficult to make the right maintenance decisions, so determining effective policies plays a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down, or when it is desired to proactively identify and avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system deteriorate over time, and their states are hidden, although the decision maker has partial observability. Several predefined policies under corrective, preventive, and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance.
However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.
Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability
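As a toy illustration of the corrective-versus-proactive comparison described above (not the paper's Dynamic Bayesian Network model), the sketch below simulates a single ageing component under a corrective-only policy and an age-based preventive policy. The hazard law and all cost parameters are invented.

```python
import random

def p_fail(age):
    """Illustrative ageing law: failure hazard grows with component age."""
    return min(0.9, 0.08 * age)

def simulate(policy_age, horizon=100, c_corrective=10.0, c_preventive=3.0,
             runs=2000, seed=0):
    """Average maintenance cost per run over a fixed planning horizon."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        age, cost = 0, 0.0
        for _ in range(horizon):
            age += 1
            if rng.random() < p_fail(age):             # breakdown -> corrective repair
                cost += c_corrective
                age = 0
            elif policy_age is not None and age >= policy_age:
                cost += c_preventive                   # planned preventive replacement
                age = 0
        total += cost
    return total / runs

corrective_only = simulate(policy_age=None)
age_based = simulate(policy_age=3)
print(f"corrective-only: {corrective_only:.1f}  preventive(age=3): {age_based:.1f}")
```

With these invented costs the preventive policy comes out cheaper, matching the paper's observation that proactive policies pay off once corrective repairs are sufficiently more expensive than planned replacements.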
Procedia PDF Downloads 131
1362 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator
Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib
Abstract:
Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance. With morphocinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the ‘Femme Mère Enfant’ hospital (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n=557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphocinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel (Microsoft, Redmond, WA, USA); a Visual Basic for Applications (VBA) program was written for this purpose. The quadratic equation was used to find a value that allows prediction of blastocyst formation: the ‘synthetize’ value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups with a maximal difference in blastocyst formation rate. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage.
The synthetize value corresponds to the value calculated at a time value of 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for coefficient ‘b’, 0.633 (p < 0.001) for coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follows: blastocyst formation rate below the cut-off value versus above it. For coefficient ‘a’ the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); for coefficient ‘b’ it was 0.26 (83.9% versus 63.1%, p < 0.001); for coefficient ‘c’ it was -4.4 (62.2% versus 83.1%, p < 0.001); and for the synthetize value it was 8.89 (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. Coefficient ‘a’ represents the acceleration of cell division, and coefficient ‘b’ the speed of cell division. We hypothesize that coefficient ‘c’ could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originated. These hypotheses should be confirmed by studies analyzing the relationship between regression coefficients and ART parameters.
Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model
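The quadratic model above can be sketched as a least-squares fit. The cell counts and observation times below are invented, not data from the study, and the value at T = 99 h stands in for the abstract's 'synthetize' value.

```python
import numpy as np

# Hypothetical observations: cumulative cell counts over the first three days.
hours = np.array([0, 18, 26, 38, 50, 62, 72], dtype=float)
cells = np.array([1, 2, 3, 4, 6, 8, 10], dtype=float)

a, b, c = np.polyfit(hours, cells, deg=2)   # least-squares fit of N = a*T^2 + b*T + c
synthetize = a * 99**2 + b * 99 + c         # predicted value at T = 99 h
print(f"a={a:.5f}  b={b:.3f}  c={c:.3f}  value at T=99: {synthetize:.2f}")
```

Because the fit tolerates missing time points (any subset of observations can be fitted), it yields coefficients even for embryos with incomplete annotation, which is the property the abstract emphasizes.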
Procedia PDF Downloads 308
1361 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Clustering, a data mining technique, is a subject of active research that assists in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum far inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that efficiency is significantly improved.
Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization
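The abstract does not detail the proposed initialization strategy. As one well-known strategy with the same goal of avoiding poor initial centers, the sketch below implements k-means++-style seeding on a small synthetic dataset; the data and cluster locations are invented.

```python
import random

def kmeans_pp_init(points, k, rng):
    """Pick k initial centers: first uniformly at random, the rest with
    probability proportional to squared distance from the nearest center."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min(sum((p[i] - c[i]) ** 2 for i in range(len(p))) for c in centers)
              for p in points]
        centers.append(rng.choices(points, weights=d2)[0])
    return centers

rng = random.Random(0)
# Synthetic 2-D "expression profiles": three well-separated blobs.
points = [(rng.gauss(mx, 0.3), rng.gauss(my, 0.3))
          for mx, my in ((0, 0), (5, 5), (0, 5)) for _ in range(30)]
centers = kmeans_pp_init(points, 3, rng)
print(len(centers))  # -> 3
```

The squared-distance weighting makes it likely that each initial center lands in a different blob, which is exactly the failure mode of uniform seeding that the paper's improved initialization targets.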
Procedia PDF Downloads 190
1360 Sampling and Chemical Characterization of Particulate Matter in a Platinum Mine
Authors: Juergen Orasche, Vesta Kohlmeier, George C. Dragan, Gert Jakobi, Patricia Forbes, Ralf Zimmermann
Abstract:
Underground mining poses a difficult environment for both man and machines. At more than 1000 meters below the surface of the earth, ores and other mineral resources are still extracted by conventional and motorised mining. Adding to the hazards caused by blasting and stone-chipping, the working conditions are best described by high temperatures of 35-40°C and high humidity, at low air exchange rates. Separate ventilation shafts lead fresh air into a mine, and others lead spent air back to the surface. This is essential for humans and machines working deep underground. Nevertheless, mines are widely ramified, so the air flow rate at the far end of a tunnel can be close to zero. In recent years, conventional mining has been supplemented by mining with heavy diesel machines. These very flat machines, called Load Haul Dump (LHD) vehicles, accelerate and ease work in areas favourable for heavy machines. On the other hand, they emit non-filtered diesel exhaust, which constitutes an occupational hazard for the miners. Combined with low air exchange, high humidity, and inorganic dust from the mining, this leads to 'black smog' underground. This work focuses on air quality in mines employing LHDs. We therefore performed personal sampling (samplers worn by miners during their work), stationary sampling, and aethalometer (MicroAeth MA200, AethLabs) measurements in a platinum mine at around 1000 meters below the surface. We compared areas of high diesel exhaust emission with areas of conventional mining where no diesel machines were operated. For a better assessment of the health risks caused by air pollution, we applied a separated gas-/particle-sampling system whose first denuder section collects intermediate-volatility organic compounds (IVOCs). These multi-channel silicone rubber denuders trap IVOCs while transmitting particles ranging from 10 nm to 1 µm in diameter with an efficiency of nearly 100%.
The second section is a quartz fibre filter collecting particles and adsorbed semi-volatile organic compounds (SVOCs). The third part is a graphitized carbon black adsorber collecting the SVOCs that evaporate from the filter. The compounds collected on these three sections were analyzed in our labs with different thermal desorption techniques coupled with gas chromatography and mass spectrometry (GC-MS). VOCs and IVOCs were measured with a Shimadzu thermal desorption unit (TD20, Shimadzu, Japan) coupled to a GCMS-System QP 2010 Ultra with a quadrupole mass spectrometer (Shimadzu). The GC was equipped with a 30 m BP-20 wax column (0.25 mm ID, 0.25 µm film) from SGE (Australia). Filters were analyzed with in-situ derivatization thermal desorption gas chromatography time-of-flight mass spectrometry (IDTD-GC-TOF-MS). The IDTD unit is a modified GL Sciences Optic 3 system (GL Sciences, Netherlands). The results showed black carbon concentrations, measured with the portable aethalometers, of up to several mg per m³. The organic chemistry was dominated by very high concentrations of alkanes. Typical diesel engine exhaust markers such as alkylated polycyclic aromatic hydrocarbons were detected, as well as typical lubrication oil markers such as hopanes.
Keywords: diesel emission, personal sampling, aethalometer, mining
Procedia PDF Downloads 157
1359 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System
Authors: Imran Dayan, Ashiqul Khan
Abstract:
Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by relying upon aberrations from usual business processes. Business processes are the lifeblood of a company in modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations into their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect any sort of fraudulent activities hidden within the day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud through developing a framework for measuring the risks of fraud through Process Mining techniques and hence finds risky designs and loose ends within these business processes. This framework helps not only in identifying existing cases of fraud in the records of the event log, but also signals the overall riskiness of certain business processes, and hence draws attention for carrying out a redesign of such processes to reduce the chance of future internal fraud while improving internal control within the organisation. 
The research adds value by applying the concepts of Process Mining to data from a modern record of business processes, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control, as well as providing external auditors with a useful tool in cases of suspicion. The research demonstrates its usefulness through case studies of large corporations with complex business processes and an ERP in place.
Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining
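As a minimal sketch of one building block such a framework could use (the paper's actual techniques are not specified in the abstract), the code below flags rare trace variants in a synthetic ERP-style event log as candidates for fraud review; all activity names are invented.

```python
from collections import Counter

# Synthetic event log: each trace is the ordered activities of one purchase case.
log = [
    ("create_po", "approve", "receive_goods", "pay"),
    ("create_po", "approve", "receive_goods", "pay"),
    ("create_po", "approve", "receive_goods", "pay"),
    ("create_po", "approve", "receive_goods", "pay"),
    ("create_po", "pay"),   # payment without approval or goods receipt
]

def risky_variants(traces, threshold=0.25):
    """Variants whose relative frequency falls below the threshold."""
    counts = Counter(traces)
    n = len(traces)
    return [v for v, c in counts.items() if c / n < threshold]

print(risky_variants(log))  # -> [('create_po', 'pay')]
```

Deviations from the dominant process variant, like the payment that skips approval here, are exactly the "aberrations from usual business processes" the abstract identifies as the vehicle of internal fraud.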
Procedia PDF Downloads 336
1358 Predicting Emerging Agricultural Investment Opportunities: The Potential of Structural Evolution Index
Authors: Kwaku Damoah
Abstract:
The agricultural sector is characterized by continuous transformation, driven by factors such as demographic shifts, evolving consumer preferences, climate change, and migration trends. This dynamic environment presents complex challenges for key stakeholders, including farmers, governments, and investors, who must navigate these changes to achieve optimal investment returns. To effectively predict market trends and uncover promising investment opportunities, a systematic, data-driven approach is essential. This paper introduces the Structural Evolution Index (SEI), a machine learning-based methodology designed to analyse long-term trends and forecast the potential of emerging agricultural products for investment. Versatile in application, it evaluates various agricultural metrics such as production, yield, trade, land use, and consumption, providing a comprehensive view of the evolution within agricultural markets. Harnessing data from the UN Food and Agriculture Organization (FAOSTAT), this study demonstrates the SEI's capabilities through comparative exploratory analysis and an evaluation of international trade in agricultural products, focusing on Malaysia and Singapore. The SEI methodology reveals intricate patterns and transitions within the agricultural sector, enabling stakeholders to strategically identify and capitalize on emerging markets. This predictive framework is a powerful tool for decision-makers, offering crucial insights that help anticipate market shifts and align investments with anticipated returns.
Keywords: agricultural investment, algorithm, comparative exploratory analytics, machine learning, market trends, predictive analytics, structural evolution index
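The abstract does not disclose how the SEI is computed. Purely as a loose illustration of ranking products by long-term structural trends, the sketch below scores invented FAOSTAT-like production series by the slope of a least-squares trend line over min-max normalized values; product names and figures are hypothetical.

```python
def trend_score(series):
    """Slope of a least-squares line over the min-max normalized series."""
    lo, hi = min(series), max(series)
    y = [(v - lo) / (hi - lo) for v in series]
    t = list(range(len(y)))
    mt, my = sum(t) / len(t), sum(y) / len(y)
    return (sum((a - mt) * (b - my) for a, b in zip(t, y))
            / sum((a - mt) ** 2 for a in t))

# Hypothetical FAOSTAT-like production series (thousand tonnes per year).
products = {
    "durian": [10, 14, 19, 27, 36, 50],   # rapidly emerging
    "rubber": [100, 98, 97, 95, 96, 94],  # slowly declining
}
ranked = sorted(products, key=lambda p: trend_score(products[p]), reverse=True)
print(ranked)  # -> ['durian', 'rubber']
```

A real SEI would presumably combine several such metrics (production, yield, trade, land use, consumption) rather than a single slope, but the ranking idea is the same.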
Procedia PDF Downloads 63
1357 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm: k-means for numeric datasets and k-modes for categorical datasets. A major problem encountered in data mining applications is clustering the categorical datasets that are so prevalent in real data. One way to cluster categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, we experiment with such an approach, transforming the categorical values into numeric ones using the relative frequency of each modality within its attribute. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
Keywords: clustering, unsupervised learning, pattern recognition, categorical datasets, knowledge discovery, k-means
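The encoding step described above can be sketched directly: each categorical value is replaced by the relative frequency of its modality within its attribute, yielding numeric vectors to which ordinary k-means can then be applied. The toy dataset and attribute names below are invented.

```python
from collections import Counter

# Invented categorical dataset with two attributes.
data = [
    {"color": "red",  "shape": "circle"},
    {"color": "red",  "shape": "square"},
    {"color": "blue", "shape": "circle"},
    {"color": "red",  "shape": "circle"},
]

def frequency_encode(rows):
    """Replace each value by the relative frequency of its modality
    within its attribute, producing numeric vectors for k-means."""
    attrs = list(rows[0])
    freq = {a: Counter(r[a] for r in rows) for a in attrs}
    n = len(rows)
    return [[freq[a][r[a]] / n for a in attrs] for r in rows]

print(frequency_encode(data))
# -> [[0.75, 0.75], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]]
```

Here "red" occurs in 3 of 4 rows, so every "red" becomes 0.75; rare modalities map to small values, giving k-means a meaningful Euclidean geometry over originally unordered categories.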
Procedia PDF Downloads 261
1356 Framework for Integrating Big Data and Thick Data: Understanding Customers Better
Authors: Nikita Valluri, Vatcharaporn Esichaikul
Abstract:
With the popularity of data-driven decision-making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative and qualitative methods rooted in the social sciences, an integrated framework is presented, with a focus on delivering a more robust and efficient approach to data-driven decision-making with respect not only to Big data but also to 'Thick data', a form of qualitative data. In support of this, an example from the retail sector is illustrated, where the framework is put into action to yield insights and leverage business intelligence. An interpretive approach is used to analyze findings from both the quantitative and the qualitative data. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory and ethnomethodology). This study's final goal is to establish the framework as a basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework enhances the traditional data-driven decision-making approach, which depends mainly on quantitative data.
Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data
Procedia PDF Downloads 163
1355 Exploring Gaming-Learning Interaction in MMOG Using Data Mining Methods
Authors: Meng-Tzu Cheng, Louisa Rosenheck, Chen-Yen Lin, Eric Klopfer
Abstract:
The purpose of this research is to explore some of the ways in which gameplay data can be analyzed to yield results that feed back into the learning ecosystem. Back-end data for all users of an MMOG, The Radix Endeavor, were collected, and this study reports analyses of a specific genetics quest using data mining techniques, including the decision tree method. The study revealed different reasons for quest failure between participants who eventually succeeded and those who never succeeded. Regarding in-game tool use, the trait examiner was a key tool in the quest completion process. The decision tree results further showed that a lack of trait examiner usage can be made up for with additional Punnett square uses, displaying multiple pathways to success in this quest. The methods of analysis used in this study and the resulting usage patterns indicate some useful ways that gameplay data can provide insights in two main areas. The first is for game designers, to learn how players are interacting with and learning from their game. The second is for players themselves, as well as their teachers, to get information on how they are progressing through the game and to provide the help they may need based on strategies and misconceptions identified in the data.
Keywords: MMOG, decision tree, genetics, gaming-learning interaction
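As a stdlib-only sketch of the kind of rule a decision tree can surface from gameplay logs (not the study's actual model or data), the code below learns a one-level decision "stump" that best separates quest success; the log records are invented, loosely echoing the trait-examiner/Punnett-square finding.

```python
# Invented records: ((trait_examiner_uses, punnett_square_uses), quest_succeeded).
records = [((3, 0), True), ((2, 1), True), ((0, 4), True),
           ((0, 0), False), ((1, 0), False), ((0, 1), False)]

def best_stump(data):
    """Exhaustively pick the rule 'feature >= threshold predicts success'
    with the fewest misclassifications."""
    best = None
    for f in range(len(data[0][0])):
        for t in sorted({x[f] for x, _ in data}):
            errors = sum((x[f] >= t) != y for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best

errors, feature, threshold = best_stump(records)
print(f"rule: feature {feature} >= {threshold} ({errors} error)")
```

On this toy log the stump picks trait-examiner use, and the one record it misclassifies is the player who succeeded via Punnett squares alone, mirroring the "multiple pathways to success" observation in the abstract.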
Procedia PDF Downloads 358
1354 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example
Authors: Wang Yang
Abstract:
Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been driven by government administration. The city's planning paradigm is primarily affected by spatial structure and human behavior: the urban agglomeration is divided into several groups and centers, and in the face of this effect the city's own laws of development are easily neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining is used to collect and clean data such as business, traffic, and population data. Before data mining, government data were collected by traditional means and then analyzed through city-relationship research, delaying the timeliness of urban research, especially for the contemporary city, where internet-based data update very quickly. Mined points of interest (POIs) serve as a data source reflecting city design, while satellite remote sensing is used as a reference object; conducting city analysis in both directions breaks the administrative paradigm of government and restores urban research. Therefore, the use of data mining in urban analysis is very important. Satellite remote sensing data for Shenzhen from July 2018, measured by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the distribution of the city's heat island. This article acquired and classified data for Shenzhen using data crawler technology. Shenzhen heat island data and points of interest were overlaid and analyzed on a GIS platform to discover the main features of the distribution of functional areas. Shenzhen extends along an east-west axis.
The city's main streets follow the direction of urban development, so the functional areas of the city are likewise distributed in the east-west direction. The urban heat island map mirrors the functional urban areas, and regional POIs correspond to it. The research results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Taking urban POIs as the object of analysis, the distribution of municipal POIs is closely connected with population aggregation, so the distribution of the population corresponds with the distribution of the urban heat island.
Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map
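The claimed POI/heat-island correspondence can be sketched as a simple correlation between gridded POI counts and land surface temperature; the ten grid-cell values below are invented, not Shenzhen measurements.

```python
def pearson(x, y):
    """Pearson correlation coefficient, stdlib-only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

poi_density  = [5, 12, 30, 45, 8, 50, 22, 3, 38, 15]                         # POIs per grid cell
surface_temp = [29.1, 30.2, 32.8, 34.0, 29.8, 34.5, 31.9, 28.7, 33.4, 30.9]  # degC, LST inversion

r = pearson(poi_density, surface_temp)
print(f"Pearson r = {r:.3f}")
```

In a real analysis the grid cells would come from rasterizing the MODIS-derived LST map and counting crawled POIs per cell on the GIS platform.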
Procedia PDF Downloads 105
1353 External Validation of Risk Prediction Score for Candidemia in Critically Ill Patients: A Retrospective Observational Study
Authors: Nurul Mazni Abdullah, Saw Kian Cheah, Raha Abdul Rahman, Qurratu 'Aini Musthafa
Abstract:
Purpose: Candidemia is associated with high mortality in critically ill patients, and early prediction of candidemia is imperative for preemptive antifungal treatment. This study aimed to externally validate the candidemia risk prediction score of Jameran et al. (2021), which is based on the risk factors of acute kidney injury, renal replacement therapy, parenteral nutrition, and multifocal candida colonization. Methods: This single-center, retrospective observational study included all critically ill patients admitted to the intensive care unit (ICU) of a tertiary referral center from January 2018 to December 2023. The study evaluated the performance of the candidemia risk prediction score by analyzing the occurrence of candidemia within the study period. Patients' demographic characteristics, comorbidities, SOFA scores, and ICU outcomes were analyzed. Patients diagnosed with candidemia before ICU admission were excluded. Results: A total of 500 patients were recruited, with 2 dropouts due to incomplete data. Validation analysis showed that the candidemia risk prediction score has a sensitivity of 75.00% (95% CI: 59.66-86.81), a specificity of 65.35% (95% CI: 60.78-69.72), a positive predictive value of 17.28%, and a negative predictive value of 96.44%. The incidence of candidemia was 8.86%, with no significant differences in demographics or comorbidities except higher SOFA scores in the candidemia group. The candidemia group showed significantly longer ICU and hospital length of stay and higher ICU and in-hospital mortality. Conclusion: This study concluded that the candidemia risk prediction score of Jameran et al. (2021) has good sensitivity and a high negative predictive value.
Keywords: candidemia, intensive care, clinical prediction rule, incidence
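The reported metrics can be approximately reproduced from an implied 2x2 table: with roughly 498 analyzed patients and 8.86% incidence (about 44 candidemia cases), the assumed, rounded counts below come close to the abstract's figures. The exact cell counts are not given in the abstract.

```python
def diagnostics(tp, fn, fp, tn):
    """Standard 2x2 screening-test metrics, in percent."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv": 100 * tp / (tp + fp),
        "npv": 100 * tn / (tn + fn),
    }

# Assumed counts (total 498): 33 true positives, 11 false negatives,
# 157 false positives, 297 true negatives.
m = diagnostics(tp=33, fn=11, fp=157, tn=297)
print({k: round(v, 2) for k, v in m.items()})
```

Sensitivity lands exactly at 75.00% and NPV at about 96.4%, matching the abstract's headline numbers; specificity and PPV come out within about 0.1 percentage point of the reported values.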
Procedia PDF Downloads 20
1352 The Role of HPV Status in Patients with Overlapping Grey Zone Cancer in Oral Cavity and Oropharynx
Authors: Yao Song
Abstract:
Objectives: We aimed to explore the clinicodemographic characteristics and prognosis of grey zone squamous cell cancer (GZSCC), located in the overlapping or ambiguous area of the oral cavity and oropharynx, and to identify factors that would improve its differential diagnosis and prognosis. Methods: Information on GZSCC patients in the Surveillance, Epidemiology, and End Results (SEER) database was compared with that of oral cavity (OCSCC) and oropharyngeal (OPSCC) squamous cell carcinoma patients with corresponding HPV status. The Kaplan-Meier method with the log-rank test and multivariate Cox regression analysis were applied to assess associations between clinical characteristics and overall survival (OS). A predictive model integrating age, gender, marital status, HPV status, and staging variables was constructed to classify GZSCC patients into three risk groups and verified internally by 10-fold cross-validation. Results: A total of 3318 GZSCC, 10792 OPSCC, and 6656 OCSCC patients were identified. HPV-positive GZSCC patients had a 5-year OS as good as that of HPV-positive OPSCC (81% vs. 82%). However, the 5-year OS of HPV-negative/unknown GZSCC (43%/42%) was the worst among all groups, indicating that HPV status and the overlapping nature of these tumors are valuable prognostic predictors in GZSCC patients. Compared with the strategy of dividing GZSCC into two groups by HPV status alone, the predictive model integrating more variables could additionally identify a unique high-risk GZSCC group with the lowest OS rate. Conclusions: GZSCC patients have distinct clinical characteristics and prognoses compared with OPSCC and OCSCC; integrating HPV status and other clinical factors can help distinguish GZSCC and predict prognosis.
Keywords: GZSCC, OCSCC, OPSCC, HPV
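The Kaplan-Meier method named above can be sketched with a minimal product-limit estimator on a small invented right-censored sample of (time, event indicator) pairs; this is an illustration, not the study's SEER analysis.

```python
def kaplan_meier(data):
    """Product-limit estimate S(t) at each observed (uncensored) event time.
    Simplified version assuming distinct times (no ties)."""
    at_risk = len(data)
    surv, curve = 1.0, []
    for t, event in sorted(data):   # (time, 1 = event observed, 0 = censored)
        if event:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                # censored subjects leave the risk set too
    return curve

# Invented right-censored follow-up times in months.
sample = [(6, 1), (7, 1), (10, 0), (15, 1), (19, 0), (25, 1)]
for t, s in kaplan_meier(sample):
    print(f"S({t}) = {s:.3f}")
```

Censored subjects shrink the risk set without stepping the curve down, which is why the estimator can use incomplete follow-up; the log-rank test then compares such curves between groups (e.g., by HPV status).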
Procedia PDF Downloads 75
1351 The Study of Power as a Pertinent Motive among Tribal College Students of Assam
Authors: K. P. Gogoi
Abstract:
The current research study investigates a motivational pattern, namely power motivation, among the tribal college students of Assam. The sample consisted of 240 college students (120 tribal and 120 non-tribal) ranging from 18-24 years, with 60 males and 60 females in each group. Attempts were made to include all the prominent tribes of Assam. The Thematic Apperception Test (TAT), the Power Motive Scale, and a semi-structured interview schedule were used to gather information about family types, parental deprivation, parental relations, and social and political belongingness. Mean, standard deviation, and t-test were the statistical measures adopted in this 2x2 factorial design study. In addition, discriminant analysis was carried out to strengthen the predictive validity of the obtained data. TAT scores reveal a significant difference between tribals and non-tribals on power motivation. However, results obtained on gender difference indicate similar scores in both cultures. Cross-validation of the TAT results was done using the Power Motive Scale by T. S. Dapola, which confirms the TAT results on the need for power. Power motivation has been studied in three directions, i.e., coercion, inducement, and restraint. An interesting finding is that tribals score high on coercion, showing a significant difference, whereas on inducement or seduction the non-tribals scored high, showing a significant difference. On restraint, on the other hand, no difference exists between the two cultures. Discriminant analysis was worked out between the variables n-power, coercion, inducement, and restraint. Results indicated that inducement or seduction (.502) is the dependent measure with the most discriminating power between these two cultures.
Keywords: power motivation, tribal, social, political, predictive validity, cross validation, coercion, inducement, restraint
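The group comparisons above rest on the two-sample t-test. As a sketch under stated assumptions (toy motive scores, not the study's data; Welch's unequal-variance form is used here, which the abstract does not specify), the statistic can be computed as:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy power-motive scores for two groups of five participants each.
t_stat = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

The statistic is then referred to a t distribution to obtain the p-value used to declare significance.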
Procedia PDF Downloads 486
1350 Role of Imaging in Predicting the Receptor Positivity Status in Lung Adenocarcinoma: A Chapter in Radiogenomics
Authors: Sonal Sethi, Mukesh Yadav, Abhimanyu Gupta
Abstract:
The upcoming field of radiogenomics has the potential to upgrade the role of imaging in lung cancer management through noninvasive characterization of tumor histology and genetic microenvironment. Receptor positivity, such as epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) genotyping, is critical for treatment in lung adenocarcinoma. As conventional identification of receptor positivity is an invasive procedure, we analyzed features on non-invasive computed tomography (CT) that predict receptor positivity in lung adenocarcinoma. We retrospectively performed a comprehensive study of 77 proven lung adenocarcinoma patients with CT images, EGFR and ALK receptor genotyping, and clinical information. In total, 22/77 patients were receptor-positive (15 had only an EGFR mutation, 6 had an ALK mutation, and 1 had both EGFR and ALK mutations). Various morphological characteristics and metastatic distribution on CT were analyzed along with the clinical information. Univariate and multivariable logistic regression analyses were used. On multivariable logistic regression analysis, we found that spiculated margin, lymphangitic spread, air bronchogram, pleural effusion, and distant metastasis had significant predictive value for receptor mutation status. On univariate analysis, air bronchogram and pleural effusion had significant individual predictive value. Conclusions: Receptor-positive lung cancer has characteristic imaging features compared with non-receptor-positive lung adenocarcinoma. Since CT is routinely used in lung cancer diagnosis, we can predict receptor positivity by a noninvasive technique and follow a more aggressive algorithm for the evaluation of distant metastases as well as for treatment.
Keywords: lung cancer, multidisciplinary cancer care, oncologic imaging, radiobiology
Procedia PDF Downloads 136
1349 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
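Frontal alpha asymmetry is typically computed as the difference of log alpha-band (8-13 Hz) power between homologous right and left frontal electrodes (e.g., F4 and F3). A minimal sketch with synthetic signals, not any study's recording pipeline:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Alpha-band power from the periodogram of one EEG channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

def frontal_alpha_asymmetry(left, right, fs):
    """ln(right alpha power) - ln(left alpha power). Because alpha is
    inversely related to cortical activation, positive values are
    conventionally read as relatively greater left-frontal activity."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Synthetic 2-second segments: the 'right' channel carries twice the
# 10 Hz alpha amplitude of the 'left' channel (i.e., 4x the power).
fs = 256
t = np.arange(0, 2, 1 / fs)
left = np.sin(2 * np.pi * 10 * t)
right = 2 * np.sin(2 * np.pi * 10 * t)
faa = frontal_alpha_asymmetry(left, right, fs)
```

In practice the band power would be averaged over artifact-free epochs rather than taken from a single raw segment, but the asymmetry index itself is just this log-ratio.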
Procedia PDF Downloads 114
1348 Sarcasm Recognition System Using Hybrid Tone-Word Spotting Audio Mining Technique
Authors: Sandhya Baskaran, Hari Kumar Nagabushanam
Abstract:
Sarcasm sentiment recognition is an area of natural language processing that has been actively probed in recent times. Even with the advancements in NLP, typical translations of words and sentences in context fail to provide exact information on the sentiment or emotion of a user. For example, if something bad happens, the statement 'That's just what I need, great! Terrific!' is expressed in a sarcastic tone, which could be misread as a positive sign by any text-based analyzer. In this paper, we present a unique real-time 'word with its tone' spotting technique that provides sentiment analysis for the tone or pitch of a voice in combination with the words being expressed. This hybrid approach increases the probability of identifying a special sentiment like sarcasm, much closer to the real world than mining text or speech individually. The system uses a tone analyzer such as YIN-FFT, which extracts pitch segment-wise and is used in parallel with a speech recognition system. The clustered data is classified for sentiments, and a sarcasm score is determined for each. Our simulations demonstrate an improvement in F-measure of around 12% compared to existing detection techniques, with increased precision and recall.
Keywords: sarcasm recognition, tone-word spotting, natural language processing, pitch analyzer
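The F-measure gain quoted above is the harmonic mean of precision and recall. As a sketch with invented detection counts (the abstract does not report its confusion matrix):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy counts for a sarcasm detector: 80 hits, 20 false alarms, 20 misses.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```

Because F1 is harmonic, it rewards detectors that raise precision and recall together, which is exactly the claim made for the hybrid tone-plus-word approach.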
Procedia PDF Downloads 293
1347 Design and Implementation a Platform for Adaptive Online Learning Based on Fuzzy Logic
Authors: Budoor Al Abid
Abstract:
Educational systems are increasingly provided as open online services, providing guidance and support for individual learners. To adapt a learning system, a proper evaluation must be made. This paper builds the evaluation model Fuzzy C-Means Adaptive System (FCMAS), based on data mining techniques, to assess the difficulty of questions. The following steps are implemented: first, a dataset from an international online learning system called slepemapy.cz is used; the dataset contains over 1,300,000 records with 9 features covering student, question, and answer information with feedback evaluation. Next, normalization is applied as a preprocessing step. Then the FCM clustering algorithm is used to grade the difficulty of the questions. The result is data labeled into three clusters according to the highest membership weight (easy, intermediate, difficult); the FCM algorithm assigns a label to every question one by one. A Random Forest (RF) classifier model is then constructed on the clustered dataset, using 70% of the dataset for training and 30% for testing; the model achieves a 99.9% accuracy rate. This approach improves the adaptive e-learning system because it depends on student behavior and gives more accurate results in the evaluation process than an evaluation system that depends on feedback only.
Keywords: machine learning, adaptive, fuzzy logic, data mining
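Unlike hard clustering, fuzzy c-means assigns each item a membership weight in every cluster and labels it by the highest weight, as the pipeline above does for question difficulty. A generic NumPy sketch (not the authors' implementation; the two-cluster toy data below stand in for the real question features):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means. X: (n, d) data; c: clusters; m: fuzzifier (>1).
    Returns (cluster centers, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated 1-D groups; each point's label is its argmax membership.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

The soft memberships are what make the approach attractive for difficulty grading: borderline questions sit visibly between "easy" and "intermediate" rather than being forced into one bucket.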
Procedia PDF Downloads 197
1346 Comparative Analysis of the Computer Methods' Usage for Calculation of Hydrocarbon Reserves in the Baltic Sea
Authors: Pavel Shcherban, Vlad Golovanov
Abstract:
Nowadays, the depletion of hydrocarbon deposits on the land of the Kaliningrad region leads to active geological exploration and development of oil and natural gas reserves in the southeastern part of the Baltic Sea. LLC 'Lukoil-Kaliningradmorneft' is implementing a comprehensive program for the development of the region's shelf in 2014-2023. Due to the heterogeneity of reservoir rocks in the various open fields, as well as ambiguous conclusions on the contours of deposits, additional geological prospecting and refinement of the recoverable oil reserves are carried out. The key element is the use of an effective technique of computer reserves modeling at the first stage of processing the received data. The following step feeds this information into cluster analysis, which makes it possible to optimize the field development approaches. The article analyzes the effectiveness of various methods for reserves calculation and of computer modelling methods for offshore hydrocarbon fields. Cluster analysis makes it possible to measure the influence of the obtained data on the development of a technical and economic model for mining deposits. A relationship is observed between the accuracy of the calculation of recoverable reserves and the need to modernize existing mining infrastructure, as well as the optimization of the scheme of opening and developing oil deposits.
Keywords: cluster analysis, computer modelling of deposits, correction of the feasibility study, offshore hydrocarbon fields
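Reserve calculations of the kind compared in the article usually start from the volumetric method: gross rock volume times porosity times oil saturation, shrunk by the formation volume factor, then scaled by a recovery factor. A minimal sketch (every reservoir parameter below is invented for illustration, not field data):

```python
def stoiip_m3(area_m2, thickness_m, porosity, sw, bo):
    """Volumetric stock-tank oil initially in place, in m^3.
    sw: water saturation; bo: oil formation volume factor."""
    return area_m2 * thickness_m * porosity * (1.0 - sw) / bo

# Invented parameters for a small offshore block: 2 km^2 area,
# 10 m net pay, 20% porosity, 30% water saturation, Bo = 1.2.
stoiip = stoiip_m3(2e6, 10.0, 0.20, 0.30, 1.2)
recoverable = stoiip * 0.35   # assumed 35% recovery factor
```

The uncertainty the article discusses enters through every factor here: contour ambiguity changes the area, rock heterogeneity changes porosity and saturation, and small shifts compound multiplicatively into large reserve revisions.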
Procedia PDF Downloads 166
1345 Using the Transtheoretical Model to Investigate Stages of Change in Regular Volunteer Service among Seniors in Community
Authors: Pei-Ti Hsu, I-Ju Chen, Jeu-Jung Chen, Cheng-Fen Chang, Shiu-Yan Yang
Abstract:
Taiwan is now an aging society. Research on the elderly should not be confined to caring for seniors but should also focus on ways to improve health and quality of life. Senior citizens who participate in volunteer services could become less lonely, have new growth opportunities, and regain a sense of accomplishment. Thus, the question of how to get the elderly to participate in volunteer service is worth exploring. This study applies the Transtheoretical Model to understand the stages of change in regular volunteer service and voluntary service behaviour among seniors. 1525 adults over the age of 65 from the Renai district of Keelung City were interviewed. The research tool was a self-constructed questionnaire, and individual interviews were conducted to collect data. The data was then processed and analyzed using the IBM SPSS Statistics 20 (Windows version) statistical software program. In the past six months, research subjects averaged 9.92 days of volunteer services. A majority of these elderly individuals had no intention to change their regular volunteer services. We discovered that during the maintenance stage, self-efficacy for volunteer services was higher than during all other stages, but self-perceived barriers were lower during the preparation and action stages. Self-perceived benefits were found to have important predictive power for regular volunteer service behaviors in the earlier stages, and self-efficacy was found to have important predictive power in the later stages. The research results support the conclusion that community nursing staff should group elders based on their stage of change in regular volunteer services and design appropriate behavioral change strategies.
Keywords: seniors, stages of change in regular volunteer services, volunteer service behavior, self-efficacy, self-perceived benefits
Procedia PDF Downloads 426
1344 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.
Keywords: GAN, long short-term memory, synthetic data generation, traffic management
Procedia PDF Downloads 29
1343 The Role of Artificial Intelligence in Criminal Procedure
Authors: Herke Csongor
Abstract:
Artificial intelligence (AI) has been used in the United States of America in the decision-making process of the criminal justice system for decades. In the field of law, including criminal law, AI can provide serious assistance in decision-making in many places. The paper reviews four main areas where AI already plays a role in the criminal justice system and where it is expected to play an increasingly important one. The first area is predictive policing: a number of algorithms are used to prevent the commission of crimes (by predicting potential crime locations or perpetrators). This may include so-called hot-spot analysis, crime linking, and predictive coding. The second area is Big Data analysis: huge data sets are already opaque to human processing and therefore unmanageable by hand. Law is one of the largest producers of digital documents (because not only decisions but nowadays the entire document material is available digitally), and this volume can only be handled with the help of computer programs, on which the development of AI systems can have an increasing impact. The third area is criminal statistical data analysis. The collection of statistical data using traditional methods required enormous human resources. AI is a huge step forward in that it can analyze the database itself: based on the requested aspects, a collection according to any aspect can be available in a few seconds, and the AI itself can analyze the database and indicate if it finds an important connection, either from the point of view of crime prevention or crime detection. Finally, the use of AI during decision-making in both investigative and judicial fields is analyzed in detail.
While some are skeptical about the future role of AI in decision-making, many believe that the question is not whether AI will participate in decision-making, but only when and to what extent it will transform the current decision-making system.
Keywords: artificial intelligence, international criminal cooperation, planning and organizing of the investigation, risk assessment
Procedia PDF Downloads 42
1342 Use of Locally Effective Microorganisms in Conjunction with Biochar to Remediate Mine-Impacted Soils
Authors: Thomas F. Ducey, Kristin M. Trippe, James A. Ippolito, Jeffrey M. Novak, Mark G. Johnson, Gilbert C. Sigua
Abstract:
The Oronogo-Duenweg mining belt (approximately 20 square miles around the Joplin, Missouri area) is a designated United States Environmental Protection Agency Superfund site due to soil and groundwater contaminated with lead by former mining and smelting operations. Over almost a century of mining (from 1848 to the late 1960s), an estimated ten million tons of cadmium-, lead-, and zinc-containing material were deposited on approximately 9,000 acres. At sites that have undergone remediation, in which the O, A, and B horizons have been removed along with the lead contamination, the exposed C horizon remains recalcitrant to revegetation efforts. These sites also suffer from poor soil microbial activity, as measured by soil extracellular enzymatic assays, though 16S ribosomal ribonucleic acid (rRNA) analysis indicates that microbial diversity is equal to that of sites that have avoided mine-related contamination. Soil analysis reveals low soil organic carbon along with high levels of bio-available zinc, which reflect the poor soil fertility conditions and low microbial activity. Our study looked at the use of several materials to restore and remediate these sites, with the goal of improving soil health. The materials, and their purposes for incorporation into the study, were as follows: manure-based biochar for binding zinc and other heavy metals responsible for phytotoxicity; locally sourced biosolids and compost to incorporate organic carbon into the depleted soils; and effective microorganisms harvested from nearby pristine sites to provide a stable community for nutrient cycling in the newly composited 'soil material'. Our results indicate that all four materials used in conjunction provide the greatest benefit to these mine-impacted soils, based on above-ground biomass, microbial biomass, and soil enzymatic activities.
Keywords: locally effective microorganisms, biochar, remediation, reclamation
Procedia PDF Downloads 217
1341 Clinical and Radiological Features of Adenomyosis and Its Histopathological Correlation
Authors: Surabhi Agrawal Kohli, Sunita Gupta, Esha Khanuja, Parul Garg, P. Gupta
Abstract:
Background: Adenomyosis is a common gynaecological condition that affects menstruating women. Uterine enlargement, dysmenorrhoea, and menorrhagia are regarded as the cardinal clinical symptoms of adenomyosis. Classically, it was thought that when adenomyosis is suspected, MRI enables more accurate diagnosis of the disease than ultrasonography. Materials and Methods: 172 subjects who had complaints of HMB, dyspareunia, dysmenorrhea, and chronic pelvic pain were enrolled after informed consent. A detailed history of the enrolled subjects was taken, followed by a clinical examination. These patients were then subjected to TVS, where myometrial echotexture, presence of myometrial cysts, and blurring of the endomyometrial junction were noted. MRI followed, noting junctional zone thickness and myometrial cysts. After hysterectomy, a histopathological diagnosis was obtained. Results: 78 participants were analysed. The mean age was 44.2 years. 43.5% had a parity of 4 or more. Heavy menstrual bleeding (HMB) was present in 97.8% and dysmenorrhea in 93.48% of HPE-positive patients. Transvaginal sonography (TVS) and MRI had a sensitivity of 89.13% and 80.43%, specificity of 90.62% and 84.37%, positive likelihood ratio of 9.51 and 5.15, negative likelihood ratio of 0.12 and 0.23, positive predictive value of 93.18% and 88.1%, negative predictive value of 85.29% and 75%, and a diagnostic accuracy of 89.74% and 82.5%, respectively. Comparison of sensitivity (p=0.289) and specificity (p=0.625) showed no statistically significant difference between TVS and MRI. Conclusion: The prevalence was 30.23%. HMB with dysmenorrhoea and chronic pelvic pain helps in diagnosis. TVS (endomyometrial junction blurring) is both sensitive and specific in diagnosing adenomyosis without the need for an additional diagnostic tool. Both TVS and MRI are equally efficient; however, because of certain additional advantages of TVS over MRI, it may be used as the first choice of imaging.
MRI may be used additionally in difficult cases as well as in patients with existing co-pathologies.
Keywords: adenomyosis, heavy menstrual bleeding, MRI, TVS
Procedia PDF Downloads 498
1340 Solar Power Generation in a Mining Town: A Case Study for Australia
Authors: Ryan Chalk, G. M. Shafiullah
Abstract:
Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting government to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy which can be used to supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site, considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented: supplying slightly over 50% of the town's power requirements during the peak (summer) period results in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and produce results of integrating PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current, and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current, and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters.
Results show that the use of inverters with harmonic filtering reduces the level of harmonic injections to an acceptable level according to Australian standards. Furthermore, configuring inverters to supply both active and reactive power helps mitigate low power factor problems. The use of FACTS devices (SVC and STATCOM) also reduces harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality
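The harmonic-distortion limits referred to above are normally expressed as total harmonic distortion (THD): the RMS sum of the harmonic components relative to the fundamental. A minimal sketch with invented harmonic content (the paper's measured spectra are not given):

```python
import math

def thd(v1_rms, harmonic_rms):
    """Total harmonic distortion as a fraction of the fundamental.
    v1_rms: RMS of the fundamental; harmonic_rms: RMS values of
    the 2nd, 3rd, ... harmonic components."""
    return math.sqrt(sum(v * v for v in harmonic_rms)) / v1_rms

# Illustrative 230 V feeder with assumed 2nd/3rd/4th harmonic voltages.
distortion = thd(230.0, [10.0, 6.0, 3.0])   # about 5.2%
```

A filter that halves each harmonic component quarters its contribution to the sum, which is why inverter-side filtering is so effective at pulling THD back under a standard's limit.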
Procedia PDF Downloads 166
1339 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, recursive partitioning is used as a fundamental tool for classification: it probes the structure of a data set, which allows us to envision decision rules and apply them in classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to groups based on their properties and similarities. Estimating densities is hard, especially in high dimensions with limited data; we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree).
A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
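Tree growing and pruning both revolve around an impurity measure; CART-style trees typically use Gini impurity and choose the split with the largest impurity decrease. A minimal sketch of scoring one candidate split (toy data, not the UCI dataset used in the study):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gain(values, labels, threshold):
    """Impurity decrease from partitioning on `value <= threshold`."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(labels) - weighted

# A perfect split on a toy feature: divergent (1) vs ordinary (0) nodes.
gain = split_gain([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1], threshold=0.5)
```

Cross-validation-based pruning then removes splits whose in-sample gain does not survive on held-out folds, which is the overfitting control the abstract describes.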
Procedia PDF Downloads 160
1338 A Methodology for Developing New Technology Ideas to Avoid Patent Infringement: F-Term Based Patent Analysis
Authors: Kisik Song, Sungjoo Lee
Abstract:
With the growing importance of intangible assets, the impact of patent infringement on the business of a company has become more evident. Accordingly, it is essential for firms to estimate the risk of patent infringement before developing a technology and to create new technology ideas that avoid this risk. Recognizing these needs, several attempts have been made to help develop new technology opportunities, and most of them have focused on identifying emerging vacant technologies through patent analysis. In these studies, the IPC (International Patent Classification) system or keywords from text-mining applied to patent documents were generally used to define vacant technologies. Unlike those studies, this study adopted F-term, which classifies patent documents according to the technical features of the inventions described in them. Since the technical features are analyzed from various perspectives by F-term, F-term provides more detailed information about technologies than IPC and more systematic information than keywords. Therefore, if well utilized, it can be a useful guideline for creating a new technology idea. Recognizing the potential of F-term, this paper aims to suggest a novel approach to developing new technology ideas to avoid patent infringement based on F-term. For this purpose, we first collected data about F-term and then applied text-mining to the descriptions of the classification criteria and attributes. From the text-mining results, we could identify other technologies with technical features similar to those of the existing one, the patented technology. Finally, we compare the technologies and extract the technical features that are commonly used in other technologies but have not been used in the existing one.
These features are presented in terms of “purpose”, “function”, “structure”, “material”, “method”, “processing and operation procedure”, and “control means”, and are thus useful for creating new technology ideas that avoid infringing the patent rights of other companies. Theoretically, this is one of the earliest attempts to adopt F-term for patent analysis; the proposed methodology shows how to best take advantage of F-term and its wealth of technical information. In practice, the proposed methodology can be valuable in the ideation process for successful product and service innovation without infringing the patents of other companies.
Keywords: patent infringement, new technology ideas, patent analysis, F-term
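The step of finding "other technologies with similar technical features" can be sketched as a set-similarity computation over patents represented by their assigned F-term codes, with the difference set surfacing the features absent from the existing patent. This is a generic sketch, not the authors' pipeline, and the F-term codes below are invented for illustration:

```python
import math

def fterm_cosine(fterms_a, fterms_b):
    """Cosine similarity between two patents, each represented as the
    set of F-term codes assigned to it (binary occurrence vectors)."""
    shared = len(fterms_a & fterms_b)
    return shared / math.sqrt(len(fterms_a) * len(fterms_b))

# Hypothetical F-term code sets for the patented technology and a neighbor.
patented = {"5B050-AA01", "5B050-BB02", "5B050-CC03"}
candidate = {"5B050-AA01", "5B050-BB02", "5B050-DD04"}

similarity = fterm_cosine(patented, candidate)
unused_features = candidate - patented   # features absent from the patent
```

High-similarity neighbors confirm the technologies are comparable; the leftover codes in `unused_features` are then the candidate "purpose/function/structure/..." features for new, non-infringing designs.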
Procedia PDF Downloads 270
1337 Expanding Trading Strategies By Studying Sentiment Correlation With Data Mining Techniques
Authors: Ved Kulkarni, Karthik Kini
Abstract:
This experiment aims to understand how the media affects the power markets in the mainland United States and to study the duration of the reaction time between news updates and actual price movements. It takes into account electric utility companies trading on the NYSE and excludes companies that are more politically involved and move with higher sensitivity to politics. The scraper checks for any news related to keywords, which are predefined and stored for each specific company. Based on this, the classifier allocates the effect into five categories: positive, negative, highly optimistic, highly negative, or neutral. The effect on the respective price movement is studied to understand the response time. Based on the response time observed, neural networks would be trained to understand and react to changing market conditions, achieving the best strategy in every market. The stock trader would be day trading in the first phase and making option strategy predictions based on the Black-Scholes model. The expected result is an AI-based system that adjusts trading strategies within the market response time to each price movement.
Keywords: data mining, language processing, artificial neural networks, sentiment analysis
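The five-way allocation step can be sketched as thresholding a continuous polarity score from whatever sentiment model scores the scraped news. The thresholds below are illustrative assumptions only; the abstract does not specify them:

```python
def sentiment_bucket(score):
    """Map a polarity score in [-1, 1] to the study's five categories.
    Threshold values are invented for illustration."""
    if score >= 0.6:
        return "highly optimistic"
    if score >= 0.2:
        return "positive"
    if score > -0.2:
        return "neutral"
    if score > -0.6:
        return "negative"
    return "highly negative"

categories = [sentiment_bucket(s) for s in (0.75, 0.3, 0.0, -0.3, -0.9)]
```

Pairing each bucketed news item with a timestamp and the subsequent tick data is what lets the study measure the lag between publication and the price reaction.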
Procedia PDF Downloads 20
1336 Phytoremediation of artisanal gold mine tailings - Potential of Chrysopogon zizanioides and Andropogon gayanus in the Sahelian climate
Authors: Yamma Rose, Kone Martine, Yonli Arsène, Wanko Ngnien Adrien
Abstract:
Pollution of soils and, consequently, of water resources by micropollutants from gold mine tailings constitutes a major threat in developing countries due to the lack of waste treatment. Phytoremediation is an alternative for extracting or trapping micropollutants from soils contaminated by mining residues. The potential of Chrysopogon zizanioides (an acclimated plant) and Andropogon gayanus (a native plant) to accumulate arsenic (As), mercury (Hg), iron (Fe) and zinc (Zn) was studied at an artisanal gold mine in Ouagadougou, Burkina Faso. The phytoremediation effectiveness of the two plant species was assessed in 75 pots of 30 liters each, containing mining residues from the artisanal gold processing site in the rural commune of Nimbrogo. The experiments covered three modalities arranged in a randomized manner: Tn - planted unpolluted soils; To - unplanted mine tailings; and Tp - planted mine tailings. The pots were amended quarterly with compost to provide nutrients to the plants. The phytoremediation assessment compared the growth, biomass and capacity of these two herbaceous plants to extract or trap Hg, Fe, Zn and As from mining residues in a controlled environment. Analysis of the plants cultivated in mine tailings shows a significantly higher relative growth index for A. gayanus (34.38%) than for C. zizanioides (20.37%). Biomass analysis, however, reveals that C. zizanioides develops greater foliage and root growth than A. gayanus. After a culture time of 6 months, the results showed that both C. zizanioides and A. gayanus have the potential to accumulate Hg, Fe, Zn and As. Root biomass accumulates metals more significantly than aboveground biomass in both herbaceous species. Although the bioaccumulation factor (BCF) values are low (<1) for both plants, the removal efficiencies of Hg, Fe, Zn and As reach 45.13%, 42.26%, 21.5% and 2.87%, respectively, over 24 weeks of culture with C. zizanioides. Pots grown with A. gayanus give removal rates of 43.55%, 41.52%, 2.87% and 1.35% for Fe, Zn, Hg and As, respectively. The results indicate that both plant species have strong phytoremediation potential, although that of A. gayanus is somewhat lower than that of C. zizanioides.
Keywords: artisanal gold mine tailings, Andropogon gayanus, Chrysopogon zizanioides, phytoremediation
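The two quantities reported in the abstract, the bioaccumulation factor (BCF) and the removal efficiency, can be computed from standard definitions; the abstract does not give its exact formulas, so the sketch below uses the conventional ones, and the sample concentrations are hypothetical:

```python
def bioconcentration_factor(c_plant_mg_kg: float, c_soil_mg_kg: float) -> float:
    """BCF: metal concentration in plant tissue divided by that in the tailings.

    A value below 1 (as reported for both species) means the plant tissue is
    less concentrated in the metal than the substrate it grows in.
    """
    return c_plant_mg_kg / c_soil_mg_kg


def removal_efficiency(c_initial_mg_kg: float, c_final_mg_kg: float) -> float:
    """Percentage of a metal removed from the tailings over the culture period."""
    return 100.0 * (c_initial_mg_kg - c_final_mg_kg) / c_initial_mg_kg


# Hypothetical example: Hg drops from 1.00 to 0.5487 mg/kg over 24 weeks,
# reproducing the 45.13% figure reported for C. zizanioides.
hg_removal = removal_efficiency(1.00, 0.5487)
```

Note that a low BCF together with a substantial removal efficiency is consistent with the abstract's finding that metals are largely trapped in the root zone rather than translocated to aboveground biomass.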
Procedia PDF Downloads 65