Search results for: data mining techniques
27864 The Perspective on Data Collection Instruments for Younger Learners
Authors: Hatice Kübra Koç
Abstract:
Collecting reliable and valid data is one of the most significant issues for academic researchers. However, the procedure is not the same for all target groups: when collecting data from teenagers, young adults, or adults, researchers can use common data collection tools such as questionnaires, interviews, and semi-structured interviews, yet for young and very young learners, reliable and valid data collection tools cannot be easily designed or applied. In this study, common data collection tools are first examined for the ‘very young’ and ‘young learner’ participant groups, since the quality and efficiency of an academic study is largely determined by valid and correct data collection and data analysis procedures. Secondly, two different data collection instruments for very young and young learners are presented and their efficacy is discussed. Finally, a suggested data collection tool – a performance-based questionnaire – developed specifically for ‘very young’ and ‘young learner’ participant groups in the field of teaching English to young learners as a foreign language is presented. The design procedure and the suggested items/factors for the proposed tool are presented at the end of the study to help researchers who work with young and very young learners. Keywords: data collection instruments, performance-based questionnaire, young learners, very young learners
Procedia PDF Downloads 93
27863 The Impact of China’s Waste Import Ban on the Waste Mining Economy in East Asia
Authors: Michael Picard
Abstract:
This proposal offers to shed light on the changing legal geography of the global waste economy. Global waste recycling has become a multi-billion-dollar industry. NASDAQ predicts the emergence of a worldwide 1,296G$ waste management market between 2017 and 2022. Underlining this evolution, a new generation of preferential waste-trade agreements has emerged in the Pacific. In the last decade, Japan has concluded a series of bilateral treaties with Asian countries, and most recently with China. An agreement between Tokyo and Beijing was formalized on 7 May 2008, which forged an economic partnership on waste transfer and mining. The agreement set up International Recycling Zones, where certified recycling plants in China process industrial waste imported from Japan. Under the joint venture, Chinese companies salvage the embedded value from Japanese industrial discards, reprocess them and send them back to Japanese manufacturers, such as Mitsubishi and Panasonic. This circular economy is designed to convert surplus garbage into surplus value. Ever since the opening of Sino-Japanese eco-parks, millions of tons of plastic and e-waste have been exported from Japan to China every year. Yet, quite unexpectedly, China has recently closed its waste market to imports, jeopardizing Japan’s billion-dollar exports to China. China notified the WTO that, by the end of 2017, it would no longer accept imports of plastics and certain metals. Given China’s share of Japanese waste exports, a complete closure of China’s market would require Japan to find new uses for its recyclable industrial trash generated domestically every year. It remains to be seen how China will effectively implement its ban on waste imports, considering the economic interests at stake. At this stage, what remains to be clarified is whether China's ban on waste imports will negatively affect the recycling trade between Japan and China. What is clear, though, is the rapid transformation in the legal geography of waste mining in East-Asia. For decades, East-Asian waste trade had been tied up in an ‘ecologically unequal exchange’ between the Japanese core and the Chinese periphery. This global unequal waste distribution could be measured by the Environmental Stringency Index, which revealed that waste regulation was 39% weaker in the Global South than in Japan. This explains why Japan could legally export its hazardous plastic and electronic discards to China. The asymmetric flow of hazardous waste between Japan and China carried the colonial heritage of international law. The legal geography of waste distribution was closely associated to the imperial construction of an ecological trade imbalance between the Japanese source and the Chinese sink. Thus, China’s recent decision to ban hazardous waste imports is a sign of a broader ecological shift. As a global economic superpower, China announced to the world it would no longer be the planet’s junkyard. The policy change will have profound consequences on the global circulation of waste, re-routing global waste towards countries south of China, such as Vietnam and Malaysia. By the time the Berlin Conference takes place in May 2018, the presentation will be able to assess more accurately the effect of the Chinese ban on the transboundary movement of waste in Asia.Keywords: Asia, ecological unequal exchange, global waste trade, legal geography
Procedia PDF Downloads 210
27862 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational alternatives. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, owing to the complexity of the process, as explained by the Levinthal paradox. An alternative is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for this task. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, SVM-based prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method, namely the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method follows the same training and testing process that Huang used to validate his method, which comprises the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method. Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
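A minimal sketch of the evaluation protocol described above follows: three-fold cross-validation of an ANN classifier scored with Q3 accuracy (the fraction of residues correctly assigned to helix, strand, or coil). The sliding-window encoding and the randomly generated data stand in for the CB513 features, which are not specified in the abstract, so this is an illustration of the protocol rather than the authors' model.

```python
# Three-fold cross-validation of an ANN secondary-structure classifier with Q3 scoring.
# The 17-residue window encoding and random data are placeholders (assumptions).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 17 * 20))   # e.g. 17-residue window, one-hot over 20 amino acids
y = rng.integers(0, 3, size=1500)      # 0 = helix (H), 1 = strand (E), 2 = coil (C)

q3_scores = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    model = MLPClassifier(hidden_layer_sizes=(75,), max_iter=300, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    q3_scores.append(np.mean(pred == y[test_idx]))  # Q3 = fraction of residues correctly classified

print(f"Mean Q3 over 3 folds: {np.mean(q3_scores):.3f}")
```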
Procedia PDF Downloads 621
27861 Pediatric Health Nursing Research in Jordan: Evaluating the State of Knowledge and Determining Future Research Direction
Authors: Inaam Khalaf, Nadin M. Abdel Razeq, Hamza Alduraidi, Suhaila Halasa, Omayyah S. Nassar, Eman Al-Horani, Jumana Shehadeh, Anna Talal
Abstract:
Background: Nursing researchers are responsible for generating knowledge that corresponds to national and global research priorities in order to promote, restore, and maintain the health of individuals and societies. The objectives of this scoping review of Jordanian literature are to assess the existing research on pediatric nursing in terms of evolution, authorship and collaborations, funding sources, methodologies, topics of research, and pediatric subjects' age groups so as to identify gaps in research. Methodology: A search was conducted using related keywords obtained from national and international databases. The reviewed literature included pediatric health articles published through December 2019 in English and Arabic, authored by nursing researchers. The investigators assessed the retrieved studies and extracted data using a data-mining checklist. Results: The review included 265 articles authored by Jordanian nursing researchers concerning children's health, published between 1987 and 2019; 95% were published between 2009 and 2019. The most commonly applied research methodology was the descriptive non-experimental method (76%). The main generic topics were health promotion and disease prevention (23%), chronic physical conditions (19%), mental health, behavioral disorders, and forensic issues (16%). Conclusion: The review findings identified a grave shortage of evidence concerning nursing care issues for children below five years of age, especially those between ages two and five years. The research priorities identified in this review resonate with those identified in international reports. Implications: Nursing researchers are encouraged to conduct more research targeting topics of national-level importance in collaboration with clinically involved nurses and international scholars.Keywords: Jordan, scoping review, children health nursing, pediatric, adolescents
Procedia PDF Downloads 86
27860 An AI-generated Semantic Communication Platform in HCI Course
Authors: Yi Yang, Jiasong Sun
Abstract:
Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial intelligence-based interactions. For more than a decade, the course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. Advancements in AI-generated technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of our Human-Computer Interaction course implements a semantic communication platform based on AI-generated techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information while ensuring efficient end-to-end communication with minimal latency. The AI-generated semantic communication platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts. These data are transmitted through AI-generated techniques and reconstructed at the receiving end; visual signals with a high retainability rate, on the other hand, are compressed and transmitted according to their respective regions. The platform and associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies. Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts
Procedia PDF Downloads 116
27859 A Predictive Analytics Approach to Project Management: Reducing Project Failures in Web and Software Development Projects
Authors: Tazeen Fatima
Abstract:
The use of project management in web and software development projects is very significant. It has been observed that, even with the application of effective project management, projects often do not complete their lifecycle and fail. To minimize these failures, key performance indicators have been introduced in previous studies; however, there are always gaps and problems in the KPIs identified. Despite incessant efforts at technical and managerial levels, projects still fail, and there is no substantial approach to identifying and avoiding these failures at the very beginning of the project lifecycle. In this study, we aim to address these research problems by analyzing the concept of predictive analytics, a specialized technology that is very easy to apply in this era of computation. Project organizations can use data gathering, computing power, and modern tools to render efficient predictions. The research aims to identify such a predictive analytics approach. The core objective of the study was to reduce failures and introduce effective implementation of project management principles. Existing predictive analytics methodologies, tools, and solution providers were also analyzed. Relevant data was gathered from projects and analyzed via predictive techniques to make predictions well in advance, so as to render effective project management in the web and software development industry. Keywords: project management, predictive analytics, predictive analytics methodology, project failures
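As an illustration of the kind of prediction step the abstract describes, the sketch below trains a simple classifier on historical project records to score failure risk early in the lifecycle. The KPI features, labels, and model choice are entirely synthetic placeholders under stated assumptions, not the indicators or techniques actually analyzed in the study.

```python
# Illustrative failure-risk classifier on synthetic early-lifecycle KPIs (all placeholders).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 500
# Hypothetical KPIs: requirements volatility, schedule slippage, team turnover (0-1 scaled)
X = rng.uniform(0, 1, size=(n, 3))
p_fail = 1 / (1 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] - 2)))
y = rng.binomial(1, p_fail)                      # 1 = project failed, 0 = delivered

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]             # failure probability for unseen projects
print(f"Hold-out accuracy: {clf.score(X_te, y_te):.2f}; first five risk scores: {np.round(risk[:5], 2)}")
```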
Procedia PDF Downloads 347
27858 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model
Authors: Pappu Kumar, Ajai Singh, Anshuman Singh
Abstract:
Different water user sectors (rural, urban, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry, and power generation) are present in the catchment of the Subarnarekha River Basin and Ranchi city, and there is an inequity issue in access to water. Rural development, the construction of new power generation plants, population growth, unmet water demand, the consideration of environmental flows, and the revitalization of small-scale irrigation schemes are going to increase water demands in almost all of the water-stressed catchment. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable evaluation of planning and management issues associated with water resources development. The WEAP model can be used for both urban and rural areas and can address a wide range of issues including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements, and project cost-benefit analyses. It is a tool for integrated water resource management and planning, covering forecasting of water demand, supply, inflows, outflows, water use, reuse, water quality, priority areas, and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work was carried out to ascertain whether the WEAP model could be used for generating different scenarios of water requirement, which could help in future water planning. The water supplied to Ranchi city is mostly contributed by the study river, the Hatiya reservoir, and groundwater. Data were collected from various agencies, such as PHE Ranchi, the 2011 census, the Doranda reservoir, and the meteorology department. The collected and generated data were given as input to the WEAP model. The model generated discharge trends for the study river up to 2050 and, at the same time, generated scenarios calculating demand and supply for the future. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies regarding water supply and demand under changing climatic scenarios. Keywords: WEAP model, water demand analysis, Ranchi, scenarios
Procedia PDF Downloads 419
27857 Auteur 3D Filmmaking: From Hitchcock’s Protrusion Technique to Godard’s Immersion Aesthetic
Authors: Delia Enyedi
Abstract:
Throughout film history, the regular return of 3D cinema has been discussed in connection to crises caused by the advent of television or the competition of the Internet. In addition, the three waves of stereoscopic 3D (from 1952 up to 1983) and its current digital version have been blamed for adding a challenging technical distraction to the viewing experience. By discussing the films Dial M for Murder (1954) and Goodbye to Language (2014), the paper aims to analyze the response of recognized auteurs to the use of 3D techniques in filmmaking. For Alfred Hitchcock, the solution to attaining perceptual immersion paradoxically resided in restraining the signature effect of 3D, namely protrusion. In Jean-Luc Godard’s vision, 3D techniques allowed him to explore perceptual absorption by means of depth of field, for which he had long advocated as being central to cinema. Thus, both directors contribute to the foundation of an auteur aesthetic in 3D filmmaking.Keywords: Alfred Hitchcock, authorship, 3D filmmaking, Jean-Luc Godard, perceptual absorption, perceptual immersion
Procedia PDF Downloads 290
27856 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data; Impact of Image format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DensNet169, and DensNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIFTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DensNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.Keywords: deep learning, COVID-19 detection, NIFTI format, DICOM format
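The pre-processing chain described above can be sketched directly from the numbers given in the abstract: resample to 1 mm isotropic voxels, clip intensities to the (−1000, 400) HU window, resize to 128 × 128 × 60, then normalize and zero-center. The input volume, its voxel spacing, and the interpolation order below are placeholder assumptions; the U-net segmentation and CNN classification stages are omitted.

```python
# Sketch of the CT volume pre-processing described above (placeholder input and spacing).
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu: np.ndarray, spacing_mm=(0.7, 0.7, 2.5)) -> np.ndarray:
    # Resample to 1 mm x 1 mm x 1 mm voxels (zoom factor = original spacing / 1 mm)
    iso = zoom(volume_hu, zoom=spacing_mm, order=1)
    # Clip to the Hounsfield window used in the study
    iso = np.clip(iso, -1000, 400)
    # Resize to the uniform model input size 128 x 128 x 60
    target = (128, 128, 60)
    resized = zoom(iso, zoom=[t / s for t, s in zip(target, iso.shape)], order=1)
    # Min-max normalization followed by zero-centering
    norm = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)
    return norm - norm.mean()

volume = np.random.randint(-1024, 1500, size=(256, 256, 80)).astype(np.float32)
print(preprocess_ct(volume).shape)  # (128, 128, 60)
```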
Procedia PDF Downloads 88
27855 Ontology Mapping with R-GNN for IT Infrastructure: Enhancing Ontology Construction and Knowledge Graph Expansion
Authors: Andrey Khalov
Abstract:
The rapid growth of unstructured data necessitates advanced methods for transforming raw information into structured knowledge, particularly in domain-specific contexts such as IT service management and outsourcing. This paper presents a methodology for automatically constructing domain ontologies using the DOLCE framework as the base ontology. The research focuses on expanding ITIL-based ontologies by integrating concepts from ITSMO, followed by the extraction of entities and relationships from domain-specific texts through transformers and statistical methods like formal concept analysis (FCA). In particular, this work introduces an R-GNN-based approach for ontology mapping, enabling more efficient entity extraction and ontology alignment with existing knowledge bases. Additionally, the research explores transfer learning techniques using pre-trained transformer models (e.g., DeBERTa-v3-large) fine-tuned on synthetic datasets generated via large language models such as LLaMA. The resulting ontology, termed IT Ontology (ITO), is evaluated against existing methodologies, highlighting significant improvements in precision and recall. This study advances the field of ontology engineering by automating the extraction, expansion, and refinement of ontologies tailored to the IT domain, thus bridging the gap between unstructured data and actionable knowledge.Keywords: ontology mapping, knowledge graphs, R-GNN, ITIL, NER
Procedia PDF Downloads 17
27854 Removal of Pb(II) Ions from Wastewater Using Magnetic Chitosan–Ethylene Glycol Diglycidyl Ether Beads as Adsorbent
Authors: Pyar Singh Jassal, Priti Rani, Rajni Johar
Abstract:
The adsorption of Pb(II) ions from wastewater using ethylene glycol diglycidyl ether cross-linked magnetic chitosan beads (EGDE-MCB) was carried out by considering a number of parameters. The removal efficiency of the metal ion by magnetic chitosan beads (MCB) and its cross-linked derivatives depended on contact time, dose of the adsorbent, pH, temperature, etc. The concentration of Cd(II) at different time intervals was estimated by differential pulse anodic stripping voltammetry (DPASV) using a 797 Computrace voltammetric analyzer. The adsorption data could be well interpreted by the Langmuir and Freundlich adsorption models. The equilibrium parameter (RL) values support that the adsorption (0
27853 Public Accountability, a Challenge to Sustainable Development: A Case Study of Uganda
Authors: Nassali Celine Lindah
Abstract:
The study sought to find out how public accountability is a challenge to sustainable development in Uganda. It was guided by the following objectives: establishing the challenges of public accountability, the importance of accountability in Uganda, and the possible solutions to the problems identified in the study. In order to ensure proper accountability, there should be proper control of resources, specifically control of both public revenue and expenditure, and stakeholders should also be involved in the accountability process. Accountability can reduce corruption and other abuses, assure compliance with standards and procedures, and improve performance and organizational learning. The study involved qualitative and quantitative data collection techniques. A sample of 20 respondents from various districts and towns was used, comprising both technical and non-technical staff members. The study utilized secondary and primary data, obtained through interviews and observations. The study concluded that the major challenges of public accountability in Uganda include poor leadership, poor resource management, unethical behavior by government officials, and political involvement, among others. The study also recommended that policymakers design relevant guidelines and policies to help promote public accountability in Uganda, such as prosecution and conviction, and strengthening public expenditure management, benchmarking, and performance measurement, among others. Keywords: accountability, sustainability, government activities, government sector
Procedia PDF Downloads 136
27852 A Qualitative Research of Online Fraud Decision-Making Process
Authors: Semire Yekta
Abstract:
Many online retailers set up manual review teams to overcome the limitations of automated online fraud detection systems. This study critically examines the strategies they adopt in their decision-making process to set apart fraudulent individuals from non-fraudulent online shoppers. The study uses a mixed-methods research approach: 32 in-depth interviews were conducted alongside participant observation and auto-ethnography. The study found that all steps of the decision-making process are significantly affected by a level of subjectivity, personal understandings of online fraud, preferences, and judgments, and not necessarily by objectively identifiable facts. Rather than clearly knowing who the fraudulent individuals are, the team members have to predict whether they think the customer might be a fraudster. Common strategies include relying on the classification and fraud scores in the automated fraud detection systems, weighing up arguments for and against the customer before making a decision, using cancellation to test customers’ reactions, and making use of personal experience and “the sixth sense”. The interaction in the team also plays a significant role, given that some decisions turn into a group discussion. While customer data represent the basis for decision-making, fraud management teams frequently use Google Search and Google Maps to find additional information about the customer and verify whether the customer is the person they claim to be. While this raises ethical concerns, using Google Street View on the customer's address and area also puts customers living in less privileged housing and areas at a higher risk of being classified as fraudsters. Phone validation is used as a final measure to decide for or against the customer when the previous strategies and Google Search do not suffice. However, phone validation is likewise characterized by individuals’ subjectivity and by personal views and judgments of the customer’s reaction on the phone, which results in a final classification as genuine or fraudulent. Keywords: online fraud, data mining, manual review, social construction
Procedia PDF Downloads 343
27851 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques
Authors: Jonathan Iworiso
Abstract:
Forecasting the equity premium out-of-sample is a major concern to researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars over the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The models provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk. Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains
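The recursive expanding-window protocol can be illustrated with a short sketch: at each month, the model is re-estimated on all data up to that month and used to forecast the next month, and the forecasts are scored against the historical-average benchmark via an out-of-sample R². The predictor matrix below is a synthetic placeholder, and only three of the study's regularized models are shown; the study's actual predictors and tuning are not given in the abstract.

```python
# Expanding-window out-of-sample forecasting of a synthetic equity premium series,
# benchmarked against the historical average (all data below are placeholders).
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(1)
T, k = 360, 12                       # 30 years of monthly data, 12 hypothetical predictors
X = rng.normal(size=(T, k))
y = 0.3 * X[:, 0] + rng.normal(scale=1.0, size=T)   # monthly equity premium (synthetic)

start = 120                          # initial estimation window of 10 years
models = {"Ridge": Ridge(alpha=1.0), "LASSO": Lasso(alpha=0.05), "ElasticNet": ElasticNet(alpha=0.05)}
for name, model in models.items():
    errs_model, errs_bench = [], []
    for t in range(start, T - 1):
        model.fit(X[: t + 1], y[: t + 1])                  # expanding window: all data up to month t
        errs_model.append(y[t + 1] - model.predict(X[t + 1 : t + 2])[0])
        errs_bench.append(y[t + 1] - y[: t + 1].mean())    # historical-average benchmark
    r2_oos = 1 - np.sum(np.square(errs_model)) / np.sum(np.square(errs_bench))
    print(f"{name}: out-of-sample R^2 vs historical average = {r2_oos:.3f}")
```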
Procedia PDF Downloads 107
27850 Static and Dynamic Tailings Dam Monitoring with Accelerometers
Authors: Cristiana Ortigão, Antonio Couto, Thiago Gabriel
Abstract:
In the wake of the failure of Samarco's Fundão dam in 2015, followed by Vale’s Brumadinho disaster in 2019, the Brazilian National Mining Agency started a comprehensive dam safety programme to rank dam safety risks and establish monitoring and analysis procedures. This paper focuses on the use of accelerometers for static and dynamic applications. Static applications may employ tiltmeters, an example of which is shown later in this paper. Dynamic monitoring of a structure with accelerometers yields its dynamic signature; this technique has also been successfully used in Brazil, and the paper gives an example from a tailings dam. Keywords: instrumentation, dynamic, monitoring, tailings, dams, tiltmeters, automation
Procedia PDF Downloads 147
27849 Attaining Financial Efficiency through Funds Utilization
Authors: Muhammad Shujaat Saleem, Imamuddin
Abstract:
In reply to the argument made by the non-believers of Makkah that “sale is similar to riba”, Almighty Allah ordered that “sale is permissible while riba is impermissible”. The main intent of the study was to clarify the fallacy prevailing among Muslims that, in practical terms, the Murabaha product offered by Islamic banks is similar to a conventional interest-based business loan. The specific objective was to ascertain the degree of financial efficiency, on the basis of fund/loan utilization for the intended purpose, of Murabaha financing vis-à-vis conventional interest-based business loans. The study employed a survey strategy to collect primary data through structured, close-ended questionnaires from a sample of 98 Murabaha officers and 178 loan officers drawn from 5 Islamic and 10 conventional banks, respectively. Quantitative and qualitative techniques were used to analyze the data, which are tabulated in frequency tables. The study found that the financial efficiency of Murabaha financing exceeds that of conventional interest-based business loans by 28 percentage points, as Murabaha funds of Islamic banks are utilized for their intended purpose to the extent of 97% on average, compared to 69% for business loans offered by conventional banks. Keywords: financial efficiency, murabaha funds, loan amount, intended purpose
Procedia PDF Downloads 338
27848 Corrosion Interaction Between Steel and Acid Mine Drainage: Use of AI Based on Fuzzy Logic
Authors: Maria Luisa de la Torre, Javier Aroba, Jose Miguel Davila, Aguasanta M. Sarmiento
Abstract:
Steel is one of the most widely used materials in polymetallic sulfide mining installations. One of the main problems suffered by these facilities is the economic loss due to the corrosion of this material, which is accelerated and aggravated by contact with the acid waters generated in these mines when sulfides come into contact with oxygen and water. This generation of acidic water, in turn, is accelerated by the presence of acidophilic bacteria. In order to gain a more detailed understanding of this corrosion process and the interaction between steel and acidic water, a laboratory experiment was carried out in which carbon steel plates were introduced into four different solutions for 27 days: distilled water (BK), intended to simulate the effect produced by rain on this material; an acid solution from a mine with a high Fe2+/Fe3+ content (PO); another acid mine water solution with a high Fe3+/Fe2+ content (PH); and, finally, one that reproduced the acid mine water with a high Fe2+/Fe3+ content but in which there were no bacteria (ST). Every 24 hours, physicochemical parameters were measured, and water samples were taken to analyze the dissolved elements. The results of these measurements were processed using an explainable AI model based on fuzzy logic. It could be seen that, in all cases, there was an increase in pH, as well as in the concentrations of Fe and, in particular, Fe(II), as a consequence of the oxidation of the steel plates. Proportionally, the increase in Fe concentration was higher in PO and ST than in PH because Fe precipitates were produced in the latter. The rise of Fe(II) was proportionally much higher in PH, especially in the first hours of exposure, because it started from a lower initial concentration of this ion. Although to a lesser extent than in PH, the increase in Fe(II) also occurred faster in PO than in ST, a consequence of the action of the catalytic bacteria. On the other hand, Cu concentrations decreased throughout the experiment (with the exception of distilled water, which initially had no Cu), as a result of an electrochemical process that generates precipitation of Cu together with Fe hydroxides. This decrease is lower in PH because the high total acidity keeps Cu in solution for a longer time. With the application of an artificial intelligence tool, it has been possible to evaluate the effects of steel corrosion in mining environments, corroborating and extending what was obtained by means of classical statistics. Keywords: acid mine drainage, artificial intelligence, carbon steel, corrosion, fuzzy logic
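To illustrate the kind of explainable fuzzy-logic reasoning used to process such measurements, the sketch below evaluates a single hypothetical rule with triangular membership functions. The variables, membership ranges, sample values, and the rule itself are assumptions for illustration only; they are not taken from the study's model.

```python
# Evaluation of one hypothetical fuzzy rule:
# "IF total acidity is high AND Fe(III) is high THEN corrosion rate is high"
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

acidity = 850.0          # total acidity of a sample (mg CaCO3/L), placeholder value
fe3 = 1200.0             # Fe(III) concentration (mg/L), placeholder value

mu_acidity_high = trimf(acidity, 500, 1000, 1500)
mu_fe3_high = trimf(fe3, 800, 1500, 2200)

# Mamdani-style AND (minimum) gives the degree to which the rule fires
rule_activation = min(mu_acidity_high, mu_fe3_high)
print(f"Degree of 'high corrosion rate' predicted by the rule: {rule_activation:.2f}")
```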
Procedia PDF Downloads 8
27847 Determination of Weathering at Kilistra Ancient City by Using Non-Destructive Techniques, Central Anatolia, Turkey
Authors: İsmail İnce, Osman Günaydin, Fatma Özer
Abstract:
Stones used in the construction of historical structures are exposed to various direct or indirect atmospheric effects depending on climatic conditions. Building stones deteriorate partially or fully as a result of this exposure. The historic structures are important symbols of any cultural heritage. Therefore, it is important to protect and restore these historical structures. The aim of this study is to determine the weathering conditions at the Kilistra ancient city. It is located in the southwest of the Konya city, Central Anatolia, and was built by carving into pyroclastic rocks during the Byzantine Era. For this purpose, the petrographic and mechanical properties of the pyroclastic rocks were determined. In the assessment of weathering of structures in the ancient city, in-situ non-destructive testing (i.e., Schmidt hardness rebound value, relative humidity measurement) methods were applied.Keywords: cultural heritage, Kilistra ancient city, non-destructive techniques, weathering
Procedia PDF Downloads 360
27846 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by an industrial on-stream X-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed under the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in the process control room. Thus, industrial X-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. The additional multivariate process control and monitoring procedures are recommended to be applied separately for the major components and for the impurities. Principal component analysis may be utilized not only to control major elements’ content in process streams, but also for continuous monitoring of plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation capabilities. Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
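A minimal sketch of the monitoring scheme described above is given below: the first 80% of the (scaled) concentration data defines normal operating conditions for a two-component PCA model, and new samples are flagged when their squared-score statistic or residual statistic exceeds a control limit. The random placeholder data and the empirical 99th-percentile limits are simplifying assumptions; the study's actual limit construction is not specified in the abstract.

```python
# Two-component PCA monitoring with squared-score (T^2-type) and residual (Q/SPE) limits.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 6))          # placeholder for on-stream XRF metal concentrations
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

scaler = StandardScaler().fit(train)       # mean-centering and normalization
pca = PCA(n_components=2).fit(scaler.transform(train))

def statistics(X):
    Z = scaler.transform(X)
    scores = pca.transform(Z)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)   # squared-score statistic
    residual = Z - pca.inverse_transform(scores)
    q = np.sum(residual ** 2, axis=1)                            # residual (SPE) statistic
    return t2, q

t2_lim, q_lim = (np.percentile(s, 99) for s in statistics(train))
t2, q = statistics(test)
alarms = (t2 > t2_lim) | (q > q_lim)
print(f"Flagged {alarms.sum()} of {len(test)} test samples as abnormal")
```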
Procedia PDF Downloads 310
27845 Modelling for Roof Failure Analysis in an Underground Cave
Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández
Abstract:
Roof collapse is still one of the most frequent problems in mines in most countries. There are many reasons that may cause a roof to collapse, namely the stress activities in the mining process, lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work is the result of the analysis of an accident that occurred in the “Mary” coal exploitation located in northern Spain. In this accident, the roof of a crossroad of galleries excavated to exploit the “Morena” layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, its conclusions, and its recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and of the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite-difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated from a slide in the layer wall, due to the large roof span present at the accident site, and was probably triggered by an insufficient protection pillar. The results made it possible to establish corrective measures to avoid future risks, for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage the use of rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system. Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations, which are a new risk factor to analyse in the near future. Keywords: forensic analysis, hypothesis modelling, roof failure, seismic monitoring
Procedia PDF Downloads 115
27844 GIS Pavement Maintenance Selection Strategy
Authors: Mekdelawit Teferi Alamirew
Abstract:
As a practical tool, the geographical information system (GIS) was used for data collection, integration, management, analysis, and output presentation in pavement management systems. There are many GIS techniques that improve maintenance activities, such as dynamic segmentation and weighted overlay analysis, which incorporate a multi-criteria decision-making process. The results indicated that the developed maintenance priority index (MPI) model works sufficiently well and yields adequate output for accurate decisions, since multiple criteria are considered when prioritizing pavement sections for maintenance and GIS maps can express the position, extent, and severity of pavement distress features more effectively than manual approaches. The paper also offers digitized distress maps that can help agencies in their decision-making processes. Keywords: pavement, flexible, maintenance, index
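The weighted-overlay step can be sketched as a simple raster calculation: each criterion layer is rescaled, multiplied by a weight, and summed into a priority index that is then classed into priority levels. The criteria, weights, and class breaks below are hypothetical placeholders, not the MPI model actually developed in the study.

```python
# Hypothetical weighted-overlay priority index over raster criterion layers.
import numpy as np

rng = np.random.default_rng(3)
shape = (200, 200)                                    # raster grid covering the pavement network
criteria = {
    "distress_severity": rng.uniform(0, 1, shape),    # each layer rescaled to the 0-1 range
    "traffic_volume": rng.uniform(0, 1, shape),
    "pavement_age": rng.uniform(0, 1, shape),
}
weights = {"distress_severity": 0.5, "traffic_volume": 0.3, "pavement_age": 0.2}

mpi = sum(weights[name] * layer for name, layer in criteria.items())
priority = np.digitize(mpi, bins=[0.33, 0.66])        # 0 = low, 1 = medium, 2 = high priority
print(np.bincount(priority.ravel()))                  # cell count per priority class
```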
Procedia PDF Downloads 62
27843 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia
Authors: Fikiru Temesgen Gelata
Abstract:
Food security, adaptation, and climate change mitigation are all problems that can be addressed simultaneously with Climate-Smart Agriculture (CSA). This study examines the determinants of CSA practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. For this study, three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression model were used to analyze trends of decrease or increase within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, climate information, and contact with extension agents were strongly associated with the adoption of CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, farm size, proximity to irrigated farmlands, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The multinomial endogenous switching regression model (MESR) revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that counties and governments prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques. Keywords: climate-smart practices, food security, income, MESR, Ethiopia
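The trend-analysis step relies on the Mann-Kendall test; a minimal sketch of its S statistic, normal approximation, and two-sided p-value is shown below on a synthetic monthly series. Omitting the tie correction and the particular series are simplifying assumptions; the switching-regression stage is not sketched.

```python
# Mann-Kendall monotonic trend test on a synthetic monthly series (tie correction omitted).
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S, ignoring ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                    # two-sided p-value
    return z, p

series = np.cumsum(np.random.default_rng(7).normal(0.05, 1.0, size=120))  # 120 months, slight drift
z, p = mann_kendall(series)
print(f"Z = {z:.2f}, p = {p:.3f}  ->  {'trend' if p < 0.05 else 'no trend'}")
```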
Procedia PDF Downloads 38
27842 Sensor Monitoring of the Concentrations of Different Gases Present in Synthesis of Ammonia Based on Multi-Scale Entropy and Multivariate Statistics
Authors: S. Aouabdi, M. Taibi
Abstract:
The supervision of chemical processes is the subject of increased development because of growing demands on reliability and safety. An important aspect of the safe operation of a chemical process is the detection of process faults or other special events, and the location and removal of the factors causing such events, earlier than is possible with conventional limit and trend checks. With the aid of process models and estimation and decision methods, it is possible to monitor hundreds of variables in a single operating unit, and these variables may be recorded hundreds or thousands of times per day. In the absence of an appropriate processing method, only limited information can be extracted from these data. Hence, a tool is required that can project the high-dimensional process space into a low-dimensional space amenable to direct visualization, and that can also identify key variables and important features of the data. Our contribution is the development of a new monitoring method based on multi-scale entropy (MSE) to characterize the behaviour of the concentrations of the different gases present in ammonia synthesis, together with a PCA-based soft sensor applied to estimate these variables. Keywords: ammonia synthesis, concentrations of different gases, soft sensor, multi-scale entropy, multivariate statistics
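The multi-scale entropy calculation follows the usual recipe of coarse-graining the series at successive scales and computing sample entropy at each scale; a compact sketch is given below. The embedding dimension m = 2, the tolerance r = 0.15 × standard deviation, and the synthetic series are common defaults used here as assumptions, not parameters taken from the study.

```python
# Multi-scale entropy: coarse-grain the series per scale, then compute sample entropy.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy with embedding dimension m and tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        np.fill_diagonal(dist, np.inf)          # exclude self-matches
        return np.sum(dist <= r)
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2):
    """Coarse-grain the signal at each scale tau, then compute sample entropy per scale."""
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[: n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m=m))
    return mse

signal = np.random.default_rng(4).normal(size=1000)    # placeholder for a gas-concentration series
print(multiscale_entropy(signal, max_scale=5))
```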
Procedia PDF Downloads 336
27841 Neutral Heavy Scalar Searches via Standard Model Gauge Boson Decays at the Large Hadron Electron Collider with Multivariate Techniques
Authors: Luigi Delle Rose, Oliver Fischer, Ahmed Hammad
Abstract:
In this article, we study the prospects of the proposed Large Hadron electron Collider (LHeC) in the search for heavy neutral scalar particles. We consider a minimal model with one additional complex scalar singlet that interacts with the Standard Model (SM) via mixing with the Higgs doublet, giving rise to an SM-like Higgs boson and a heavy scalar particle. Both scalar particles are produced via vector boson fusion and can be tested via their decays into pairs of SM particles, analogously to the SM Higgs boson. Using multivariate techniques, we show that the LHeC is sensitive to heavy scalars with masses between 200 and 800 GeV down to scalar mixing of order 0.01.Keywords: beyond the standard model, large hadron electron collider, multivariate analysis, scalar singlet
Procedia PDF Downloads 137
27840 Emerging Technology for Business Intelligence Applications
Authors: Hsien-Tsen Wang
Abstract:
Business Intelligence (BI) has long helped organizations make informed decisions based on data-driven insights and gain competitive advantages in the marketplace. In the past two decades, businesses witnessed not only the dramatically increasing volume and heterogeneity of business data but also the emergence of new technologies, such as Artificial Intelligence (AI), Semantic Web (SW), Cloud Computing, and Big Data. It is plausible that the convergence of these technologies would bring more value out of business data by establishing linked data frameworks and connecting in ways that enable advanced analytics and improved data utilization. In this paper, we first review and summarize current BI applications and methodology. Emerging technologies that can be integrated into BI applications are then discussed. Finally, we conclude with a proposed synergy framework that aims at achieving a more flexible, scalable, and intelligent BI solution.Keywords: business intelligence, artificial intelligence, semantic web, big data, cloud computing
Procedia PDF Downloads 95
27839 A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius
Authors: Mina Adel Shokry Fahim, Jūratė Sužiedelytė Visockienė
Abstract:
Air pollution (AP) has gained more prominence than ever before, and awareness of it has increased, along with a duty on those informed enough to disseminate that knowledge to others. This realisation often comes after an understanding of how poor air quality indices (AQI) damage human health. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it specifically examines concentrations of particulate matter 10 micrometres or less in diameter (PM10). Utilizing Gaussian Process Regression (GPR), Regression Tree Ensemble, and Regression Tree methodologies, predictive forecasting models are validated and tested using hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing. Keywords: air pollution, anthropogenic and natural sources, machine learning, Gaussian process regression, tree ensemble, forecasting models, particulate matter
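A minimal sketch of a GPR baseline with RMSE scoring is shown below. The lagged-PM10 and meteorological features, the kernel choice, and the synthetic data are assumptions, since the abstract does not list the exact inputs or model configuration.

```python
# Gaussian Process Regression baseline for hourly PM10 with RMSE scoring (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
n = 800
X = rng.normal(size=(n, 4))                          # e.g. lag-1 PM10, wind speed, temperature, humidity
y = 20 + 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=3, size=n)   # synthetic PM10 (ug/m3)

split = int(0.8 * n)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:split], y[:split])
pred = gpr.predict(X[split:])
rmse = np.sqrt(mean_squared_error(y[split:], pred))
print(f"Test RMSE: {rmse:.2f} ug/m3")
```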
Procedia PDF Downloads 53
27838 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach
Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre
Abstract:
The Energy Transition Act defined by the French State has precise implications for renewable energies, in particular for their remuneration mechanism. Until now, a purchase obligation contract permitted the sale of wind-generated electricity at a fixed rate. In the future, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. This sale on the market requires announcing in advance (about 48 hours before) the production that will be delivered to the network, and therefore being able to predict this production in the short term. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables, and the variable to be predicted is the production of a wind farm. We follow two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates wind speed from weather data and converts it into wind power using power curves. We used machine learning techniques to predict this production; the models tested are random forests, CART + Bagging, CART + Boosting, and SVM (Support Vector Machine). The application is made on a 22 MW wind farm (11 wind turbines) of the Compagnie du Vent (now Engie Green France). Our results are very conclusive compared to the literature. Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast
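The contrast between the two approaches can be sketched as follows: a direct model maps weather features straight to farm power, while an indirect model predicts wind speed and converts it to power through a power curve. The features, the simplified power curve, and the 22 MW scaling below are illustrative assumptions; they are not the farm's actual curve or the study's NWP inputs.

```python
# Direct (weather -> power) vs indirect (weather -> wind speed -> power curve) forecasting.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 2000
nwp = rng.uniform(0, 25, size=(n, 3))                 # e.g. forecast wind speed, direction, temperature
true_speed = nwp[:, 0] + rng.normal(scale=1.0, size=n)

# Simplified farm power curve (MW) for a hypothetical 22 MW farm
curve_speed = np.array([0, 3, 6, 9, 12, 25])
curve_power = np.array([0, 0, 5, 15, 22, 22])
power = np.interp(true_speed, curve_speed, curve_power)

split = int(0.8 * n)
# Direct approach: weather features -> power
direct = RandomForestRegressor(n_estimators=200, random_state=0).fit(nwp[:split], power[:split])
pred_direct = direct.predict(nwp[split:])
# Indirect approach: weather features -> wind speed -> power curve
speed_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(nwp[:split], true_speed[:split])
pred_indirect = np.interp(speed_model.predict(nwp[split:]), curve_speed, curve_power)

for name, pred in [("direct", pred_direct), ("indirect", pred_indirect)]:
    rmse = np.sqrt(np.mean((pred - power[split:]) ** 2))
    print(f"{name}: RMSE = {rmse:.2f} MW")
```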
Procedia PDF Downloads 217
27837 Modeling Route Selection Using Real-Time Information and GPS Data
Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento
Abstract:
Understanding the behavior of individuals and the different human factors that influence choice when faced with a complex system such as transportation is one of the most complicated aspects of route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual service transport applications, among others, generating interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid discrete choice model that integrates route choice models and latent variables, based on the observation of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by the individual service transport applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities. Keywords: behavior choice model, human factors, hybrid model, real time data
Procedia PDF Downloads 152
27836 Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the field of sensors, electronics and computing results in more and more often applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited with mechanical impacts, the control system has to take into account the significant limitations of actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezo-electric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but a realization of the system’s adaptation is completely different. The reason for this is technical limitations corresponding to specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber corresponds to much higher pressures and small mass flow rates, which can be achieved with minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of sq. cm. Because of these facts, both shock-absorbing systems are controlled based on completely different approaches. Pneumatic shock-absorber takes advantage of real-time control with valve opening recalculated at least every millisecond. In contrast, safety air cushion is controlled using the semi-passive technique, where adaptation is provided using prediction of the entire impact mitigation process. Similarities of both approaches, including applied models, algorithms and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
Procedia PDF Downloads 91
27835 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover
Authors: E. O. Okan, A. Kepic, P. Williams
Abstract:
Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia especially since the discovery of the supergiant Olympic Dam deposits in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive actions and are associated with felsic igneous rocks in most cases, commonly potassic igneous magmatism, with the deposits ranging from ~2.2 –1.5 Ga in age. For the past two decades, geological, geochemical and potential methods have been used to identify the structures hosting these deposits follow up by drilling. Though these methods have largely been successful for shallow targets, at deeper depth due to low resolution they are limited to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore-bodies under regolith cover continues due to depletion of the near surface deposits, there is a compelling need to develop new exploration technology to explore these deep seated ore-bodies within 1-4km which is the current mining depth range. Seismic reflection method represents this new technology as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers a greater ‘3D mapability’ of units within the stratigraphic boundary. Despite these superior attributes, no arguments for crustal scale seismic surveys have been proposed because there has not been a compelling argument of economic benefit to proceed with such work. For the seismic reflection method to be used at these scales (100’s to 1000’s of square km covered) the technical risks or the survey costs have to be reduced. In addition, as most IOCG deposits have large footprint due to its association with intrusions and large fault zones; we hypothesized that these deposits can be found by mainly looking for the seismic signatures of intrusions along prospective structures. In this study, we present two of such cases: - Olympic Dam and Vulcan iron-oxide copper-gold (IOCG) deposits all located in the Gawler craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveys using 20m geophones and 40m shot spacing as an exploration tool for locating IOCG deposit is possible even when hosted in very complex structures. The migrated sections were not only able to identify and trace various layers plus the complex structures but also show reflections around the edges of intrusive packages. The presences of such intrusions were clearly detected from 100m to 1000m depth range without losing its resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool to find IOCG deposits. We therefore propose that 2D seismic survey is viable for IOCG exploration as it can detect mineralised intrusive structures along known favourable corridors. This would help in reducing the exploration risk associated with locating undiscovered resources as well as conducting a life-of-mine study which will enable better development decisions at the very beginning.Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys
Procedia PDF Downloads 325