Search results for: machine learning tools and techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16673

4043 Contentious Issues Concerning the Methodology of Using the Lexical Approach in Teaching ESP

Authors: Elena Krutskikh, Elena Khvatova

Abstract:

In tertiary settings, expanding students’ vocabulary and teaching discursive competence are seen as among the chief goals of a professional development course. However, such a focus is often detrimental to students’ cognitive competences, such as analysis, synthesis, and creative processing of information, and deprives students of motivation for self-improvement and self-development of language skills. The presentation argues that in an ESP course special attention should be paid to reading/listening, which can promote understanding and using the language as a tool for solving significant real-world problems, including professional ones. It is claimed that in the learning process it is necessary to maintain a balance between the content and the linguistic aspects of the educational process, as language acquisition is inextricably linked with mental activity, and the need to express oneself is a primary stimulus for using a language. A study conducted among undergraduates indicates that they place a premium on quality materials that motivate them and stimulate their further linguistic and professional development. Thus, more demands are placed on study materials, which should contain new information for students and serve not only as a source of new vocabulary but also prepare them for real tasks related to professional activities.

Keywords: critical reading, English for professional development, English for specific purposes, higher-order thinking skills, lexical approach, vocabulary acquisition

Procedia PDF Downloads 160
4042 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics such as VMAF and SSIM are said to be among the best performing, but these metrics are predominantly evaluated on professionally produced video content captured with professional tools and lighting conditions. No study has specifically examined the performance of the metrics on content captured by users on commonly available devices. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they include content taken in poor lighting conditions with lower-end devices. Such devices are subject to many distortions from various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken with the three most commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. To the videos taken on each of these devices, the six most common types of distortions that users face were applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on the distortions were recorded. It was found that blur is the artifact on which most of the metrics did not perform well. To understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the HEVC codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as resolution increases, the metrics tend to become more accurate; the best performing metric is VQM, with very few inconsistencies and inaccurate results when H.264 compression is applied, whereas when HEVC compression is applied, SSIM and VMAF perform significantly better.
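As an illustration of how a full-reference metric of the kind evaluated here can be computed, the following minimal sketch compares one distorted frame against its reference using SSIM (with PSNR as a simple baseline) from scikit-image. Frame extraction, the custom dataset, and the other metrics (VMAF, VQM) are outside this sketch, and the file names are hypothetical.

```python
# Minimal full-reference quality check on a single frame pair (illustrative sketch only).
# File names are hypothetical; frames must have identical dimensions.
from skimage.io import imread
from skimage.metrics import structural_similarity as ssim
from skimage.metrics import peak_signal_noise_ratio as psnr

ref = imread("reference_frame.png", as_gray=True)    # reference frame, grayscale in [0, 1]
dist = imread("distorted_frame.png", as_gray=True)   # e.g. a blurred or compressed frame

# SSIM compares local luminance, contrast and structure; PSNR is a simple fidelity baseline.
print("SSIM:", ssim(ref, dist, data_range=1.0))
print("PSNR:", psnr(ref, dist, data_range=1.0), "dB")
```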

Keywords: distortion, metrics, performance, resolution, video quality assessment

Procedia PDF Downloads 198
4041 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures can completely change how a specific field should be produced and therefore our thinking, efforts, and planning. From the structural point of view, all reservoirs are fractured to some extent, and the North Gialo field is thought to be a naturally fractured reservoir. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenge in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, available well testing, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical—to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, the production of this field follows a two-phase plan: self-depletion of oil followed by a gas injection period for pressure maintenance and for increasing the ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 89
4040 Numerical Simulation of Precast Concrete Panels for Airfield Pavement

Authors: Josef Novák, Alena Kohoutková, Vladimír Křístek, Jan Vodička

Abstract:

Numerical analysis software is among the main tools for simulating the real behavior of various concrete structures and elements. In comparison with experimental tests, it offers an affordable way to study the mechanical behavior of structures under various conditions. The contribution deals with a precast element of an innovative airfield pavement system which is being developed within an ongoing scientific project. The proposed system consists of a two-layer surface course of precast concrete panels positioned on a two-layer base of fiber-reinforced concrete with recycled aggregate. As the panels are supposed to be installed directly on the hardened base course, imperfections at the interface between the base course and the surface course are expected. Considering such circumstances, three different behavior patterns could be established and considered when designing the precast element. The enormous cost of full-scale experiments makes it necessary to simulate the behavior of the element in numerical analysis software using the finite element method. The simulation was conducted on a nonlinear model in order to obtain results that could substitute for results from experiments. First, several loading schemes were considered with the aim of identifying the critical one, which was then used for the simulation. The main objective of the simulation was to optimize the reinforcement of the element subjected to quasi-static loading from airplanes. Several parameters were considered when running the simulation, namely geometrical imperfections, manufacturing imperfections, the stress state in the reinforcement, the stress state in the concrete, and crack width. The numerical simulation revealed that the precast element has to be heavily reinforced to fulfill all the assumed demands. The main reason for the high amount of reinforcement is the size of the imperfections that could occur in the real structure. Improving manufacturing quality, installing the precast panels on a fresh base course, or using a bedding layer underneath the surface course are the main ways to reduce the size of the imperfections and consequently lower the consumption of reinforcement.

Keywords: nonlinear analysis, numerical simulation, precast concrete, pavement

Procedia PDF Downloads 250
4039 Conceptual Design of a Residential House Based on IDEA 4E - Discussion of the Process of Interdisciplinary Pre-Project Research and Optimal Design Solutions Created as Part of Project-Based Learning

Authors: Dorota Winnicka-Jasłowska, Małgorzata Jastrzębska, Jan Kaczmarczyk, Beata Łaźniewska-Piekarczyk, Piotr Skóra, Beata Kobiałko, Agata Kołodziej, Błażej Mól, Ewelina Lasyk, Karolina Brzęczek, Michał Król

Abstract:

Creating economical, comfortable, and healthy buildings which respect the environment is a necessity resulting from legal regulations, but it is also a response to the expectations of a modern investor. Developing the concept of a residential house based on the 4E and 2+2+(1) IDEAs is a complex process that requires specialist knowledge of many trades and the adoption of comprehensive solutions. IDEA 4E assumes the use of energy-saving, ecological, ergonomic, and economic solutions. In addition, IDEA 2+2+(1), which assumes appropriate surface and functional-spatial solutions for a family at different stages of a building's life, i.e. 2, 4, or 5 members, enforces a certain flexibility of the designed building, which may change with the number and age of its users. The building should therefore be easy to rearrange or expand. The task defined in this way was carried out by an interdisciplinary team of students of the Silesian University of Technology as part of PBL. The team consisted of 6 undergraduate and graduate students representing the following faculties: 3 students of architecture, 2 civil engineering students, and 1 student of environmental engineering. The work of the team was supported by 3 academic teachers representing the above-mentioned faculties and by additional experts. The project was completed in one semester. The article presents the successive stages of the project. First, pre-design studies were carried out, which made it possible to define the guidelines for the project. For this purpose, the "Model house" questionnaire was developed. The questions concerned the utility needs of a potential family that would live in a model house, specifying the types of rooms, their size, and their equipment. A total of 114 people participated in the study. The answers to the questions in the survey helped to build the functional programme of the designed house. Other research consisted of a search for optimal technological and construction solutions and the most appropriate building materials, based mainly on recycling. Appropriate HVAC systems responsible for the building's microclimate were also selected, i.e. low-temperature heating and mechanical ventilation, and the use of energy from renewable sources was planned so as to obtain a nearly zero-energy building. Additionally, rainwater retention and its local use were planned. The result of the project was a design of a model residential building that meets the presented assumptions. A 3D VR spatial model of the designed building and its surroundings was also made. The final result was the organization of an exhibition for students and the academic community. Participation in the interdisciplinary project allowed the project team members to better understand the consequences of the adopted solutions for achieving the assumed effect and the need to work out compromises. The implementation of the project made all its participants aware of the importance of cooperation as well as systematic and clear communication. The need to define milestones and enforce them consistently is an important element guaranteeing the achievement of the intended end result. The implementation of PBL enables students to acquire competences important in their future professional work.

Keywords: architecture and urban planning, civil engineering, environmental engineering, project-based learning, sustainable building

Procedia PDF Downloads 106
4038 Enhancing Knowledge Graph Convolutional Networks with Structural Adaptive Receptive Fields for Improved Node Representation and Information Aggregation

Authors: Zheng Zhihao

Abstract:

Recently, the Knowledge Graph Convolutional Network (KGCN) has demonstrated powerful capabilities in knowledge representation and reasoning tasks. However, traditional KGCNs often use a fixed weighting mechanism when aggregating information and fail to make full use of rich structural information, which limits the expressive ability of node representations and easily causes over-smoothing problems. To address these challenges, the paper proposes a distinct graph neural network model called KGCN-STAR (Knowledge Graph Convolutional Network with Structural Adaptive Receptive Fields). This model dynamically adjusts each node's perceptual range by introducing a structural adaptive receptive field, and a subgraph aggregator is designed to capture local structural information more effectively. Experimental results show that KGCN-STAR achieves significant performance improvements on multiple knowledge graph data sets, and in particular shows considerable capability in the task of representation learning for complex structures.
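The abstract does not give the exact form of the adaptive receptive field, so the following toy sketch only illustrates the general idea it contrasts with fixed weighting: aggregation weights that depend on each node's local structure (here, a simple degree-based normalisation) rather than a uniform scheme. It is not the authors' KGCN-STAR model.

```python
# Toy illustration (not the authors' KGCN-STAR): aggregation weights adapted to
# local structure instead of a fixed uniform weighting. Neighbours of high-degree
# hub nodes are down-weighted so that hubs dominate the aggregation less.
import numpy as np

adj = np.array([[0, 1, 1, 0],    # small undirected graph with 4 nodes
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
feats = np.random.rand(4, 8)     # random 8-dimensional node features

deg = adj.sum(axis=1)
# Structure-aware weights: w_ij = a_ij / sqrt(d_i * d_j) (symmetric normalisation),
# used here as a simple stand-in for a structure-adaptive receptive field.
weights = adj / np.sqrt(np.outer(deg, deg))

aggregated = weights @ feats                 # neighbourhood aggregation
updated = np.tanh(aggregated + feats)        # combine with the node's own features
print(updated.shape)                         # (4, 8)
```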

Keywords: knowledge graph (KG), graph neural networks (GNN), structural adaptive receptive fields, information aggregation

Procedia PDF Downloads 14
4037 Innovative Schools as Birthplaces for Promoting Educational Innovations: A Case Study of Two Hungarian Schools

Authors: Khin Khin Thant Sin

Abstract:

This case study investigates successful, ongoing bottom-up innovations for school improvement initiatives in Hungary. Two innovative schools were selected due to their outstanding achievements during the past ten years in Hungary. For one school, data from the personal experiences of the school principal who initiated the bottom-up innovation are included. For the second school, three interviews were carried out with two schoolteachers and one secondary school student. In addition, desk research, including the principal’s published articles, the schoolteachers’ master’s thesis, the school websites, and other published articles, was analysed to explore the schools’ innovative processes. This study investigates how bottom-up innovation led to major achievements in student learning, teacher professional development, networking and collaboration with other schools, and the establishment of successful partnerships with universities. The highlight of this study is how innovative schools can be major sources for promoting educational innovations as well as for improving teacher education, especially initial teacher education and continuous professional development.

Keywords: school innovation, teacher education, Hungary, educational innovation, school improvement

Procedia PDF Downloads 106
4036 Character and Evolution of Electronic Waste: A Technologically Developing Country's Experience

Authors: Karen C. Olufokunbi, Odetunji A. Odejobi

Abstract:

The discourse of this paper is the examination of the generation, accumulation, and growth of e-waste in a developing country. Images and other data about computer e-waste were collected using a digital camera, 290 copies of a questionnaire, and three structured interviews, using the Obafemi Awolowo University (OAU), Ile-Ife, Nigeria, environment as a case study. The numerical data were analysed using the R data analysis and processing tool. Automata-based techniques and a Petri net modeling tool were used to design and simulate a computational model for the recovery of saleable materials from e-waste. The R analysis showed that, at a 95 percent confidence level, the computer equipment that will be disposed of by 2020 will be 417 units. Compared to the 800 units in circulation in 2014, 50 percent of personal computer components will become e-waste. This indicates that personal computer components were in high demand due to their low cost and will be disposed of more rapidly when replaced by new computer equipment. Also, 57 percent of the respondents discarded their computer e-waste by throwing it into the garbage bin or by dumping it. The model simulated using the Coloured Petri net modelling tool showed that the e-waste dynamics is a forward sequential process in the form of a pipeline, meaning that the recovery of saleable materials from e-waste occurs in identifiable discrete stages, and indicating that e-waste will continue to accumulate and grow in volume with time.

Keywords: Coloured Petri net, computational modelling, electronic waste, electronic waste process dynamics

Procedia PDF Downloads 163
4035 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances

Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann

Abstract:

The growing market for Microsystems (MST) and Micro-Electromechanical Systems (MEMS) is driving the search for alternative manufacturing techniques to microelectronics-based technologies, which are generally expensive and time-consuming. Hot embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes. Thus, other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on the use of a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric and separated by a distance of a few micrometers (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which remove material from the workpiece. In order to produce master models with high resolution and smooth surfaces, it is necessary to control the discharge mechanism well. However, several problems are encountered, such as the randomness of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effect of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, has been studied. This analysis helps to improve the machining performance, such as the workpiece surface condition and the lateral crater gap.

Keywords: craters, electrical discharges, micro-electrical discharge machining, microsystems

Procedia PDF Downloads 71
4034 Approach for Evaluating Wastewater Reuse Options in Agriculture

Authors: Manal Elgallal, Louise Fletcher, Barbara Evans

Abstract:

Water scarcity is a growing concern in many arid and semi-arid countries. Increasing water scarcity threatens economic development and the sustainability of human livelihoods, as well as the environment, especially in developing countries. Globally, agriculture is the largest water-consuming sector, accounting for approximately 70% of all freshwater extraction. Growing competition between agricultural uses and higher-economic-value urban and industrial uses of high-quality freshwater supplies, especially in regions where water scarcity is a major problem, will increase the pressure on this precious resource. In these circumstances, wastewater may provide a reliable source of water for agriculture and enable freshwater to be exchanged for more economically valuable purposes. Concern regarding the risks posed by microbial and toxic components to human health and environmental quality is a serious obstacle to wastewater reuse, particularly in agriculture. Although powerful approaches and tools for microbial risk assessment and management for the safe use of wastewater are now available, few studies have attempted to provide any mechanism to quantitatively assess and manage the environmental risks resulting from reusing wastewater. In seeking pragmatic solutions for sustainable wastewater reuse, there remains a lack of research combining health and environmental risk assessment and management with economic analysis in order to quantitatively weigh costs, benefits, and risks and rank alternative reuse options. This study seeks to enhance the effective reuse of wastewater for irrigation in arid and semi-arid areas. The outcome of the study is an evaluation approach that can be used to assess different reuse strategies and to determine the scale at which treatment alternatives and interventions are possible, feasible, and cost-effective, in order to optimise the trade-offs between protecting public health and the environment and preserving the substantial benefits of reuse.

Keywords: environmental risks, management, life cycle costs, wastewater irrigation

Procedia PDF Downloads 259
4033 Development and Characterization of a Composite Material for Ceiling Board Construction Applications in Ethiopia

Authors: Minase Yitbarek Mengistu, Abrham Melkamu, Dawit Yisfaw, Bisrat Belihu, Abdulhakim Lalega

Abstract:

This research aimed at reducing and recycling waste paper and sawdust from our environment, thereby reducing the environmental pollution resulting from the management and disposal of these waste materials. Some mechanical properties of composite ceiling board materials made from waste paper, sawdust, and pineapple leaf fibers were investigated to determine their suitability for use in low-cost construction work. The ceiling board was produced from waste paper, sawdust chips, and pineapple leaf fibers by manual mechanical bonding techniques, using dissolved polystyrene films as a binding agent. The results showed water absorption values of between 6% and 8.1%, as well as density values between 500 kg/m³ and 611.1 kg/m³. The best-performing mix was a ratio of 25% pineapple leaf fiber, 40% sawdust, 25% binder, and 10% waste paper. The composite ceiling boards were successfully nailed with firm grips. These values were compared with those of conventional ceiling boards, and it was observed that these composite materials can be used for internal low-cost construction work and for insulation (acoustic and thermal) performance. It is highly recommended that small and medium enterprises be encouraged to venture into waste recycling and the production of these composite ceiling materials to create jobs for the skilled and unskilled labor that is locally available.

Keywords: composite material, environment, textile, ceiling board

Procedia PDF Downloads 67
4032 Internet-Of-Things and Ergonomics, Increasing Productivity and Reducing Waste: A Case Study

Authors: V. Jaime Contreras, S. Iliana Nunez, S. Mario Sanchez

Abstract:

Inside a manufacturing facility, we can find innumerable automatic and manual operations, all of which are relevant to the production process. Some of these processes add more value to the products than others. Manual operations tend to add value to the product since they are found in the final assembly area or the final operations of the process. In these areas, a mistake or accident can increase the cost of waste exponentially. To reduce or mitigate these costly mistakes, one approach is to rely on automation to eliminate the operator from the production line, which requires a hefty investment and the development of specialized machinery. In our approach, the operator is at the center of the solution, supported by sufficient and adequate instrumentation, real-time reporting, and ergonomics. Efficiency and reduced cycle time can be achieved through the integration of Internet-of-Things (IoT) ready technologies into assembly operations to enhance the ergonomics of the workstations. Augmented reality visual aids, RFID-triggered personalized workstation dimensions, and real-time data transfer and reporting can help achieve these goals. In this case study, a standard work cell will be used for real-life data acquisition, and simulation software will be used to extend the data points beyond the test cycle. Three comparison scenarios will run in the work cell. Each scenario will introduce one dimension of the ergonomics to measure its impact independently. Furthermore, separate tests will determine the limitations of the technology and provide a reference for operating costs and the investment required. With the ability to monitor costs, productivity, cycle time, and scrap/waste in real time, the ROI (return on investment) can be determined for the different levels of integration. This case study helps to show that ergonomics in assembly lines can make a significant impact when IoT technologies are introduced. Ergonomics can effectively reduce waste and increase productivity with minimal investment compared with setting up a custom machine.

Keywords: augmented reality visual aids, ergonomics, real-time data acquisition and reporting, RFID triggered workstation dimensions

Procedia PDF Downloads 212
4031 Assessment of the Standard of Referrals for Extraction of Carious Primary Teeth under General Anaesthetic

Authors: Emma Carr, Jennifer Morrison, Peter Walker

Abstract:

Background: Due to COVID-19, there was a significant reduction in the number of children being treated under general anaesthetic (GA) within the health board, which led to a backlog of referrals. The referrals were being triaged and added to a waiting list in order of priority, determined by the information given. By implementing a checklist, it is anticipated that at least 70% of referrals will contain the majority of the information required to effectively prioritise patients. The gold standard, as defined in ‘Guidelines For The Management Of Children Referred For Dental Extractions Under General Anaesthesia’, indicates that all referrals should mention: (i) Inability of the child to cooperate, (ii) Previously tried anxiety management techniques, (iii) Existence of psychological disorders, (iv) Presence of acute dental infection, (v) Requirement for extractions in multiple quadrants. Method: 130 referrals were examined over three months and compared to the recommended standard. A letter was emailed to referring dentists within Ayrshire & Arran outlining the recommended information to be included in a referral. A second round of data collection was then carried out, which involved an examination of 105 referrals. Results: The first round revealed that only 28% of referrals mentioned at least four of the defined standards outlined above. Following the issue of a checklist to all dentists, this increased to 72%. Conclusion: As many of the children referred for extractions under GA have suffered pain and infection because of dental caries, it is important that delays to treatment are minimised where possible. The implementation of a standardised checklist has enabled more effective prioritisation of patients.

Keywords: caries, dentistry, general anaesthetic, paediatrics

Procedia PDF Downloads 103
4030 Raising Test of English for International Communication (TOEIC) Scores through Purpose-Driven Vocabulary Acquisition

Authors: Edward Sarich, Jack Ryan

Abstract:

In contrast to learning new vocabulary incidentally in one’s first language, foreign language vocabulary is often acquired purposefully, because a lack of natural exposure requires it to be studied in an artificial environment. It follows then that foreign language vocabulary may be more efficiently acquired if it is purpose-driven, or linked to a clear and desirable outcome. The research described in this paper relates to the early stages of what is seen as a long-term effort to measure the effectiveness of a methodology for purpose-driven foreign language vocabulary instruction, specifically by analyzing whether directed studying from high-frequency vocabulary lists leads to an improvement in Test of English for International Communication (TOEIC) scores. The research was carried out in two sections of a first-year university English composition class at a small university in Japan. The results seem to indicate that purposeful study from relevant high-frequency vocabulary lists can contribute to raising TOEIC scores and that the test preparation methodology used in this study was thought by students to be beneficial in helping them to prepare to take this high-stakes test.

Keywords: corpus vocabulary, language assessment, second language vocabulary acquisition, TOEIC test preparation

Procedia PDF Downloads 144
4029 Human Relationships in the Virtual Classrooms as Predictors of Students' Academic Resilience and Performance

Authors: Eddiebal P. Layco

Abstract:

The purpose of this study is to describe students' virtual classroom relationships in terms of their relationships with their peers and teachers, their academic resilience, and their performance. Further, the researcher examines whether these virtual classroom relations predict students' resilience and performance in their academics. The data were collected from 720 junior and senior high school (grade 7 to 12) students in selected state universities and colleges (SUCs) in Region III offering online or virtual classes during S.Y. 2020-2021. Results revealed that virtual classroom relationships, such as teacher-student and peer relationships, predict academic resilience and performance. This implies that students' academic relations with their teachers and peers have something to do with their ability to bounce back and beat the odds amidst the challenges they face in the online or virtual learning environment. These virtual relationships also significantly influence their academic performance. Adequate teacher support and positive peer relations may lead to enhanced academic resilience, which may in turn promote a meaningful and fulfilled academic life. The results suggest that teachers should develop their students' academic resiliency and maintain good relationships in the classroom, since these result in academic success.

Keywords: virtual classroom relationships, teacher-pupil relationship, peer-relationship, academic resilience, academic performance

Procedia PDF Downloads 148
4028 The Utilisation of Storytelling as a Therapeutic Intervention by Educational Psychologists to Address Behavioural Challenges Relating to Grief of Adolescent Clients

Authors: Laila Jeebodh Desai

Abstract:

Storytelling as a therapeutic intervention entails the narrating of events by externalising emotions, thoughts and responses to life-changing events such as loss and grief. This creates the opportunity for clients to engage with psychologists by projecting various beliefs and challenges, such as grief, through a range of therapeutic modalities. This study conducts an inquiry into the ways in which storytelling can be utilised by educational psychologists with adolescent clients to address behavioural challenges relating to grief. This qualitative study therefore aims to facilitate an understanding of the use and benefits of storytelling as a therapeutic intervention. This has been achieved by examining interviews with four educational psychologists who have utilised storytelling as a therapeutic intervention with adolescent clients to overcome challenges with grief. The participants (educational psychologists) discussed case studies during interviews, which provided evidence of their practical administration of storytelling as a therapeutic intervention incorporating integrated theoretical approaches through the use of blended therapeutic techniques. Behavioural challenges relating to grief were also predominant in the case study information provided by the participants. The participants further confirmed that the term ‘grief’ included different types of loss that were experienced among adolescent clients. The implications and recommendations of the findings encouraged the utilisation of storytelling as a therapeutic intervention with adolescent clients in addressing behavioural challenges related to grief, based on the outcome of the case studies discussed by the participants.

Keywords: storytelling, therapeutic intervention, adolescents, grief

Procedia PDF Downloads 493
4027 Perception and Usage of Academic Social Networks among Scientists: A Cross-Sectional Study of North Indian Universities

Authors: Anita Chhatwal

Abstract:

Purpose: The purpose of this paper is to evaluate and investigate the extent of usage of Academic Social Networking Websites (ASNs) by Science faculty members across universities of North India, viz. Panjab University, Punjabi University, and the University of Delhi, Delhi. Design/Methodology/Approach: The present study is based upon primary data collected from 81 science faculty participants from three universities of North India. A questionnaire was used as the survey instrument. The study is descriptive and research-based, investigating the ASNs popular amongst the participants from the three sample universities, the purposes for which they use them, and the problems they encounter while using ASNs. Findings: The findings of the study revealed that the majority of the participants were using ASNs for their academic needs. It was observed that the majority of the participants (78%) used ASNs to access scientific papers, while 73.8% of the participants used them to share their research publications. ResearchGate (60.5%) and Google Scholar (59.7%) were the two most preferred and widely used ASNs among the participants. A critical analysis of the data shows that laptops (86.3%) emerged as the major tool for accessing ASNs. A shortage of computers was found to be the chief obstacle to accessing ASNs. The results also demonstrate that 56.3% of participants suggested the conduct of seminars and training as the most effective method to increase awareness of ASNs. Research Limitations/Implications: The study covered 81 faculty members (Assistant Professors) from 15 Science teaching departments across the three sample universities of North India. The findings of this study will help the Government of India to regulate ASNs and simultaneously make efforts to develop and enhance ASN usage among faculty, researchers, and students. The present study will add to the existing library and information science literature and will also be advantageous for information professionals. Originality/Value: This study is an original survey, based on primary data, investigating the usage of ASNs by academia. It will be useful for research scholars, academicians, and students all over the world.

Keywords: academic social networks, awareness and usage, North India, scholarly communication, web 2.0

Procedia PDF Downloads 111
4026 Advanced Combinatorial Method for Solving Complex Fault Trees

Authors: José de Jesús Rivero Oliva, Jesús Salomón Llanes, Manuel Perdomo Ojeda, Antonio Torres Valle

Abstract:

Combinatorial explosion is a problem common to both predominant methods for solving fault trees: the Minimal Cut Set (MCS) approach and the Binary Decision Diagram (BDD). High memory consumption impedes the complete solution of very complex fault trees. Only approximate, non-conservative solutions are possible in these cases, using truncation or other simplification techniques. The paper proposes a method (CSolv+) for solving complex fault trees without any possibility of combinatorial explosion. Each individual MCS is immediately discarded after its contribution to the basic events' importance measures and to the Top gate Upper Bound Probability (TUBP) has been accounted for. An estimation of the Top gate Exact Probability (TEP) is also provided. Therefore, running on a computer cluster, CSolv+ will guarantee the complete solution of complex fault trees. It was successfully applied to 40 fault trees from the Aralia fault tree database, evaluating the top gate probability, the 1000 Significant MCSs (SMCS), and the Fussell-Vesely, RRW, and RAW importance measures for all basic events. The highly complex fault tree nus9601 was solved with truncation probabilities from 10⁻²¹ to 10⁻²⁷ just to limit the execution time. The solution corresponding to 10⁻²⁷ evaluated 3,530,592,796 MCSs in 3 hours and 15 minutes.
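The internals of CSolv+ are not given in the abstract, but the quantity it accumulates per cut set is the standard one. The sketch below shows, under hypothetical basic-event probabilities and cut sets, how the Top gate Upper Bound Probability and the rare-event approximation are accumulated one MCS at a time, so each MCS can be discarded immediately after its contribution is added.

```python
# Standard minimal-cut-set quantification (not CSolv+ itself): each MCS contributes
# the product of its basic-event probabilities, and the top-gate upper bound is
# TUBP = 1 - prod_i (1 - P(MCS_i)), accumulated one cut set at a time.
from math import prod

basic_events = {"B1": 1e-3, "B2": 2e-3, "B3": 5e-4}            # hypothetical probabilities
minimal_cut_sets = [{"B1", "B2"}, {"B2", "B3"}, {"B1", "B3"}]  # hypothetical MCS list

tubp_complement = 1.0
rare_event_sum = 0.0
for mcs in minimal_cut_sets:
    p_mcs = prod(basic_events[e] for e in mcs)
    tubp_complement *= (1.0 - p_mcs)   # update, then the MCS itself can be discarded
    rare_event_sum += p_mcs

print("TUBP (upper bound):", 1.0 - tubp_complement)
print("Rare-event approximation:", rare_event_sum)
```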

Keywords: system reliability analysis, probabilistic risk assessment, fault tree analysis, basic events importance measures

Procedia PDF Downloads 37
4025 Conjugated Chitosan-Carboxymethyl-5-Fluorouracil Nanoparticles for Skin Delivery

Authors: Mazita Mohd Diah, Anton V. Dolzhenko, Tin Wui Wong

Abstract:

Nanoparticles, being small with a large specific surface area, increase solubility, enhance bioavailability, improve controlled release, and enable precision targeting of the entrapped compounds. In this study, chitosan, a polymeric permeation enhancer, was conjugated to a polar pro-drug, carboxymethyl-5-fluorouracil (CMFU), to increase skin drug permeation. The chitosan-CMFU conjugate was synthesized using a chemical conjugation process through a succinate linker. It was then transformed into nanoparticles via a spray-drying method. The conjugation was elucidated using Fourier Transform Infrared and Proton Nuclear Magnetic Resonance techniques. The nanoparticle size, size distribution, zeta potential, drug content, and skin permeation and retention profiles were characterized. The conjugation was indicated in the ¹H NMR spectrum of chitosan-CMFU by new peaks at δ = 4.184 ppm (singlet, 2H for CH₂) and 7.676-7.688 ppm (doublet, 1H for C6), attributed to CMFU. The nanoparticles had a particle size of 93.97 ± 35.11 nm, a polydispersity index of 0.40 ± 0.14, a zeta potential of +18.25 ± 2.95 mV, and a drug content of 6.20 ± 1.98% w/w. Almost 80% w/w of the CMFU in nanoparticle form permeated through the skin in 24 hours, and close to 50% w/w of the permeation occurred in the first 1-2 hours. Without conjugation to chitosan and nanoparticulation, less than 40% w/w of CMFU permeated through the skin in 24 hours. Skin drug retention was likewise higher with chitosan-CMFU nanoparticles (15.34 ± 5.82% w/w) than with CMFU alone (2.24 ± 0.57% w/w). Through conjugation with the chitosan permeation enhancer and processing into nanogeometry, CMFU had its degree of skin permeation and retention promoted.

Keywords: carboxymethyl-5-fluorouracil, chitosan, conjugate, skin permeation, skin retention

Procedia PDF Downloads 360
4024 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g., SIFT, HOG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) carry richer information but lack geometric invariance. For scene classification, scenes contain scattered objects of different sizes, categories, layouts, numbers, and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in different scale ranges. A scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
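The merging step described above can be sketched in a few lines: per-scale global descriptors are normalised separately and then average-pooled into a single vector. The sketch assumes the CNN feature extraction and GMM-based Fisher encoding have already produced one descriptor per scale (the dimensions and the power-normalisation variant are illustrative, not taken from the paper).

```python
# Sketch of the scale-wise normalisation and pooling step described above (illustrative).
# Assumes per-scale global descriptors (e.g. Fisher Vectors) have already been computed.
import numpy as np

rng = np.random.default_rng(0)
per_scale_fvs = [rng.standard_normal(4096) for _ in range(3)]  # 3 scales, hypothetical dim

def power_l2_normalise(v, alpha=0.5):
    """Signed power normalisation followed by L2 normalisation (common for Fisher Vectors)."""
    v = np.sign(v) * np.abs(v) ** alpha
    return v / (np.linalg.norm(v) + 1e-12)

# Normalising each scale separately balances scales that produced different amounts of
# local features; average pooling then merges them into one representation.
merged = np.mean([power_l2_normalise(fv) for fv in per_scale_fvs], axis=0)
final = merged / (np.linalg.norm(merged) + 1e-12)   # fed to a linear SVM downstream
print(final.shape)                                  # (4096,)
```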

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 324
4023 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research aims to present a predictive data mining model for the accurate diagnosis of acute appendicitis in patients, for the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is the most common disease requiring timely, accurate diagnosis and surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test, or imaging examination accurately confirms the diagnosis of acute appendicitis in all cases. This contributes to increased morbidity and negative appendectomy rates. In this study, the authors propose to generate an accurate model for predicting patients with acute appendicitis which is based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems for osteoporosis, diabetes, and heart disease obtained from the UCI repository and other data sources.
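As a simple baseline illustration of the classification stage named in the keywords (not the authors' pipeline of ABC segmentation, fuzzy processing, and dynamic-programming attribute reduction), a decision tree can be trained on a handful of clinical features. The feature names and all values below are synthetic placeholders, not the study's data.

```python
# Baseline illustration of the classification stage only; the segmentation, fuzzy
# processing and attribute-reduction steps described above are not reproduced.
# All feature values and labels below are synthetic placeholders, not study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Columns: age, temperature, white blood cell count, CRP (hypothetical features)
X = np.column_stack([rng.integers(5, 80, 200),
                     rng.normal(37.5, 0.8, 200),
                     rng.normal(12.0, 4.0, 200),
                     rng.normal(40.0, 30.0, 200)])
y = (X[:, 2] > 11) & (X[:, 3] > 30)        # synthetic "appendicitis" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("Held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```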

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 344
4022 Enhanced Retrieval-Augmented Generation (RAG) Method with Knowledge Graph and Graph Neural Network (GNN) for Automated QA Systems

Authors: Zhihao Zheng, Zhilin Wang, Linxin Liu

Abstract:

In the research of automated knowledge question-answering systems, accuracy and efficiency are critical challenges. This paper proposes a knowledge graph-enhanced Retrieval-Augmented Generation (RAG) method, combined with a Graph Neural Network (GNN) structure, to automatically determine the correctness of knowledge competition questions. First, a domain-specific knowledge graph was constructed from a large corpus of academic journal literature, with key entities and relationships extracted using Natural Language Processing (NLP) techniques. Then, the RAG method's retrieval module was expanded to simultaneously query both text databases and the knowledge graph, leveraging the GNN to further extract structured information from the knowledge graph. During answer generation, contextual information provided by the knowledge graph and GNN is incorporated to improve the accuracy and consistency of the answers. Experimental results demonstrate that the knowledge graph and GNN-enhanced RAG method perform excellently in determining the correctness of questions, achieving an accuracy rate of 95%. Particularly in cases involving ambiguity or requiring contextual information, the structured knowledge provided by the knowledge graph and GNN significantly enhances the RAG method's performance. This approach not only demonstrates significant advantages in improving the accuracy and efficiency of automated knowledge question-answering systems but also offers new directions and ideas for future research and practical applications.
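A minimal, self-contained sketch of the hybrid retrieval idea described above is given below: the query is sent to both a toy text index and a toy knowledge-graph triple store, and the retrieved context is assembled into a prompt for a generator. The GNN over the graph and the generation model itself are outside this sketch, and the corpus, triples, and scoring are illustrative stand-ins, not the authors' system.

```python
# Minimal sketch of hybrid retrieval (text index + knowledge-graph triples); the GNN
# encoding of the graph and the answer-generation model are not included.
import re

corpus = {
    "doc1": "graph neural networks aggregate neighbour features",
    "doc2": "retrieval augmented generation grounds answers in retrieved text",
}
kg_triples = [  # (head, relation, tail) -- toy knowledge graph
    ("RAG", "combines", "retrieval"),
    ("RAG", "combines", "generation"),
    ("GNN", "operates_on", "graphs"),
]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve_text(query, k=1):
    """Rank documents by token overlap with the query (a stand-in for dense retrieval)."""
    q = tokens(query)
    ranked = sorted(corpus.values(), key=lambda doc: -len(q & tokens(doc)))
    return ranked[:k]

def retrieve_triples(query):
    """Return knowledge-graph triples whose head or tail entity appears in the query."""
    q = tokens(query)
    return [t for t in kg_triples if t[0].lower() in q or t[2].lower() in q]

query = "How does RAG use retrieval?"
context = retrieve_text(query) + [" ".join(t) for t in retrieve_triples(query)]
prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"
print(prompt)  # this assembled prompt would be passed to the generation model
```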

Keywords: knowledge graph, graph neural network, retrieval-augmented generation, NLP

Procedia PDF Downloads 29
4021 Effect of Non-metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: Case Study

Authors: Tomasz Dubiel, Tadeusz Balawender, Miroslaw Osetek

Abstract:

The paper presents the influence of non-metallic inclusions on the multi-stage forging process and on the mechanical properties of a dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of the mechanical property class. In order to assess the defect, an X-ray examination and a metallographic examination of the defective bolt were performed, revealing an exogenous non-metallic inclusion. The size of the defect on the cross-section was 0.531 mm in width and 1.523 mm in length; the defect was continuous along the entire axis of the bolt. In the analysis, an FEM simulation of the multi-stage forging process was designed, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The process of defect propagation due to material upset in the head area was analyzed. The final forging stage, shaping the dodecagonal socket and filling the flange area, was studied in particular. The defect was observed to significantly reduce the effective cross-section as a result of its expansion perpendicular to the axis of the bolt. The mechanical properties of products with and without the defect were analyzed. In the first step, the hardness test confirmed that the value required for mechanical class 8.8 was obtained for both bolt types. In the second step, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile strength tests were confirmed by metallographic tests and by the FEM simulation with the inclusion spreading perpendicularly in the area of the head. The bolts were damaged directly under the bolt head, which is inconsistent with the requirements of ISO 898-1. It has been shown that non-metallic inclusions oriented along the axis of the bolt can directly cause loss of functionality, and these defects should be detected even before assembly into the machine element.

Keywords: continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head

Procedia PDF Downloads 152
4020 Simulation Study of Enhanced Terahertz Radiation Generation by Two-Color Laser Plasma Interaction

Authors: Nirmal Kumar Verma, Pallavi Jha

Abstract:

Terahertz (THz) radiation generation by the propagation of two-color laser pulses in plasma is an active area of research due to its potential applications in various areas, including security screening, material characterization, and spectroscopic techniques. Due to its non-ionizing nature and its ability to penetrate several millimeters, THz radiation is suitable for the diagnosis of cancerous cells. Traditional THz emitters, such as optically active crystals, are subject to material breakdown when irradiated with high-power laser radiation and hence have low conversion efficiencies. This problem is not encountered in laser-plasma-based THz radiation sources. The present paper is devoted to a simulation study of enhanced THz radiation generation by the propagation of two-color, linearly polarized laser pulses through magnetized plasma. The two orthogonally polarized laser pulses co-propagate along the same direction. The direction of the external magnetic field is such that one of the two laser pulses propagates in the ordinary mode, while the other propagates in the extraordinary mode through the homogeneous plasma. A transverse electromagnetic wave with a frequency in the THz range is generated due to the presence of the static magnetic field. It is observed that larger-amplitude terahertz radiation can be generated by mixing the ordinary and extraordinary modes of two-color laser pulses than with a single laser pulse propagating in the extraordinary mode.

Keywords: two-color laser pulses, terahertz radiation, magnetized plasma, ordinary and extraordinary mode

Procedia PDF Downloads 298
4019 Tunable Control of Therapeutics Release from the Nanochannel Delivery System (nDS)

Authors: Thomas Geninatti, Bruno Giacomo, Alessandro Grattoni

Abstract:

Nanofluidic devices have been investigated for over a decade as promising platforms for the controlled release of therapeutics. The nanochannel drug delivery system (nDS), a membrane fabricated with high-precision silicon techniques, is capable of zero-order release of drugs by exploiting diffusion transport at the nanoscale, which originates from the interactions between molecules and the nanochannel surfaces; it has shown flexible sustained release in vitro and in vivo over periods ranging from weeks to months. To improve this implantable bionanotechnology and create a system that possesses the key features for achieving suitable release of therapeutics, the next generation of the nDS has been created. Platinum electrodes are integrated by e-beam deposition onto both surfaces of the membrane, allowing low-voltage (<2 V), active temporal control of drug release through modulation of electrostatic potentials at the inlet and outlet of the membrane's fluidic channels. Hence, tunable administration of drugs is ensured by the nanochannel drug delivery system. The membrane will be incorporated into a PEEK implantable capsule, which will include the drug reservoir, control hardware, and an RF system to allow suitable therapeutic regimens in real time. Therefore, this new nanotechnology offers tremendous potential solutions for managing chronic diseases such as cancer, heart disease, circadian dysfunction, pain, and stress.

Keywords: nanochannel membrane, drug delivery, tunable release, personalized administration, nanoscale transport, biomems

Procedia PDF Downloads 308
4018 Importance of Prostate Volume, Prostate Specific Antigen Density and Free/Total Prostate Specific Antigen Ratio for Prediction of Prostate Cancer

Authors: Aliseydi Bozkurt

Abstract:

Objectives: Benign prostatic hyperplasia (BPH) is the most common benign disease, and prostate cancer (PC) the most common malignant disease, of the prostate gland. Transrectal ultrasound-guided biopsy (TRUS-bx) is one of the most important diagnostic tools in PC diagnosis. Identifying men at increased risk of having biopsy-detectable prostate cancer should consider prostate-specific antigen density (PSAD), the free/total (f/t) PSA ratio, and an estimate of prostate volume. Method: We retrospectively studied 269 patients who had a prostate-specific antigen (PSA) value of 4 or who had a suspicious rectal examination at any PSA level and who underwent TRUS-bx between January 2015 and June 2018 in our clinic. TRUS-bx was performed by 12 experienced urologists using 12 quadrants. Prostate volume was calculated prior to biopsy, together with TRUS. Patients were classified as malignant or benign according to the final pathology. Age, PSA value, prostate volume on transrectal ultrasonography, core biopsy, biopsy pathology result, number of cancer cores, and Gleason score were evaluated in the study. The success of PV, PSAD, and f/t PSA in predicting prostate cancer was compared in all patients and in those with PSA 2.5-10 ng/mL and 10.1-30 ng/mL. Results: In patients with PSA 2.5-10 ng/mL, the PV cut-off value was 43.5 mL (n=42 with PV < 43.5 mL and n=102 with PV > 43.5 mL), while in those with PSA 10.1-30 ng/mL the prostate volume (PV) cut-off value was 61.5 mL (n=31 with PV < 61.5 mL and n=36 with PV > 61.5 mL). In the group with PSA 2.5-10 ng/mL, total PSA values were lower in patients with PV < 43.5 mL (6.0 ± 1.3 vs 6.7 ± 1.7); this difference was nearly significant (p=0.043). In the group with PSA 10.1-30 ng/mL, no significant difference was found (p=0.117) in total PSA values between patients with PV < 61.5 mL and those with PV > 61.5 mL. In the group with PSA 2.5-10 ng/mL, the f/t PSA value was significantly lower in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.21 ± 0.09 vs 0.26 ± 0.09, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the f/t PSA value was significantly lower in patients with PV < 61.5 mL (0.16 ± 0.08 vs 0.23 ± 0.10, p=0.003). In the group with PSA 2.5-10 ng/mL, the PSAD value was significantly higher in patients with PV < 43.5 mL than in those with PV > 43.5 mL (0.17 ± 0.06 vs 0.10 ± 0.03, p < 0.001). Similarly, in the group with PSA 10.1-30 ng/mL, the PSAD value was significantly higher in patients with PV < 61.5 mL (0.47 ± 0.23 vs 0.17 ± 0.08, p < 0.001). The biopsy results show that, in the group with PSA 2.5-10 ng/mL, cancer was detected in 29 of the patients with PV < 43.5 mL (69%) and not detected in 13 patients (31%), whereas cancer was found in 19 patients with PV > 43.5 mL (18.6%) and not detected in 83 patients (81.4%) (p < 0.001). In the group with PSA 10.1-30 ng/mL, cancer was observed in 21 patients with PV < 61.5 mL (67.7%) and not seen in 10 patients (32.3%), whereas cancer was found in 5 patients with PV > 61.5 mL (13.9%) and not observed in 31 patients (86.1%) (p < 0.001). Conclusions: Identifying men at increased risk of having biopsy-detectable prostate cancer should consider PSA, the f/t PSA ratio, and an estimate of prostate volume. Prostate volume was found to be lower in PC.
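The two derived quantities used throughout the abstract are simple ratios: PSAD is serum PSA divided by prostate volume, and the f/t ratio is free PSA divided by total PSA. A small worked example is sketched below; the patient values are hypothetical, and only the volume cut-offs follow the cohort split reported above.

```python
# Worked example of the derived quantities used above: PSA density (PSAD = PSA / volume)
# and the free/total PSA ratio. Patient values are hypothetical; the volume cut-offs follow
# the cohort split in the abstract (43.5 mL for PSA 2.5-10 ng/mL, 61.5 mL for 10.1-30 ng/mL).
def psa_density(total_psa_ng_ml, prostate_volume_ml):
    return total_psa_ng_ml / prostate_volume_ml

def free_to_total_ratio(free_psa_ng_ml, total_psa_ng_ml):
    return free_psa_ng_ml / total_psa_ng_ml

total_psa, free_psa, volume = 6.2, 1.1, 38.0   # hypothetical patient in the 2.5-10 ng/mL band
print("PSAD:", round(psa_density(total_psa, volume), 3))               # ~0.163
print("f/t PSA:", round(free_to_total_ratio(free_psa, total_psa), 3))  # ~0.177
print("Below the 43.5 mL volume cut-off:", volume < 43.5)
```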

Keywords: prostate cancer, prostate volume, prostate specific antigen, free/total PSA ratio

Procedia PDF Downloads 142
4017 Philosophical Foundations of Education in the Kazakh Language Aided by Communicative Methods

Authors: Duisenova Marzhan

Abstract:

This paper considers, from a philosophical point of view, interactive technology and tiered development in teaching the Kazakh language to primary school pupils through the method of linguistic communication, together with the content and teaching methods formed in the education system. The values are determined by the formation of new practical approaches that could lead to a qualitatively new level and to solving the problem. In forming the communicative competence of elementary school students, attention should also be paid to other competencies. This helps to understand the motives and needs of students' socialization, the development of their cognitive abilities, and their participation in the language relations arising from different situations. Communicative competence is the pupils' own potential for creative language activity. In this article, the communicative method of teaching the Kazakh language in primary school is presented. The purposes of the communicative method include personal development, effective psychological development of the child, self-education, expansion and growth of language skills and vocabulary, socialization of children, adoption of the laws of life in the social environment, and development of the vocabulary richness of the language, which forms the erudition needed to ensure continued improvement of the child's education.

Keywords: communicative, culture, training, process, method, primary, competence

Procedia PDF Downloads 335
4016 An Analysis of Uncoupled Designs in Chicken Egg

Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi

Abstract:

Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature's mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovations in a variety of fields, such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge whether a design is good. This paper analyzes the design aspects of one of nature's amazing objects: the chicken egg. The functional requirements (FRs) of the components of the object are tabulated and mapped onto the nature-chosen design parameters (DPs). The 'independence axiom' of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg's design is good (i.e., an uncoupled design) or bad (i.e., a coupled design). The analysis revealed that the egg's design is a good, i.e., uncoupled, design. This approach can be applied to any of nature's artifacts to judge whether their design is good or bad, which makes the methodology valuable for biomimicry studies. The approach can also be a very useful teaching aid when considering design in biology and bio-inspired innovation.
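The independence-axiom check can be carried out mechanically from the FR-DP design matrix: a diagonal matrix indicates an uncoupled design, a triangular one a decoupled design, and anything else a coupled design. The sketch below illustrates this check on a hypothetical 3x3 matrix; the paper's actual egg FR-DP table is not reproduced here.

```python
# Sketch of the independence-axiom check: classify an FR-DP design matrix as
# uncoupled (diagonal), decoupled (triangular), or coupled (anything else).
# The 3x3 matrix below is hypothetical, not the paper's egg FR-DP mapping.
import numpy as np

def classify_design(A):
    A = np.asarray(A, dtype=bool)
    off_diag = A & ~np.eye(A.shape[0], dtype=bool)
    if not off_diag.any():
        return "uncoupled (good design)"
    lower, upper = np.tril(A, -1), np.triu(A, 1)
    # Checks for matrices already in triangular form (FR/DP reordering is not attempted here).
    if not lower.any() or not upper.any():
        return "decoupled (acceptable if DPs are fixed in sequence)"
    return "coupled (bad design)"

design_matrix = [[1, 0, 0],   # rows = FRs, columns = DPs; 1 marks "this DP affects this FR"
                 [0, 1, 0],
                 [0, 0, 1]]
print(classify_design(design_matrix))   # -> uncoupled (good design)
```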

Keywords: uncoupled design, axiomatic design, nature design, design evaluation

Procedia PDF Downloads 168
4015 Modeling of Maximum Rainfall Using Poisson-Generalized Pareto Distribution in Kigali, Rwanda

Authors: Emmanuel Iyamuremye

Abstract:

Extreme rainfall events have caused significant damage to agriculture, ecology, and infrastructure, disruption of human activities, injury, and loss of life. They also have significant social, economic, and environmental consequences because they considerably damage urban as well as rural areas. Early detection of extreme maximum rainfall helps to implement strategies and measures before such events occur, hence mitigating their consequences. Extreme value theory has been used widely in modeling extreme rainfall and in various disciplines, such as financial markets, the insurance industry, and failure cases. Climatic extremes have been analyzed using either the generalized extreme value (GEV) or the generalized Pareto (GP) distribution, which provides evidence of the importance of modeling extreme rainfall in different regions of the world. In this paper, we focus on the peaks-over-threshold approach, in which the Poisson-generalized Pareto distribution is considered the proper distribution for the study of the exceedances. The research considers the use of the generalized Pareto (GP) distribution with a Poisson model for arrivals to describe peaks over a threshold. Statistical techniques were used to fit models for predicting extreme rainfall in Kigali. The results indicate that the proposed Poisson-GP distribution provides a better fit to maximum monthly rainfall data. Further, the Poisson-GP models are able to estimate various return levels. The research also found a slow increase in the return levels of maximum monthly rainfall for higher return periods, with the associated intervals becoming increasingly wide as the return period increases.
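The peaks-over-threshold workflow described above can be sketched in a few lines: pick a threshold, fit a generalized Pareto distribution to the exceedances, and compute return levels from the standard POT formula. The data below are synthetic, not the Kigali rainfall series, and the threshold choice is simply the 95th percentile for illustration.

```python
# Sketch of the peaks-over-threshold (POT) workflow: fit a generalized Pareto distribution
# (GPD) to exceedances and compute return levels. The data are synthetic, not the Kigali series.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
monthly_max = rng.gamma(shape=2.0, scale=40.0, size=30 * 12)   # 30 years of monthly maxima (mm)

u = np.quantile(monthly_max, 0.95)          # threshold choice (here simply the 95th percentile)
excess = monthly_max[monthly_max > u] - u   # exceedances over the threshold

xi, _, sigma = genpareto.fit(excess, floc=0)   # shape xi and scale sigma, location fixed at 0
lam = len(excess) / 30.0                       # mean number of exceedances per year

def return_level(m_years, u, xi, sigma, lam):
    """m-year return level under the POT model: u + (sigma/xi) * ((lam*m)**xi - 1)."""
    if abs(xi) < 1e-8:                         # exponential (xi -> 0) limit
        return u + sigma * np.log(lam * m_years)
    return u + (sigma / xi) * ((lam * m_years) ** xi - 1.0)

for m in (10, 50, 100):
    print(f"{m}-year return level: {return_level(m, u, xi, sigma, lam):.1f} mm")
```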

Keywords: exceedances, extreme value theory, generalized Pareto distribution, Poisson generalized Pareto distribution

Procedia PDF Downloads 132
4014 Extraction of Amorphous SiO₂ From Equisetum Arvense Plant for Synthesis of SiO₂/Zeolitic Imidazolate Framework-8 Nanocomposite and Its Photocatalytic Activity

Authors: Babak Azari, Afshin Pourahmad, Babak Sadeghi, Masuod Mokhtari

Abstract:

In this work, Equisetum arvense plant extract was used to prepare amorphous SiO₂. To prepare the SiO₂/zeolitic imidazolate framework-8 (ZIF-8) nanocomposite by the solvothermal method, the synthesized SiO₂ was added to the ZIF-8 synthesis mixture. The nanocomposite was characterized using a range of techniques. The photocatalytic activity of SiO₂/ZIF-8 was investigated systematically by degrading crystal violet, a cationic dye, under ultraviolet light irradiation. Among the synthesized samples (SiO₂, ZIF-8, and SiO₂/ZIF-8), the SiO₂/ZIF-8 exhibited the highest photocatalytic activity and improved stability compared to pure SiO₂ and ZIF-8. As evidenced by scanning electron microscopy and transmission electron microscopy images, ZIF-8 particles are located on the SiO₂ without aggregation. The SiO₂ not only provides structural support for ZIF-8 but also prevents the aggregation of the ZIF-8 metal-organic framework in comparison with isolated ZIF-8. The superior activity of this photocatalyst was attributed to synergistic effects from the SiO₂, owing to (I) its acting as an electron acceptor (from ZIF-8) and an electron donor (to O₂ molecules), (II) its preventing the recombination of electron-hole pairs in ZIF-8, and (III) the maximum interfacial contact of ZIF-8 with the SiO₂ surface without aggregation or accumulation of ZIF-8. The results demonstrate that holes (h+) and •O₂⁻ are the primary reactive species involved in the photocatalytic oxidation process. Moreover, the SiO₂/ZIF-8 photocatalyst did not show any obvious loss of photocatalytic activity during five-cycle tests, which indicates that the heterostructured photocatalyst is highly stable and can be used repeatedly.

Keywords: nano, zeolite, photocatalyst, nanocomposite

Procedia PDF Downloads 75