Search results for: noise reduction techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11888

10478 Nanostructure Antireflective Sol-Gel Silica Coatings for Solar Collectors

Authors: Najme Lari, Shahrokh Ahangarani, Ali Shanaghi

Abstract:

Sol-gel technology is a promising manufacturing method for producing antireflective silica thin films for solar energy applications. To improve the properties of the films, controlling the parameters of the sol-gel method is therefore very important. In this study, the effect of soaking treatment on the optical properties of antireflective silica thin films was investigated. UV-visible spectroscopy, Fourier-transform infrared spectrophotometry, and field emission scanning electron microscopy were used for the characterization of the silica thin films. Results showed that all nanoporous silica layers caused a considerable reduction in light reflection compared with uncoated glasses. With single-layer deposition, the amount of reduction depends on the dipping time of the coating and has an optimum. It was also found that solar transmittance increased from 91.5% for the bare slide up to 97.5% for the best sample, corresponding to two deposition cycles.

Keywords: sol-gel, silica thin films, antireflective coatings, optical properties, soaking treatment

Procedia PDF Downloads 453
10477 Modeling and System Identification of a Variable Excited Linear Direct Drive

Authors: Heiko Weiß, Andreas Meister, Christoph Ament, Nils Dreifke

Abstract:

Linear actuators are deployed in a wide range of applications. This paper presents the modeling and system identification of a variable excited linear direct drive (LDD). The LDD is designed based on linear hybrid stepper technology, exhibiting the characteristic tooth structure of mover and stator. A three-phase topology provides the thrust force, caused by alternating strengthening and weakening of the flux in the legs. To achieve the best possible synchronous operation, the phases are commutated sinusoidally. Although these LDDs provide high dynamics and drive forces, noise emission limits their operation in calm workspaces. To overcome this drawback, an additional excitation of the magnetic circuit is introduced to the LDD using enabling coils instead of permanent magnets. The new degree of freedom can be used to reduce force variations and the related noise by varying the excitation flux that is usually generated by permanent magnets. Hence, an identified simulation model is necessary to analyze the effects of this modification. In particular, the force variations must be modeled well in order to reduce them sufficiently. The model can be divided into three parts: the current dynamics, the mechanics, and the force functions. These subsystems are described with differential equations or nonlinear analytic functions, respectively. Ordinary nonlinear differential equations are derived and transformed into state-space representation. Experiments were carried out on a test rig to identify the system parameters of the complete model. Static and dynamic simulation-based optimizations are utilized for identification. The results are verified in the time and frequency domains. Finally, the identified model provides a basis for the later design of control strategies to reduce the existing force variations.

Keywords: force variations, linear direct drive, modeling and system identification, variable excitation flux

Procedia PDF Downloads 368
10476 A Survey of Semantic Integration Approaches in Bioinformatics

Authors: Chaimaa Messaoudi, Rachida Fissoune, Hassan Badir

Abstract:

Technological advances in computer science and data analysis are continuously producing huge volumes of biological data, which are available on the web. Such advances require powerful techniques for data integration in order to extract pertinent knowledge and information for a specific question. Biomedical exploration of these big data often requires the use of complex queries across multiple autonomous, heterogeneous, and distributed data sources. Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontology. We provide a survey of some approaches and techniques for integrating biological data, focusing on those developed in the ontology community.

Keywords: biological ontology, linked data, semantic data integration, semantic web

Procedia PDF Downloads 446
10475 Virtual 3D Environments for Image-Based Navigation Algorithms

Authors: V. B. Bastos, M. P. Lima, P. R. G. Kurka

Abstract:

This paper addresses the creation of virtual 3D environments for the study and development of mobile robot image-based navigation algorithms and techniques, which need to operate robustly and efficiently. These algorithms can be tested physically, by conducting experiments on a prototype, or by numerical simulations. Current simulation platforms for robotic applications do not have flexible and up-to-date models for image rendering and are unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation with data and image processing. This work proposes the development of a high-level platform for building 3D model environments and testing image-based navigation algorithms for mobile robots. Texture and lighting effects were applied in order to accurately reproduce the rendered images with respect to their real-world counterparts. The application will integrate image processing scripts, trajectory control, dynamic modeling, and simulation techniques for physics representation and picture rendering with the open-source 3D creation suite Blender.

Keywords: simulation, visual navigation, mobile robot, data visualization

Procedia PDF Downloads 252
10474 Synthesis and Characterization of Poly(N-(Pyridin-2-Ylmethylidene)Pyridin-2-Amine): Thermal and Conductivity Properties

Authors: Nuray Yılmaz Baran

Abstract:

Conjugated Schiff base polymers, also called polyazomethines, are promising materials for various applications due to their good thermal resistance, semiconductivity, liquid crystallinity, fiber-forming ability, nonlinear optical behavior, outstanding photo- and electroluminescence, and antimicrobial properties. In recent years, polyazomethines have attracted the intense attention of researchers, especially due to optoelectronic properties that have made their use possible in organic light emitting diodes (OLEDs), solar cells (SCs), organic field effect transistors (OFETs), and photorefractive holographic materials (PRHMs). In this study, the N-(pyridin-2-ylmethylidene)pyridin-2-amine Schiff base was synthesized by the condensation reaction of 2-aminopyridine with 2-pyridinecarbaldehyde. Polymerization of the Schiff base was achieved by a polycondensation reaction using NaOCl oxidant in methanol medium at various times and temperatures. The synthesized Schiff base monomer and polymer (poly(N-(pyridin-2-ylmethylidene)pyridin-2-amine)) were characterized by UV-vis, FT-IR, 1H-NMR, and XRD techniques. The molecular weight distribution and surface morphology of the polymer were determined by GPC and SEM-EDAX techniques. The thermal behaviour of the monomer and polymer was investigated by TG/DTG, DTA, and DSC techniques.

Keywords: polyazomethines, polycondensation reaction, Schiff base polymers, thermal stability

Procedia PDF Downloads 227
10473 The Manufacturing of Metallurgical Grade Silicon from Diatomaceous Silica by an Induction Furnace

Authors: Shahrazed Medeghri, Saad Hamzaoui, Mokhtar Zerdali

Abstract:

Metallurgical grade silicon (MG-Si) is obtained from the reduction of silica (SiO2) in an induction furnace or an electric arc furnace. The impurities inherent in the reduction process also depend on the quality of the raw material used. Among its applications, silicon is used as a substrate for the photovoltaic conversion of solar energy, and this conversion becomes more efficient as the purity of the substrate increases. Ongoing research seeks new methods for manufacturing and purifying silicon, as well as new materials that can be used as substrates for the photovoltaic conversion of light energy. In this research, silicon was produced in an induction furnace, using a high vacuum for fusion. The starting materials were diatomaceous silica (SiO2) of 99 mass% initial purity and carbon of 6N purity with a particle size of 63 μm. The final purity of the material was above 50% by mass. These results demonstrate that this method is technically reliable and yields a silicon recovery of about 50%.

Keywords: induction furnaces, amorphous silica, carbon microstructure, silicon

Procedia PDF Downloads 398
10472 The Impact of Artesunate-Amodiaquine on Schistosoma mansoni Infection among Children Infected by Plasmodium in Rural Area of Lemfu, Kongo Central, Democratic Republic of the Congo

Authors: Mbanzulu Kennedy, Zanga Josue, Wumba Roger

Abstract:

Malaria and schistosomiasis remain life-threatening public health problems in sub-Saharan Africa. The infection pattern related to age indicates that preschool and school-age children are at the highest risk of malaria and schistosomiasis. Both parasitic infections, separately or combined, may have negative impacts on haemoglobin concentration levels. Existing data reveal that the artemisinin derivatives commonly used to cure malaria also present antischistosomal activity. The current study investigated the impact of artesunate-amodiaquine (AS-AQ) on schistosomiasis when administered to treat malaria in the rural area of Lemfu, DRC. A prospective longitudinal study included 171 coinfected children screened for anaemia, Schistosoma mansoni, and Plasmodium falciparum infections. The egg reduction rate and haemoglobin concentration were assessed four weeks after treatment with AS-AQ in all coinfected children of this series. One hundred and twenty-five (74.4%) of the 168 coinfected children treated and present during the assessment were found stool-negative for S. mansoni eggs. Of the 43 (25.6%) children who remained positive, 37 (22%) showed a partial reduction in egg counts, and no reduction was noted in 3.6% of the coinfected children. The mean haemoglobin concentration rose from 10.74±1.5 g/dl before treatment to 11.2±1.3 g/dl after treatment, while the prevalence of anaemia fell from 64.8% to 51.8% (p<0.001). AS-AQ, commonly used against Plasmodium, cured S. mansoni in coinfected children and increased the Hb level. In the future, randomized and multicentric clinical trials are needed for a better understanding of the effectiveness of AS-AQ against Schistosoma spp. The trial registration number was 3487183.

Keywords: malaria, schistosomiasis, AS-AQ, children, Lemfu

Procedia PDF Downloads 100
10471 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, in which special flavors develop and the smooth mouthfeel of the chocolate is achieved through particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and volatile compound profile of the cocoa mass is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-size chocolate manufacturers. Other alternatives, such as micrometry and microscopy, cannot provide good measurements and yield little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A genetic programming model was established that correlates the volatile compound profiles, along with factors including the content of cocoa, the content of sugar, and the temperature during conching, to the particle size of the chocolate particles. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h). The predictions were compared to laser scattering measurements of the same chocolate samples: 91.3% of the predictions were within ±5% of the laser scattering measurement, and 99.3% were within ±10%.

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 248
10470 Telecom Infrastructure Outsourcing: An Innovative Approach

Authors: Irfan Zafar

Abstract:

Over the years, the telecom industry in the country has shown a lot of progress in terms of infrastructure development, coupled with the availability of telecom services. This has, however, led to cut-throat competition among the various operators, resulting in reduced tariffs for customers. Profit margins have seen a reduction, leading the operators to consider other avenues by adopting new models while keeping the quality of service intact. Outsourcing of the network and its resources is one such model, which has shown promising benefits including lower costs, less risk, higher levels of customer support and engagement, predictable expenses, access to emerging technologies, the benefit of a highly skilled workforce, adaptability, and focus on the core business while reducing capital costs. A lot of research has been done on outsourcing in terms of the reasons for outsourcing and its benefits. This study, however, is an attempt to analyze the effects of outsourcing on an organization's performance (in the telecommunication sector), considering the variables of (1) cost reduction, (2) organizational performance, (3) flexibility, (4) employee performance, (5) access to specialized skills and technology, and (6) outsourcing risks.

Keywords: outsourcing, ICT, telecommunication, IT, networking

Procedia PDF Downloads 393
10469 Effects of High-Intensity Interval Training versus Traditional Rehabilitation Exercises on Functional Outcomes in Patients with Knee Osteoarthritis: A Randomized Controlled Trial

Authors: Ahmed Torad

Abstract:

Background: Knee osteoarthritis (OA) is a prevalent musculoskeletal condition characterized by pain and functional impairment. While various rehabilitation approaches have been employed, the effectiveness of high-intensity interval training (HIIT) compared to traditional rehabilitation exercises remains unclear. Objective: This randomized controlled trial aimed to compare the effects of HIIT and traditional rehabilitation exercises on pain reduction, functional improvement, and quality of life in individuals with knee OA. Methods: A total of 120 participants diagnosed with knee OA were randomly allocated into two groups: the HIIT group (n=60) and the traditional rehabilitation group (n=60). The HIIT group participated in a 12-week supervised program consisting of high-intensity interval exercises, while the traditional rehabilitation group followed a conventional physiotherapy regimen. Outcome measures included visual analog scale (VAS) pain scores, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and the Short Form-36 Health Survey (SF-36) at baseline and after the intervention period. Results: Both groups showed significant improvements in pain scores, functional outcomes (WOMAC), and quality of life (SF-36) after 12 weeks of intervention. However, the HIIT group demonstrated superior pain reduction (p<0.001), functional improvement (p<0.001), and physical health-related quality of life (p=0.002) compared to the traditional rehabilitation group. No significant differences were observed in mental health-related quality of life between the two groups. Conclusion: High-intensity interval training appears to be a more effective rehabilitation approach than traditional exercises for individuals with knee osteoarthritis, resulting in greater pain reduction, improved function, and enhanced physical health-related quality of life. 
These findings suggest that HIIT may represent a promising intervention strategy for managing knee OA and enhancing the overall well-being of affected individuals.

Keywords: knee osteoarthritis, high-intensity interval training, traditional rehabilitation exercises, randomized controlled trial, pain reduction, functional improvement, quality of life

Procedia PDF Downloads 71
10468 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

Aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation, due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical model selection (e.g. leave-one-out cross-validation).
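The correction-factor workflow described above can be sketched as follows. This is a minimal illustration, not the authors' code: the depth values are hypothetical, and the factor is fit as a least-squares slope through the origin between apparent and true depths.

```python
import numpy as np

# Hypothetical calibration pairs (metres); real values would come from
# field measurements and the SfM-MVS output.
apparent = np.array([0.30, 0.55, 0.80, 1.10])  # apparent (uncorrected) depths
true = np.array([0.41, 0.74, 1.07, 1.49])      # measured true depths

# Least-squares slope through the origin: factor = sum(a*t) / sum(a*a)
factor = np.dot(apparent, true) / np.dot(apparent, apparent)

def correct_depth(apparent_depth, factor):
    """Convert an apparent water depth into a refraction-corrected depth."""
    return factor * apparent_depth

corrected = correct_depth(apparent, factor)
```

With the values above, the fitted factor comes out close to the refractive index of water (~1.34), consistent with refraction being the dominant source of depth underestimation.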

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 298
10467 Various Models of Quality Management Systems

Authors: Mehrnoosh Askarizadeh

Abstract:

People, processes, and IT are the most important assets of any organization. Optimal utilization of these resources has been a question of business research for many decades. The business world has responded by inventing various methodologies that can be used for addressing problems of quality improvement, process efficiency, continuous improvement, waste reduction, automation, strategy alignment, etc. Some of these methodologies can be collectively called Business Process Quality Management (BPQM) methodologies. In essence, the first references to process management can be traced back to Frederick Taylor and scientific management. Time and motion studies addressed the improvement of manufacturing process efficiency. The ideas of scientific management were in use for quite a long period until more advanced quality management techniques were developed in Japan and the USA. One of the first prominent methods was Total Quality Management (TQM), which evolved during the 1980s. At about the same time, Six Sigma (SS) originated at Motorola as a separate method. SS spread and evolved, and later joined with the ideas of lean manufacturing to form Lean Six Sigma. In the 1990s, due to emerging IT technologies, the beginning of globalization, and strengthening competition, companies recognized the need for better process and quality management. Business Process Management (BPM) emerged as a novel methodology that takes all of this into account and helps to align IT technologies with business processes and quality management. In this article, we study various aspects of the above-mentioned methods and identify their relations.

Keywords: e-process, quality, TQM, BPM, lean, six sigma, CPI, information technology, management

Procedia PDF Downloads 435
10466 Heavy Metal Reduction in Plant Using Soil Amendment

Authors: C. Chaiyaraksa, T. Khamko

Abstract:

This study investigated the influence of limestone and sepiolite on heavy metal accumulation in soil and soybean. The soil was artificially contaminated with zinc at 150 mg/kg, copper at 100 mg/kg, and cadmium at 1 mg/kg. The contaminated soil was mixed with limestone and sepiolite at ratios of 1:0, 0:1, 1:1, and 2:1. The amount of soil modifier added to the soil was 0.2%, 0.4%, or 0.8%. Metal determination was performed on the soil both before and after soybean planting, and on the root, shoot, and seed of the soybean after harvesting. The study also examined metal translocation from root to seed and the bioaccumulation factor. The use of limestone and sepiolite resulted in a reduction of the metals accumulated in soybean. For soil containing high concentrations of copper, cadmium, and zinc, a 1:1 mixture of limestone and sepiolite added to the soil at 0.2% is recommended. Zinc could translocate from root to seed more readily than copper and cadmium. Regarding the movement of metals from the soil into the soybean, the soybean absorbed the highest amount of cadmium, followed by zinc and copper, respectively.
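The two indices mentioned above, translocation from root to seed and bioaccumulation from soil to plant, are simple concentration ratios. A sketch with hypothetical concentrations (not values from the study):

```python
# Illustrative only: the concentrations (mg/kg) below are hypothetical.

def translocation_factor(c_seed: float, c_root: float) -> float:
    """Ratio of metal concentration in seed to that in root."""
    return c_seed / c_root

def bioaccumulation_factor(c_plant: float, c_soil: float) -> float:
    """Ratio of metal concentration in plant tissue to that in soil."""
    return c_plant / c_soil

# e.g. zinc: 45 mg/kg in seed, 90 mg/kg in root, soil spiked at 150 mg/kg
tf_zn = translocation_factor(45.0, 90.0)       # 0.5
baf_zn = bioaccumulation_factor(90.0, 150.0)   # 0.6
```

A ratio above 1 would indicate that the plant concentrates the metal relative to the source compartment; values below 1 indicate partial exclusion.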

Keywords: heavy metals, limestone, sepiolite, soil, soybean

Procedia PDF Downloads 150
10465 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research

Authors: Saif Al-Qaisi, Alif Saba

Abstract:

Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants' physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle's isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing the EMG activity of back muscles. This research tests and compares the MVC techniques recommended in the literature for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), latissimus dorsi (LD), and thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same: trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and the averages were then computed across participants. A one-way analysis of variance (ANOVA) was utilized to determine the effect of MVC technique on muscle activity. Post-hoc analyses were performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would be needed to detect significant differences in the Tukey tests. The arch test was associated with the highest average EMG at the LES, and it also produced the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest average EMG, and it produced the maximum EMG activity most often (three out of six participants). For the LD, participants reached their maximum EMG with either chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants). Chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 versus 0.240). Although the aforementioned techniques had the highest averages, they did not always produce the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
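The %MVC normalization described above reduces to dividing the rectified task EMG by the peak EMG found across a participant's MVC trials for that muscle. A minimal sketch with hypothetical signal values (not data from the study):

```python
import numpy as np

def peak_mvc_emg(mvc_trials):
    """Peak rectified EMG across all MVC exercises for one muscle."""
    return max(float(np.max(np.abs(np.asarray(t)))) for t in mvc_trials)

def to_percent_mvc(task_emg, mvc_trials):
    """Express rectified task EMG as a percentage of the MVC peak."""
    return 100.0 * np.abs(np.asarray(task_emg)) / peak_mvc_emg(mvc_trials)

# Hypothetical data: three MVC exercises, then a work-task recording (mV).
mvc_trials = [np.array([0.10, 0.22, 0.18]),
              np.array([0.25, 0.20]),
              np.array([0.15, 0.12])]
task = np.array([0.05, 0.125, 0.25])
pct = to_percent_mvc(task, mvc_trials)  # peak MVC = 0.25 -> [20, 50, 100] %MVC
```

Taking the maximum across several MVC exercises, as suggested in the abstract's conclusion, guards against any single technique failing to elicit the true maximum.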

Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics

Procedia PDF Downloads 189
10464 Chipless RFID Capacity Enhancement Using the E-pulse Technique

Authors: Haythem H. Abdullah, Hesham Elkady

Abstract:

With the rapid increase in radio frequency identification (RFID) applications such as medical recording, library management, etc., the limitation of active tags stems from their need for external batteries as well as passive or active chips. The chipless RFID tag reduces the cost to a large extent, but at the expense of spectrum utilization. The cost reduction of chipless RFID is due to the absence of the chip itself. Identification is done by utilizing the spectrum in such a way that the frequency response of the tag consists of resonance frequencies that represent the bits. The system capacity is decided by the number of resonators within the pre-specified band. It is therefore important to find a solution that enhances spectrum utilization when using chipless RFID. Target identification is a process that results in a decision on whether a specific target is present or not. Several target identification schemes exist, but one of the most successful techniques in radar target identification in the oscillatory region is the extinction pulse (E-Pulse) technique. The E-Pulse technique identifies targets via their characteristic (natural) modes. By introducing an innovative solution for chipless RFID reader and tag designs, spectrum utilization approaches the optimum. In this paper, a novel capacity enhancement scheme based on the E-Pulse technique is introduced to improve the performance of the chipless RFID system.

Keywords: chipless RFID, E-pulse, natural modes, resonators

Procedia PDF Downloads 70
10463 A Review on Parametric Optimization of Casting Processes Using Optimization Techniques

Authors: Bhrugesh Radadiya, Jaydeep Shah

Abstract:

In the Indian foundry industry, there is a need for defect-free castings with minimum production cost and short lead times. Casting defects are a major issue on the foundry shop floor, increasing the rejection rate of castings and the wastage of materials. Various parameters influence the casting process, such as mold machine related parameters, green sand related parameters, cast metal related parameters, mold related parameters, and shake-out related parameters. The mold related parameters have the greatest influence on casting defects in the sand casting process. This paper reviews a case in which castings produced by a foundry with shrinkage and blowholes as the major defects were analyzed, and it was identified that mold related parameters such as mold temperature, pouring temperature, and runner size were not properly set in the sand casting process. These parameters were optimized using different optimization techniques, namely the Taguchi method, response surface methodology, a genetic algorithm, and the teaching-learning based optimization algorithm. It is concluded that the teaching-learning based optimization algorithm gives better results than the other optimization techniques.

Keywords: casting defects, genetic algorithm, parametric optimization, Taguchi method, TLBO algorithm

Procedia PDF Downloads 724
10462 Subsea Processing: Deepwater Operation and Production

Authors: Md Imtiaz, Sanchita Dei, Shubham Damke

Abstract:

In recent years, there has been a rapidly accelerating shift from traditional surface processing operations to subsea processing operations. This shift has been driven by a number of factors, including the depletion of shallow fields around the world, technological advances in subsea processing equipment, the need for production from marginal fields, and lower initial upfront investment costs compared to traditional production facilities. Moving production facilities to the seafloor offers a number of advantages, including a reduction in field development costs, increased production rates from subsea wells, a reduction in the need for chemical injection, minimization of risks to workers, a reduction in spills due to hurricane damage, and increased oil production by enabling production from marginal fields. Subsea processing consists of a range of technologies for separation, pumping, and compression that enable production from offshore wells without the need for surface facilities. At present, two primary technologies are used for subsea processing: subsea multiphase pumping and subsea separation. Multiphase pumping is the most basic subsea processing technology; it involves the use of a boosting system to transport the multiphase mixture through pipelines to floating production vessels. In subsea separation, the separation system is combined with single-phase pumps, and the separated water can be removed and either pumped to the surface, re-injected, or discharged to the sea. Subsea processing can allow an entire topside facility to be decommissioned and the processed fluids to be tied back to a new, more distant host. This type of application reduces costs and increases both overall facility integrity and recoverable reserves. In the future, full subsea processing could be possible, thereby eliminating the need for surface facilities.

Keywords: FPSO, marginal field, subsea processing, SWAG

Procedia PDF Downloads 409
10461 Catalytic Conversion of Methane into Benzene over CZO Promoted Mo/HZSM-5 for Methane Dehydroaromatization

Authors: Deepti Mishra, Arindam Modak, K. K. Pant, Xiu Song Zhao

Abstract:

The promotional effect of mixed ceria-zirconia oxides (CZO) on the Mo/HZSM-5 catalyst for the methane dehydroaromatization (MDA) reaction was studied. The surface and structural properties of the synthesized catalyst were characterized using a range of spectroscopic and microscopic techniques, and the correlation between the catalytic properties and performance in the MDA reaction is discussed. Impregnation of the CZO solid solution on Mo/HZSM-5 was observed to give excellent catalytic performance and an improved benzene formation rate (4.5 μmol/gcat·s) compared to the conventional Mo/HZSM-5 catalyst (3.1 μmol/gcat·s). In addition, a significant reduction in coke formation was observed for the CZO-modified Mo/HZSM-5 catalyst. The higher catalytic activity can be attributed to the redox properties of the CZO deposited on Mo/HZSM-5, which acts as a selective oxygen supplier and performs hydrogen combustion during the reaction, as indirectly probed by O₂-TPD and H₂-TPR analysis. The selective hydrogen combustion prevents the over-oxidation of the aromatic species formed during the reaction, while the generated steam helps to reduce the amount of coke generated in the MDA reaction. Thus, the advantage of CZO-incorporated Mo/HZSM-5 is that it shifts the reaction equilibrium toward the formation of benzene, which is favourable for the MDA reaction.

Keywords: Mo/HZSM-5, ceria-zirconia (CZO), in-situ combustion, methane dehydroaromatization

Procedia PDF Downloads 91
10460 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but they are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: handling illumination variance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
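The per-pixel modeling stage described above can be illustrated in simplified form. The sketch below uses a single running Gaussian per pixel rather than the K = 5 mixture the authors employ, and the learning rate `alpha` and deviation threshold `k` are hypothetical values, not the paper's:

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Per-pixel Gaussian background model (a simplification of the K=5 GMM).
    A pixel is foreground if it deviates more than k standard deviations
    from the running mean; background statistics are updated online."""
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    bg = ~foreground
    # selective update: only pixels judged to be background adapt the model
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return foreground

# toy sequence: static background with one bright "moving object"
mean = np.full((8, 8), 50.0)
var = np.full((8, 8), 4.0)
frame = np.full((8, 8), 50.0)
frame[2:4, 2:4] = 200.0
mask = update_background(mean, var, frame)
print(mask.sum())  # 4 object pixels flagged as foreground
```

In a full pipeline, CLAHE would be applied to the frames before this step and the binary mask post-processed with morphological erosion and dilation, as the abstract describes.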

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 254
10459 Impact of Pedagogical Techniques on the Teaching of Sports Sciences

Authors: Muhammad Saleem

Abstract:

Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. 
Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.

Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement

Procedia PDF Downloads 18
10458 Implementing Mindfulness into Wellness Plans: Assisting Individuals with Substance Abuse and Addiction

Authors: Michele M. Mahr

Abstract:

The purpose of this study is to educate, inform, and facilitate scholarly discussion regarding the implementation of mindfulness techniques when working with individuals with substance use disorder (SUD) or addictive behaviors in mental health. Mindfulness can be understood as present-moment, non-judgmental awareness, initiated by concentrated attention that is non-reactive and as openhearted as possible. Individuals with SUD or addiction are typically challenged by triggers, environmental situations, cravings, or social pressures, which may deter them from remaining abstinent from their drug of choice or addictive behavior. Mindfulness is also recognized as one of the cognitive and behavioral treatment approaches and is both a physical and mental practice that enables individuals to become aware of internal situations and experiences with undivided attention. That said, mindfulness may be an effective strategy for individuals to employ during these experiences. This study will reveal how mental health practitioners and addiction counselors may find mindfulness to be an essential component of increasing wellness when working with individuals seeking mental health treatment. To this end, mindfulness is simply the ability individuals have to know what is actually happening as it is occurring and what they are experiencing in the moment. In the context of substance abuse and addiction, individuals may employ breathing techniques, meditation, and cognitive restructuring of the mind to become aware of present-moment experiences. Furthermore, the notion of mindfulness has been directly connected to the development of neural pathways. The creation of these pathways leads to new thoughts, which in turn lead to new coping strategies and adaptive behaviors. Mindfulness strategies can assist individuals in connecting the mind with the body, allowing the individual to remain centered and focused.
All of the elements mentioned above are vital components of recovery during substance abuse and addiction treatment. A variety of therapeutic modalities apply the key components of mindfulness, such as Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy for depression (MBCT). This study will provide an overview of both MBSR and MBCT in relation to treating individuals with substance abuse and addiction. The author will also provide strategies for readers to employ when working with clients. Lastly, the author will create and foster a safe space for discussion and engaging conversation among participants to ask questions, share perspectives, and be educated on the numerous benefits of mindfulness within wellness.

Keywords: mindfulness, wellness, substance abuse, mental health

Procedia PDF Downloads 68
10457 Evaluating the Effectiveness of Combined Psychiatric and Psychotherapeutic Care versus Psychotherapy Alone in the Treatment of Depression and Anxiety in Cancer Patients

Authors: Nathen A. Spitz, Dennis Martin Kivlighan III, Arwa Aburizik

Abstract:

Background and Purpose: Presently, there is a paucity of naturalistic studies that directly compare the effectiveness of psychotherapy versus concurrent psychotherapy and psychiatric care for the treatment of depression and anxiety in cancer patients. Informed by previous clinical trials examining the efficacy of concurrent approaches, this study sought to test the hypothesis that a combined approach would result in the greatest reduction of depression and anxiety symptoms. Methods: Data for this study consisted of 433 adult cancer patients, with 252 receiving only psychotherapy and 181 receiving concurrent psychotherapy and psychiatric care at the University of Iowa Hospitals and Clinics. Longitudinal PHQ-9 and GAD-7 data were analyzed between both groups using latent growth curve analyses. Results: After controlling for treatment length and provider effects, results indicated that concurrent care was more effective than psychotherapy alone for depressive symptoms (γ₁₂ = -0.12, p = .037). Specifically, the simple slope for concurrent care was -0.25 (p = .022), and the simple slope for psychotherapy alone was -0.13 (p = .006), suggesting that patients receiving concurrent care experienced a greater reduction in depressive symptoms compared to patients receiving psychotherapy alone. In contrast, there were no significant differences between psychotherapy alone and concurrent psychotherapy and psychiatric care in the reduction of anxious symptoms. Conclusions: Overall, as both psychotherapy and psychiatric care may address unique aspects of mental health conditions, in addition to potentially providing synergistic support to each other, a combined approach to mental healthcare for cancer patients may improve outcomes.

Keywords: psychiatry, psychology, psycho-oncology, combined care, psychotherapy, behavioral psychology

Procedia PDF Downloads 113
10456 Modeling of Daily Global Solar Radiation Using ANN Techniques: A Case Study

Authors: Said Benkaciali, Mourad Haddadi, Abdallah Khellaf, Kacem Gairaa, Mawloud Guermoui

Abstract:

In this study, many experiments were carried out to assess the influence of the input parameters on the performance of the multilayer perceptron, which is one configuration of artificial neural networks. To estimate the daily global solar radiation on the horizontal surface, we developed several models using seven combinations of twelve meteorological and geographical input parameters collected from a radiometric station installed at Ghardaïa city (southern Algeria). To select the best combination, which provides good accuracy, six statistical indicators were evaluated, including the root mean square error, mean absolute error, correlation coefficient, and determination coefficient. We noted that the multilayer perceptron techniques have the best performance, except when the sunshine duration parameter is not included in the input variables. The maximum determination and correlation coefficients are 98.20% and 99.11%, respectively. On the other hand, some empirical models were developed to compare their performances with those of the multilayer perceptron neural networks. The results show that the neural network techniques give the best performance compared to the empirical models.
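The statistical indicators used to rank the input combinations can be computed directly from measured and estimated values. A minimal sketch of four of them (RMSE, MAE, correlation coefficient r, and determination coefficient R²), with a hypothetical toy comparison:

```python
import numpy as np

def evaluation_metrics(measured, estimated):
    """Indicators commonly used to compare solar radiation models:
    RMSE, MAE, correlation coefficient r, determination coefficient R^2."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    err = estimated - measured
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r = np.corrcoef(measured, estimated)[0, 1]
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "r": r, "R2": r2}

# sanity check: a perfect estimator gives zero error and R^2 = 1
m = evaluation_metrics([3.1, 4.2, 5.0], [3.1, 4.2, 5.0])
print(m["RMSE"], m["R2"])  # 0.0 1.0
```

The best input combination is the one that minimizes RMSE and MAE while maximizing r and R² on held-out data.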

Keywords: empirical models, multilayer perceptron neural network, solar radiation, statistical formulas

Procedia PDF Downloads 340
10455 Synthesis of Silver Powders Destined for Conductive Paste Metallization of Solar Cells Using Butyl-Carbitol and Butyl-Carbitol Acetate Chemical Reduction

Authors: N. Moudir, N. Moulai-Mostefa, Y. Boukennous, I. Bozetine, N. Kamel, D. Moudir

Abstract:

The study focuses on a novel process of silver powder synthesis for the preparation of conductive pastes used for solar cell metallization. Butyl-carbitol and butyl-carbitol acetate were used as solvents and reducing agents of silver nitrate (AgNO3) as the precursor to obtain silver powders. XRD characterization revealed silver powders with a cubic crystal system. SEM micrographs showed a spherical morphology of the particles. Laser granulometry gave similar particle distributions for the two agents. Using the same glass frit and organic vehicle for comparative purposes, two conductive pastes were prepared with the synthesized silver powders for the front-side metallization of multi-crystalline cells. The pastes provided acceptable fill factors of 59.5% and 60.8%, respectively.

Keywords: chemical reduction, conductive paste, silver nitrate, solar cell

Procedia PDF Downloads 303
10454 Preparation of Zno/Ag Nanocomposite and Coating on Polymers for Anti-Infection Biomaterial Application

Authors: Babak Sadeghi, Parisa Ghayomipour

Abstract:

ZnO/Ag nanocomposites coated with polyvinyl chloride (PVC) were prepared by a chemical reduction method for anti-infection biomaterial applications. There is growing interest in using biomolecules as templates to grow inorganic nanocomposites with controlled morphology and structure. By optimizing the experimental conditions, we successfully fabricated a high yield of ZnO/Ag nanocomposites with a full-coverage, high-density polyvinyl chloride (PVC) coating. More importantly, the ZnO/Ag nanocomposites were shown to significantly inhibit the growth of S. aureus in solution. It was further shown that the ZnO/Ag nanocomposites induced thiol depletion that caused the death of S. aureus. The coatings were fully characterized using techniques such as scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray diffraction (XRD). Most importantly, compared to uncoated metals, the coatings on PVC exhibited strong antibacterial activity. Compared to PVC without the ZnO/Ag coating, the ZnO/Ag nanocomposite coating was approximately three times more effective in preventing bacterial attachment. The result of thermogravimetric analysis (TGA) indicates that the ZnO/Ag nanocomposites are chemically stable in the temperature range from 50 to 900 ºC. This result, for the first time, demonstrates the potential of using ZnO/Ag nanocomposites as a coating material for numerous antibacterial applications.

Keywords: nanocomposites, antibacterial activity, scanning electron microscopy (SEM), x-ray diffraction (XRD)

Procedia PDF Downloads 465
10453 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to projected modes of transport, adequately competitive overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution may be gained. The congestion which comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult. Hence, regulations should be made to reduce this congestion. To provide a constructive and balanced regulation of public transportation systems, stations should be located in the right places. In this study, it is aimed to design and implement a Decision Support System (DSS) application to determine the optimal bus stop places for public transport in Istanbul, which is one of the biggest and oldest cities in the world. The required information is gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines, and 110 stops are used. The user cost of each station and the operator cost for each line are calculated. Components such as cost, security, and noise pollution are considered significant factors affecting the solution of the set covering problem, which is formulated to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. Model results are presented with the intent of supporting specialists' decisions on locating stops effectively.
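The set covering problem mentioned above (choosing the minimum number of stops so every demand point is served) is commonly approximated with a greedy heuristic. A minimal sketch with hypothetical stop names and demand points; the paper's actual model is a bi-level optimization that also weighs cost, security, and noise:

```python
def greedy_set_cover(demand_points, candidate_stops):
    """Greedy approximation to set covering: repeatedly pick the candidate
    stop that covers the most still-uncovered demand points."""
    uncovered = set(demand_points)
    chosen = []
    while uncovered:
        best = max(candidate_stops,
                   key=lambda s: len(candidate_stops[s] & uncovered))
        if not candidate_stops[best] & uncovered:
            break  # remaining points cannot be covered by any stop
        chosen.append(best)
        uncovered -= candidate_stops[best]
    return chosen

# toy instance: each stop maps to the demand points within walking distance
stops = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
print(greedy_set_cover([1, 2, 3, 4, 5, 6], stops))  # ['A', 'C']
```

The greedy choice gives a ln(n)-factor approximation; an exact solution at the study's scale (110 candidate stops) would typically be obtained with an integer programming solver instead.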

Keywords: operator cost, bi-level optimization model, user cost, urban transportation

Procedia PDF Downloads 241
10452 Inactivation and Stress Response of Salmonella enterica Serotype Typhimurium lt21 upon Cold Gas-Phase Plasma Treatment

Authors: Zoran Herceg, Tomislava Vukušić, Anet Režek Jambrak, Višnja Stulić

Abstract:

Today, one of the greatest challenges concerns the safety of the food supply. If ingested, food pathogens can cause human illness. Because of that, new technologies that are effective in microbial reduction are being developed for use in the food industry. One such technology is cold gas-phase plasma. Salmonella enterica was studied as one of the pathogens that can be found in food. The aim of this work was to examine the inactivation rate and stress response of plasma-treated cells of Salmonella enterica inoculated in apple juice. After the treatment, cellular leakage, phenotypic changes in plasma-treated cells (biofilm formation), and the degree of recovery were assessed. The sample volume was inoculated with 5 mL of a pure culture of Salmonella enterica and 15 mL of apple juice. Statgraphics Centurion software (StatPoint Technologies, Inc., VA, USA) was used for experimental design and statistical analyses. Treatment time (1, 3, 5 min) and gas flow (40, 60, 80 L/min) were varied. Complete inactivation and 0% recovery after 48 h were observed for the following experimental treatments: 3 min at 40 L/min, 3 min at 80 L/min, and 5 min at 40 L/min. Biofilm reduction was observed in all treated samples. Also, there was an increase in cellular leakage with longer plasma treatment. Although there was significant reduction and 0% recovery after the plasma treatments, further investigation of the method is needed to clarify whether there are sensorial, physical, and chemical changes in juices after the plasma treatment. Acknowledgments: The authors would like to acknowledge the support of the Croatian Science Foundation and the research project 'Application of electrical discharge plasma for the preservation of liquid foods'.

Keywords: Salmonella enterica serotype Typhimurium LT21, gas-phase plasma treatment, inactivation, stress response

Procedia PDF Downloads 311
10451 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems based on Whole Mammogram and Ultrasound Scan Classification

Authors: Ian Omung'a

Abstract:

Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors being as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, when the cancer has metastasized and/or the prognosis has become terminal, Full-Field Digital (FFD) Mammography remains an effective screening technique that leads to early detection, where in most cases successful interventions can be made to control or eliminate the tumors altogether. FFD mammograms have been proven to be considerably more effective when used together with Computer-Aided Detection and Diagnosis (CADe) systems, which rely on algorithmic implementations of deep learning techniques in computer vision to carry out deep pattern recognition that is comparable to the level of a human radiologist and to decipher whether specific areas of interest in the mammogram scan image portray abnormalities and, if so, whether these abnormalities are indicative of a benign or malignant tumor. Within this paper, we review emergent deep learning techniques relevant to the development of state-of-the-art FFD mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, as well as the creation of a standardized large-scale mammography dataset as a benchmark for the evaluation of CADe systems. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will differ.

Keywords: breast cancer diagnosis, computer-aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision

Procedia PDF Downloads 90
10450 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, 3D object watermarking techniques have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike image and video watermarking, where the frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; it can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proven useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object.
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality. It was also robust against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
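The norm-based embedding idea can be illustrated in simplified form. The sketch below uniformly scales vertex distances from the object's center to encode a single bit; it is a toy, not the paper's method: the proposed technique is blind and modifies norm variances within statistically fitted bins, whereas this toy extractor compares against the original mesh, and the `strength` value is hypothetical.

```python
import numpy as np

def embed_bit(vertices, bit, strength=0.02):
    """Shift vertex distances from the object's center so their mean
    encodes one watermark bit (outward for 1, inward for 0)."""
    center = vertices.mean(axis=0)
    rel = vertices - center
    norms = np.linalg.norm(rel, axis=1)
    scale = 1.0 + strength if bit else 1.0 - strength
    new_norms = norms * scale
    return center + rel * (new_norms / norms)[:, None]

def extract_bit(original, watermarked):
    """Non-blind toy extractor: compare mean vertex norms."""
    c_o, c_w = original.mean(axis=0), watermarked.mean(axis=0)
    n_o = np.linalg.norm(original - c_o, axis=1).mean()
    n_w = np.linalg.norm(watermarked - c_w, axis=1).mean()
    return 1 if n_w > n_o else 0

mesh = np.random.default_rng(0).normal(size=(100, 3))
wm = embed_bit(mesh, 1)
print(extract_bit(mesh, wm))  # 1
```

Because the mean norm is a global statistic, this kind of embedding survives vertex reordering and uniform noise better than per-vertex tweaks, which is the intuition behind attacking-resistant norm-based schemes.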

Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing

Procedia PDF Downloads 157
10449 Global Optimization Techniques for Optimal Placement of HF Antennas on a Shipboard

Authors: Mustafa Ural, Can Bayseferogulari

Abstract:

In this work, the radio frequency (RF) coupling between two HF antennas on a shipboard platform is minimized by determining an optimal antenna placement. Unlike other works, the coupling is minimized not only at a single frequency but over the whole frequency band of operation. Two global optimization techniques, genetic algorithm optimization (GAO) and particle swarm optimization (PSO), are used to determine the optimal antenna placement. Throughout this work, the outputs of the two optimization techniques are compared with each other in terms of antenna placements and coupling results. At the end of the work, the far-field radiation pattern performances of the antennas at their optimal locations are analyzed in terms of directivity and coverage, in order to verify that the optimized placement does not degrade antenna performance.
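Particle swarm optimization, one of the two techniques compared, can be sketched for a one-dimensional placement variable. The coupling surrogate below is purely hypothetical (the real objective evaluates broadband RF coupling from electromagnetic simulations), and the swarm parameters are standard textbook values, not the paper's:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO over a 1-D variable, e.g. antenna separation in meters."""
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_val = pos[:], [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + cognitive pull + social pull
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# hypothetical surrogate: coupling falls off with separation d, with a
# placement-constraint penalty favouring positions around d = 6 m
coupling = lambda d: 1.0 / d + 0.02 * (d - 6.0) ** 2
random.seed(1)
best_d, best_c = pso(coupling, bounds=(1.0, 12.0))
print(round(best_d, 1))
```

A GAO run on the same objective would use selection, crossover, and mutation instead of velocity updates; comparing the two outputs, as the authors do, checks that both converge to the same placement.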

Keywords: electromagnetic compatibility, antenna placement, optimization, genetic algorithm optimization, particle swarm optimization

Procedia PDF Downloads 228