Search results for: computer generated holograms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5587

4897 Prospectivity Mapping of Orogenic Lode Gold Deposits Using Fuzzy Models: A Case Study of Saqqez Area, Northwestern Iran

Authors: Fanous Mohammadi, Majid H. Tangestani, Mohammad H. Tayebi

Abstract:

This research aims to evaluate and compare Geographical Information Systems (GIS)-based fuzzy models for producing orogenic gold prospectivity maps in the Saqqez area, NW of Iran. Gold occurrences are hosted in sericite schist and mafic to felsic meta-volcanic rocks in this area and are associated with hydrothermal alterations that extend over ductile to brittle shear zones. The predictor maps, which represent the Pre-(Source/Trigger/Pathway), syn-(deposition/physical/chemical traps) and post-mineralization (preservation/distribution of indicator minerals) subsystems for gold mineralization, were generated using empirical understandings of the specifications of known orogenic gold deposits and gold mineral systems and were then pre-processed and integrated to produce mineral prospectivity maps. Five fuzzy logic operators, including AND, OR, Fuzzy Algebraic Product (FAP), Fuzzy Algebraic Sum (FAS), and GAMMA, were applied to the predictor maps in order to find the most efficient prediction model. Prediction-Area (P-A) plots and field observations were used to assess and evaluate the accuracy of prediction models. Mineral prospectivity maps generated by AND, OR, FAP, and FAS operators were inaccurate and, therefore, unable to pinpoint the exact location of discovered gold occurrences. The GAMMA operator, on the other hand, produced acceptable results and identified potentially economic target sites. The P-A plot revealed that 68 percent of known orogenic gold deposits are found in high and very high potential regions. The GAMMA operator was shown to be useful in predicting and defining cost-effective target sites for orogenic gold deposits, as well as optimizing mineral deposit exploitation.
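
As a rough illustration of how the five fuzzy operators named above combine fuzzified evidence layers, here is a minimal numpy sketch; the membership maps and the gamma value are hypothetical stand-ins, not the study's actual layers or parameters.

```python
import numpy as np

# Hypothetical fuzzified predictor maps: membership values in [0, 1]
maps = [np.random.rand(100, 100) for _ in range(3)]

fuzzy_and = np.minimum.reduce(maps)                   # AND: pointwise minimum
fuzzy_or = np.maximum.reduce(maps)                    # OR: pointwise maximum
fap = np.prod(maps, axis=0)                           # Fuzzy Algebraic Product
fas = 1.0 - np.prod([1.0 - m for m in maps], axis=0)  # Fuzzy Algebraic Sum
gamma = 0.9                                           # hypothetical gamma value
fuzzy_gamma = fas**gamma * fap**(1.0 - gamma)         # GAMMA operator
```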

Keywords: mineral prospectivity mapping, fuzzy logic, GIS, orogenic gold deposit, Saqqez, Iran

Procedia PDF Downloads 118
4896 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the Kriging method using 236 values that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
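
A minimal sketch of the prediction step, assuming scikit-learn's MLPRegressor as the back-propagation feed-forward network; the coordinates, nutrient values, and layer sizes are purely hypothetical stand-ins for the paper's survey data and architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins: X holds sample coordinates (and any covariates),
# y holds measured N, P, K values at the sampled points.
X = np.random.rand(118, 2)
y = np.random.rand(118, 3)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

# A back-propagation multilayer feed-forward network; the layer sizes
# here are illustrative, not the authors' architecture.
ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000)
ann.fit(X_train, y_train)
print("validation R^2:", ann.score(X_val, y_val))

# Predict macronutrient values at un-sampled ("virtual") points
virtual_points = np.random.rand(118, 2)
virtual_values = ann.predict(virtual_points)
```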

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 68
4895 Optimizing the Residential Design Process Using Automated Technologies

Authors: Martin Georgiev, Milena Nanova, Damyan Damov

Abstract:

Architects, engineers, and developers need to analyse and implement a wide spectrum of data in different formats if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools, working with real geodesic data, that can generate residential solutions. Various codes, regulations and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility in producing design variants, and can be integrated into more conventional CAD and BIM workflows. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making can be introduced in the early phases of the design process. This gives designers powerful tools to explore diverse design possibilities, significantly improving the quality of the building investment over its entire lifecycle.
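
A minimal sketch of how generated variants might be ranked against prioritized criteria such as those listed above; the Variant fields and the weights are illustrative assumptions, not the authors' actual evaluation model.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    daylight_score: float      # 0-1, e.g. from a daylight simulation
    circulation_score: float   # 0-1, from circulation analysis
    space_utilization: float   # 0-1
    view_orientation: float    # 0-1

# Hypothetical prioritized weights for the criteria (sum to 1)
WEIGHTS = {"daylight_score": 0.35, "circulation_score": 0.25,
           "space_utilization": 0.25, "view_orientation": 0.15}

def rank_variants(variants):
    """Rank generated design variants by a weighted sum of their criteria."""
    score = lambda v: sum(w * getattr(v, k) for k, w in WEIGHTS.items())
    return sorted(variants, key=score, reverse=True)
```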

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 46
4894 Effectiveness of Computer Video Games on the Levels of Anxiety of Children Scheduled for Tooth Extraction

Authors: Marji Umil, Miane Karyle Urolaza, Ian Winston Dale Uy, John Charle Magne Valdez, Karen Elizabeth Valdez, Ervin Charles Valencia, Cheryleen Tan-Chua

Abstract:

Objective: Distraction techniques can be successful in reducing the anxiety of children during medical procedures. Dental procedures, in particular, are associated with dental anxiety, which has been identified as a significant and common problem in children; however, only limited studies have been conducted to address this problem. Thus, this study determined the effectiveness of computer video games on the anxiety levels of children aged 5-12 years scheduled for tooth extraction. Methods: A pre-test post-test quasi-experimental study was conducted involving 30 randomly assigned subjects, 15 in the experimental group and 15 in the control group. Subjects in the experimental group played computer video games for a maximum of 15 minutes, while no intervention was given to the control group. The modified Yale Pre-operative Anxiety Scale (m-YPAS), with a Cronbach's alpha of 0.9, was used to assess anxiety at two different points: upon arrival in the clinic (pre-test anxiety) and 15 minutes after the first measurement (post-test anxiety). Paired t-test and ANCOVA were used to analyze the gathered data. Results: There is a significant difference between the pre-test and post-test anxiety scores of the control group (p=0.0002), indicating increased anxiety. A significant difference was also noted between the pre-test and post-test anxiety scores of the experimental group (p=0.0002), indicating decreased anxiety. Comparatively, the experimental group showed a lower anxiety score (p<0.0001) than the control group. Conclusion: The use of computer video games is effective in reducing pre-operative anxiety among children and can serve as an alternative non-pharmacological intervention in pre-operative care.

Keywords: play therapy, preoperative anxiety, tooth extraction, video games

Procedia PDF Downloads 444
4893 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t could be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k=1, ..., K), namely tk = XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k=1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k=1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. Methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
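
A minimal numpy sketch of the first Rd-PLS component as defined above (q as the leading eigenvector of YY'XX'YY', then u = YY'q and t = XX'YY'q), with a deflation helper for higher-order components; this is an illustrative implementation of the stated equations, not the authors' code.

```python
import numpy as np

def rd_pls_first_component(X, Y):
    """First Rd-PLS component; X and Y are centered data matrices."""
    YY = Y @ Y.T
    XX = X @ X.T
    # q: eigenvector of YY'XX'YY' associated with the largest eigenvalue.
    # M is symmetric (A B A with A, B symmetric), so eigh applies.
    M = YY @ XX @ YY
    vals, vecs = np.linalg.eigh(M)
    q = vecs[:, np.argmax(vals)]
    u = YY @ q          # latent variable for Y: u = YY'q
    t = XX @ YY @ q     # latent variable for X: t = XX'YY'q
    return t, u, q

def deflate(Z, t):
    """Deflate a dataset with respect to the latent variable t."""
    t = t / np.linalg.norm(t)
    return Z - np.outer(t, t) @ Z
```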

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 146
4892 Munting Kamay, Munting Gawa: Children's Development Training, a UCU Experience

Authors: Elizabeth A. Montero

Abstract:

The project contemplated in this study aimed at enabling public school children aged ten to twelve who belong to low- and middle-income families. The pupils were provided training in communication, work, computer, and social skills. In this study, the researcher hypothesized that children given the opportunity to develop a skill through guidance and proper supervision would significantly learn, improve, and develop that skill. Since children's minds are highly absorbent, like a sponge taking in anything within its capacity, education should provide a rich environment offering an array of meaningful experiences. The scope of this study is well balanced, since it catered to the children's communication, work, computer, and social skills.

Keywords: Munting Kamay, Munting Gawa, children’s development training, UCU experience

Procedia PDF Downloads 429
4891 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys. Computed Tomography (CT) is one of the most significant of these modalities. This paper uses CT liver images to study automatic computer-aided techniques for calculating liver tumor volume. A segmentation method for detecting tumors from CT scans is proposed: a Gaussian filter is used for denoising the liver image, and an Adaptive Thresholding algorithm is used for segmentation. A multiple Region of Interest (ROI)-based method helps to characterize different features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, feature selection presents inherent difficulties. For better performance, a novel system is introduced in which multiple ROI-based feature selection and classification are performed. Obtaining relevant features for the Support Vector Machine (SVM) classifier is important for good generalization performance. The proposed system improves classification performance while significantly reducing the number of features used. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful in saving human lives.
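
A minimal sketch of the denoising and segmentation steps named above, assuming OpenCV; the file name and filter parameters are illustrative, not the paper's settings.

```python
import cv2

# Hypothetical file name; any grayscale CT liver slice would do.
img = cv2.imread("liver_ct_slice.png", cv2.IMREAD_GRAYSCALE)

# Denoise with a Gaussian filter, as in the paper (illustrative kernel/sigma)
smoothed = cv2.GaussianBlur(img, (5, 5), 1.5)

# Adaptive thresholding for segmentation: each pixel is compared against a
# Gaussian-weighted local mean, which copes with uneven CT intensity.
mask = cv2.adaptiveThreshold(smoothed, 255,
                             cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, 2)
```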

Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 507
4890 Attenuation of Endotoxin Induced Hepatotoxicity by Dexamethasone, Melatonin and Pentoxifylline in White Albino Mice: A Comparative Study

Authors: Ammara Khan

Abstract:

Sepsis is characterized by an overwhelming surge of cytokines and oxidative stress in response to one of many factors, with gram-negative bacteria commonly implicated. Despite major advances in the understanding of sepsis pathophysiology and therapeutic approaches, the death rate remains very high in septic patients due to multiple organ damage, including hepatotoxicity. The present study aimed to ascertain the efficacy of three different drugs, delivered separately and in combination, in endotoxin-induced hepatotoxicity in mice: a low-dose steroid, dexamethasone (3 mg/kg i.p.); an antioxidant, melatonin (10 mg/kg i.p.); and a phosphodiesterase inhibitor, pentoxifylline (75 mg/kg i.p.). Endotoxin/lipopolysaccharide (LPS)-induced hepatotoxicity was reproduced in mice by giving lipopolysaccharide of an E. coli serotype intraperitoneally. The preventive role was investigated by giving the experimental agent half an hour prior to the LPS injection, whereas the therapeutic potential was examined by delivering the agent after LPS. The extent of liver damage was assessed via serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) estimation, along with histopathological examination of liver tissue. Dexamethasone given before (Group 3) and after LPS (Group 4) significantly attenuated LPS-generated liver injury. Pentoxifylline generated similar results: serum ALT, AST, and histological alterations abated considerably (p≤0.05) both in animals subjected to pentoxifylline pre-treatment (Group 5) and post-treatment (Group 6). Melatonin was also successful in the prevention (Group 7) and treatment (Group 8) of LPS-invoked hepatotoxicity, as evident from the lessening of augmented ALT (p≤0.01) and AST (p≤0.01) along with the restoration of pathological changes in liver sections (p≤0.05). Combination therapies given after LPS administration, with dexamethasone in conjunction with melatonin (Group 9), dexamethasone together with pentoxifylline (Group 10), and pentoxifylline along with melatonin (Group 11), attenuated LPS-evoked hepatic dysfunction to a statistically significant degree. In conclusion, both melatonin and pentoxifylline produced promising results in endotoxin-induced hepatotoxicity and can be used as therapeutic adjuncts to conventional treatment strategies in sepsis-induced liver failure.

Keywords: endotoxin/lipopolysacchride, dexamethasone, hepatotoxicity, melatonin, pentoxifylline

Procedia PDF Downloads 276
4889 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity

Authors: Maxim Glushenkov, Alexander Kronberg

Abstract:

Isobaric expansion (IE) process is an alternative to conventional gas/vapor expansion accompanied by a pressure decrease, which is typical of all state-of-the-art heat engines. The elimination of the expansion stage accompanied by useful work means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter, and a comparison of the experimental results to a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30–85 °C and a heat sink temperature around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at a heat source temperature of 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful powers up to 1 kW and an oil pumping flowrate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5–6%. This efficiency is very high considering such a low temperature difference (10–65 °C) and low power (< 1 kW). The engine's observed performance is in good agreement with the predictions of the model. The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially at low temperatures (< 100 °C) and in the low power range (< 500 kW), where other known technologies are not economic. Thus, low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be harnessed in various energy conversion processes.
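
A quick back-of-the-envelope check of the reported efficiency against the Carnot limit for the stated temperatures (illustrative arithmetic only):

```python
# Carnot limit at the largest temperature difference tested
T_hot = 85 + 273.15   # heat source, K
T_cold = 20 + 273.15  # heat sink, K
eta_carnot = 1 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.1%}")  # about 18.2%

# The reported 3.5-6% engine efficiency is therefore roughly a third of the
# theoretical maximum at the 85 C source, which is notable for < 1 kW output.
```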

Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery

Procedia PDF Downloads 217
4888 Online Dietary Management System

Authors: Kyle Yatich Terik, Collins Oduor

Abstract:

The current healthcare system has made healthcare more accessible and efficient through the use of information technology, including computer algorithms that generate menus based on a diagnosis. While many such systems have been created over the years, their main objective is to help healthy individuals calculate their calorie intake and assist them by providing food selections based on a pre-specified calorie target. These applications have proven useful in some ways, but they are not suitable for monitoring, planning, and managing hospital patients, especially those in critical condition with particular dietary needs. The system addresses a number of objectives; the main objective is to design, develop, and implement an efficient, user-friendly, and interactive dietary management system. The specific design and development objectives include: a monitoring feature for users based on graphs; system-generated reports for users, dietitians, and system admins; a feature that allows users to measure their BMI (Body Mass Index); and a food template feature that guides the user on a balanced diet plan. In order to develop the system, research was carried out in Nairobi County, Kenya, with online questionnaires as the preferred research design approach. Responses from the 44 respondents highlighted the major challenges of the manual dietary system, including the lack of easily accessible calorie information for food products and the expense of physically visiting a dietitian to create a tailored diet plan. Conclusively, the system has the potential to improve quality of life as a whole by providing a standard for healthy living and by giving individuals readily available knowledge through food templates, allowing users to create their own diet plans consisting of a balanced diet.
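
A minimal sketch of the BMI feature mentioned above, using the standard BMI formula and WHO adult cut-offs; the abstract does not give the system's actual implementation.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Standard WHO adult cut-offs
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

print(bmi_category(bmi(70.0, 1.75)))  # 22.9 -> "normal"
```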

Keywords: DMS, dietitian, patient, administrator

Procedia PDF Downloads 157
4887 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
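
For reference, a minimal sketch of the kind of kernel Dido targets: one Jacobi sweep of a 5-point stencil, in plain numpy rather than Dido's generated ORWL code.

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of a 5-point stencil: every interior cell is updated
    from its neighbors' previous values (Gauss-Seidel, by contrast, reuses
    already-updated values within the same sweep)."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new
```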

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 124
4886 Dynamic Wind Effects in Tall Buildings: A Comparative Study of Synthetic Wind and Brazilian Wind Standard

Authors: Byl Farney Cunha Junior

Abstract:

In this work, the dynamic three-dimensional analysis of a 47-story building located in the city of Goiânia subjected to wind loads is carried out, with the loads generated using both the Brazilian wind code, NBR 6123 (ABNT, 1988), and the Synthetic-Wind method. To model the frames, three different methodologies are used: the shear building model and both two- and three-dimensional finite element models. To start the analysis, a plane frame is initially studied to validate the shear building model; in order to compare the results of natural frequencies and displacements at the top of the structure, the same plane frame is modeled using the finite element method in the SAP2000 V10 software. The same steps are applied to an idealized 20-story spatial frame, which helps in presenting the stiffness correction process applied to columns. Based on these models, the two methods used to generate the wind loads are presented: the discrete model proposed in the Brazilian wind code, NBR 6123 (ABNT, 1988), and the Synthetic-Wind method. The latter uses the Davenport spectrum, which is divided into a variety of frequencies to generate the temporal series of loads. Finally, the 47-story building is analyzed using both the three-dimensional finite element method in the SAP2000 V10 software and the shear building model. The models are loaded with wind loads generated by the wind code NBR 6123 (ABNT, 1988) and by the Synthetic-Wind method, considering different wind directions. The displacements and internal forces in columns and beams are compared, and a comparative study considering the situation of a full elevated reservoir is carried out. As can be observed, the displacements obtained from the SAP2000 V10 model are greater when loaded with the NBR 6123 (ABNT, 1988) wind load related to the permanent phase of the structure's response.
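
A minimal sketch of the Synthetic-Wind idea, assuming the classic Davenport spectrum and a sum of random-phase harmonics; the mean wind speed, drag coefficient, and frequency band below are illustrative values, not those of the Goiânia study.

```python
import numpy as np

def davenport_spectrum(f, V10=40.0, kappa=0.0025):
    """Davenport along-wind velocity spectrum (illustrative parameters)."""
    x = 1200.0 * f / V10
    return 4.0 * kappa * V10**2 * x**2 / (f * (1.0 + x**2)**(4.0 / 3.0))

def synthetic_wind(duration=600.0, dt=0.1, n_harmonics=200):
    """Fluctuating wind speed as a sum of harmonics with random phases."""
    freqs = np.linspace(0.005, 2.0, n_harmonics)
    df = freqs[1] - freqs[0]
    t = np.arange(0.0, duration, dt)
    phases = 2 * np.pi * np.random.rand(n_harmonics)
    amps = np.sqrt(2.0 * davenport_spectrum(freqs) * df)
    series = sum(a * np.cos(2 * np.pi * f * t + p)
                 for a, f, p in zip(amps, freqs, phases))
    return t, series
```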

Keywords: finite element method, synthetic wind, tall buildings, shear building

Procedia PDF Downloads 269
4885 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low-powered actuator. This drives the blade tip's oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low-powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated from a single blade, by 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch outputs 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to that generated from a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, by considering design in a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a certain cross-sectional area within a channel flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 215
4884 The Impact of Information and Communication Technology on Learning Quality and Conceptual Change in Moroccan High School Students

Authors: Azzeddine Atibi, Khadija El Kababi, Salim Ahmed, Mohamed Radid

Abstract:

Teaching and learning occupy a significant position globally, as the sustainable development of all sectors is intrinsically linked to the improvement of the educational system. The COVID-19 pandemic demonstrated that the integration of Information and Communication Technology (ICT) in the learning process is not optional but essential, and that proficiency in computer tools is an asset that will enhance pedagogy and ensure the continuity of learning under any circumstances. The objective of our study is to evaluate the impact of introducing computer tools on the quality of learning and the realization of conceptual change in learners. To this end, a learning situation was meticulously prepared, targeting first-year baccalaureate students in experimental sciences at a public high school, "Khadija Oum Almouminin," focusing on the chapter on glycemia regulation in the Moroccan Life and Earth Sciences (LES) curriculum. The learning situation was implemented with a pilot group that utilized computer tools and a control group that studied the same chapter without using ICT. The analysis and comparison of the results allowed us to verify the research question posed and to propose perspectives to ensure conceptual change in learners.

Keywords: information and communication technology, conceptual change, continuity of learning, life and earth sciences, glycemia regulation

Procedia PDF Downloads 33
4883 Induction Heating Process Design Using Comsol® Multiphysics Software Version 4.2a

Authors: K. Djellabi, M. E. H. Latreche

Abstract:

Induction heating computer simulation is a powerful tool for process design and optimization, induction coil design, equipment selection, as well as education and business presentations. The authors share their extensive experience in the practical use of computer simulation for different induction heating and heat treating processes. This paper deals with the mathematical modeling and numerical simulation of induction heating furnaces with axisymmetric geometries. For the numerical solution, we propose finite element methods (FEM) combined with a boundary formulation for the electromagnetic model, using the COMSOL® Multiphysics software. Some numerical results are shown for an industrial high-frequency furnace.

Keywords: numerical methods, induction furnaces, induction heating, finite element method, Comsol multiphysics software

Procedia PDF Downloads 443
4882 Time-Domain Simulations of the Coupled Dynamics of Surface Riding Wave Energy Converter

Authors: Chungkuk Jin, Moo-Hyun Kim, HeonYong Kang

Abstract:

A surface riding (SR) wave energy converter (WEC) is designed, and its feasibility and performance are numerically simulated by the author-developed floater-mooring-magnet-electromagnetics fully-coupled dynamic analysis computer program. The biggest advantage of the SR-WEC is that it remains equally effective even in low sea states, and its structural robustness is greatly improved compared to other existing WECs by simply riding along the wave surface. The numerical simulations and actuator testing clearly demonstrate that the concept works and that its efficiency can be improved through the optimization process.

Keywords: computer simulation, electromagnetics fully-coupled dynamics, floater-mooring-magnet, optimization, performance evaluation, surface riding, WEC

Procedia PDF Downloads 141
4885 Analysis of the Effect of Different Automatic Sprinkler Systems on Extinguishing Scooter Fires in Underground Parking Spaces

Authors: Yu-Hsiu Li, Chun-Hsun Chen

Abstract:

This study analyzes how automatic sprinkler systems protect scooters in underground parking spaces. In Taiwan, general buildings are currently equipped mainly with foam fire-extinguishing equipment, yet automatic sprinkler systems offer economic and environmental benefits as well as high stability, and China and the United States allow parking spaces to be fitted with automatic sprinkler systems under certain conditions. The literature on full-scale scooter fires indicates that the average fire growth coefficient is 0.19 kW/s², meaning a scooter fire is classified as an ultra-fast t-squared fire growth model; an automatic sprinkler system can suppress the flame height and prevent the fire from spreading. According to the computer simulation (FDS) literature, the activation order and trend of the sprinkler heads are the same in both computer simulations and full-scale experiments. This study uses the Fire Dynamics Simulator (FDS); the simulated scenarios include different system types (enclosed wet type and open type) and different configurations. The simulation results demonstrate that the open type requires less time to extinguish the fire than the enclosed wet type: if the horizontal distance between the sprinkler and the scooter ignition source is short, the sprinkler can act quickly and the heat release rate of the fire can be suppressed early.
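
A quick illustration of the ultra-fast t-squared growth model with the cited coefficient (illustrative arithmetic only):

```python
import math

alpha = 0.19  # fire growth coefficient from the cited literature, kW/s^2

def hrr(t_seconds):
    """Heat release rate of a t-squared fire: Q = alpha * t^2."""
    return alpha * t_seconds ** 2

# Time for a scooter fire to reach 1 MW (1000 kW)
t_1mw = math.sqrt(1000.0 / alpha)
print(f"Q reaches 1 MW after about {t_1mw:.0f} s")  # roughly 73 s
```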

Keywords: automatic sprinkler system, underground parking space, FDS, scooter fire extinguishing

Procedia PDF Downloads 137
4880 Microfluidic Based High Throughput Screening System for Photodynamic Therapy against Cancer Cells

Authors: Rina Lee, Chung-Hun Oh, Eunjin Lee, Jeongyun Kim

Abstract:

Photodynamic therapy (PDT) is a treatment that uses a photosensitizer as a drug to damage and kill cancer cells. After the photosensitizer is injected into the bloodstream, the drug is absorbed selectively by cancer cells. The area to be treated is then exposed to specific wavelengths of light, and the photosensitizer produces a form of oxygen that kills nearby cancer cells. PDT has the advantage of destroying the tumor with minimized side effects on normal cells. However, PDT is not yet a complete method for cancer therapy, because its mechanism is not fully clear and parameters such as light intensity and photosensitizer dose have not been optimized for different types of cancer. To optimize these parameters, we propose a novel microfluidic system that automatically controls the light exposure intensity with a personal computer (PC). A polydimethylsiloxane (PDMS) microfluidic chip is composed of (1) a cell culture channel layer, where cancer cells were trapped and tested with dosed photofrin (1 μg/ml used for the test) as the photosensitizer, and (2) a color dye layer acting as a neutral density (ND) filter to reduce the intensity of the light exposing the cell culture channels filled with cancer cells. Eight different light intensities (10%, 20%, …, 100%) are generated through various concentrations of blue dye filling the ND filter. As a light source, a light-emitting diode (LED) with a 635 nm wavelength was placed above the developed PDMS microfluidic chip. The total light exposure time was 30 minutes, and the HeLa and PC3 cancer cell lines were tested. Cell viability was evaluated with a Live/Dead assay kit (L-3224, Invitrogen, USA). The stronger the light intensity, the lower the observed cell viability, and vice versa. The system was thus demonstrated by investigating PDT against cancer cells to optimize the critical parameters of light intensity and photosensitizer dose. Our results suggest that the system can be used for optimizing the combined parameters of light intensity and photosensitizer dose against diverse cancer cell types.

Keywords: photodynamic therapy, photofrin, high throughput screening, HeLa

Procedia PDF Downloads 381
4879 Hand Motion and Gesture Control of Laboratory Test Equipment Using the Leap Motion Controller

Authors: Ian A. Grout

Abstract:

In this paper, the design and development of a system to provide hand motion and gesture control of laboratory test equipment is considered and discussed. The Leap Motion controller is used to provide an input to control a laboratory power supply as part of an electronic circuit experiment. By suitable hand motions and gestures, control of the power supply is provided remotely and without the need to physically touch the equipment used. As such, it provides an alternative manner in which to control electronic equipment via a PC and is considered here within the field of human computer interaction (HCI).

Keywords: control, hand gesture, human computer interaction, test equipment

Procedia PDF Downloads 311
4878 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient in both true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phase was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation was somewhat dependent, however, on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 136
4877 Impact of Fluid Flow Patterns on Metastable Zone Width of Borax in Dual Radial Impeller Crystallizer at Different Impeller Spacings

Authors: A. Čelan, M. Ćosić, D. Rušić, N. Kuzmanić

Abstract:

Conducting crystallization in an agitated vessel requires a proper selection of mixing parameters that would result in the production of crystals of specific properties. In dual impeller systems, which are characterized by more complex hydrodynamics due to possible fluid flow interactions, revealing a clear link between mixing parameters and crystallization kinetics is still an open issue. The aim of this work is to establish this connection by investigating how fluid flow patterns, generated by two impellers mounted on the same shaft, reflect on the metastable zone width of borax decahydrate, one of the most important parameters of the crystallization process. The investigation was carried out in a 15 dm³ bench-scale batch cooling crystallizer with an aspect ratio (H/T) equal to 1.3. For this reason, two radial straight blade turbines (4-SBT) were used for agitation. Experiments were conducted at different impeller spacings at the state of complete suspension. During the process of an unseeded batch cooling crystallization, solution temperature and supersaturation were continuously monitored, which enabled determination of the metastable zone width. The hydrodynamic conditions achieved in the vessel at the different impeller spacings investigated were analyzed in detail. This was done firstly by measuring the mixing time required to attain the desired level of homogeneity, and secondly by both photographing the fluid flow patterns generated in the described dual impeller system and simulating them with the VisiMix Turbulent software; a comparison of these two visualization methods was also performed. The experimentally obtained results showed that the metastable zone width is definitely affected by the hydrodynamics in the crystallizer. This means that this crystallization parameter can be controlled not only by adjusting the saturation temperature or cooling rate, as is usually done, but also by choosing a suitable impeller spacing that results in the formation of crystals of the desired size distribution.

Keywords: dual impeller crystallizer, fluid flow pattern, metastable zone width, mixing time, radial impeller

Procedia PDF Downloads 192
4876 Investigation of Produced and Ground Water Contamination of Al Wahat Area South-Eastern Part of Sirt Basin, Libya

Authors: Khalifa Abdunaser, Salem Eljawashi

Abstract:

The study area is threatened by numerous petroleum activities. The most important risk is associated with the misuse of oil and gas and the resulting pollution, such as the significant volumes of produced water, i.e., waste water generated during the production of oil and natural gas and disposed of on the surface surrounding oil and gas fields. This work concerns the impact of oil exploration and production activities on the physical and environmental fate of the area, focusing on the investigation and observation of crude oil migration as a toxic fluid and its penetration into groundwater resulting from produced water, impacted by oilfield operations, being disposed of on the ground surface in the Al Wahat area. The areal distribution of the dominant groundwater quality constituents was described in order to identify the major hydro-geochemical processes that affect water quality and to evaluate the relations between rock types and groundwater flow and the quality and geochemistry of water in the Post-Eocene aquifer. The chemical and physical characteristics of produced water, where it is produced, and its potential impacts on the environment and on oil and gas operations are discussed. A field survey was conducted to identify and locate a large number of monitoring wells previously drilled throughout the study area. Groundwater samples were systematically collected in order to detect the fate of spills resulting from the various activities at the oil fields in the study area. Spatial distribution maps of the water quality parameters were built using Kriging interpolation in the ArcMap software. Thematic maps were generated using GIS and remote sensing techniques, combining all these data layers into an active database for the area for the purpose of identifying hot spots and prioritizing locations based on their environmental conditions, as well as for monitoring plans.
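
A minimal sketch of the kriging interpolation step, assuming the pykrige library as a stand-in for the ArcMap kriging tool; the well coordinates and measured values below are purely hypothetical.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical monitoring-well data: x/y coordinates and one measured
# water-quality constituent at each well
x = np.random.uniform(0, 10, 50)
y = np.random.uniform(0, 10, 50)
z = np.random.uniform(0, 5, 50)

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.linspace(0, 10, 100)
gridy = np.linspace(0, 10, 100)
z_map, variance = ok.execute("grid", gridx, gridy)  # interpolated surface
```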

Keywords: Sirt Basin, produced water, Al Wahat area, groundwater

Procedia PDF Downloads 141
4875 Urban Rail Transit CBTC Computer Interlocking Subsystem Relying on Multi-Template Pen Point Tracking Algorithm

Authors: Xinli Chen, Xue Su

Abstract:

In the urban rail transit CBTC system, interlocking is considered one of the most basic subsystems, characterized by logical complexity and high safety requirements. The development and verification of traditional interlocking subsystems are entirely manual processes that rely too much on the designer, which often hides many uncertain factors. In order to solve this problem, this article applies the multi-template pen point tracking algorithm to model construction and verification, achieving the main safety attributes and using SCADE for formal verification. Experimental results show that this method helps to improve the quality and efficiency of interlocking software.

Keywords: computer interlocking subsystem, penpoint tracking, communication-based train control system, multi-template tip tracking

Procedia PDF Downloads 152
4874 An Affordability Evaluation of Computer-Based Social-Emotional Skills Interventions for School-Aged Children with Autism Spectrum Disorder

Authors: Ezra N. S. Lockhart

Abstract:

The number of children diagnosed with autism spectrum disorder (ASD) has increased approximately 173% during the last decade, making ASD the fastest growing developmental disability in the United States. This rise in prevalence rates has an undeniable effect on schools. ASD is overwhelmingly the most reported primary special education eligibility category for students accessing special education, at a national average of 61.3%. ASD is regarded as an urgent public health concern, at an estimated annual per capita cost of $3.2 million. Furthermore, considering that ASD is a lifelong disorder, estimated lifetime per capita costs reach $35 billion. The resources available to special education programs are insufficient to meet the educational needs of the 6.4 million students receiving special education services. This is especially true given that there has been, and continues to be, a chronic shortage of fully certified special education teachers, spanning decades. Reports indicate that 81.1% of students with special needs spend 40% or more of their time in general education classrooms. Regardless of whether support is implemented in the special education or general education classroom, the resource demand is obvious. Schools are actively seeking to implement low-cost alternatives and budget-saving measures in response to this demand. In public school settings, programs such as Applied Behavior Analysis are challenging to implement and fund at $40,000 per student per year. As an alternative, computer-based interventions are inexpensive, less time-consuming to implement, and require minimal teacher or paraprofessional training to administer. The affordability, pricing schemes, availability, and compatibility of computer-based interventions that support social and emotional skill development in individuals with ASD are discussed.

Keywords: affordability, autism spectrum disorder, computer-based intervention, emotional skills, social skills

Procedia PDF Downloads 163
4873 Designing the Management Plan for Health Care (Medical) Wastes in the Cities of Semnan, Mahdishahr and Shahmirzad

Authors: Rasouli Divkalaee Zeinab, Kalteh Safa, Roudbari Aliakbar

Abstract:

Introduction: Medical waste can lead to the generation and transmission of many infectious and contagious diseases due to the presence of pathogenic agents, thereby necessitating special management for the collection, decontamination, and final disposal of such products. This study aimed to design a centralized health care (medical) waste management program for the cities of Semnan, Mahdishahr, and Shahmirzad. Methods: This descriptive-analytical study was conducted over six months in the cities of Semnan, Mahdishahr, and Shahmirzad. In this study, the quantitative and qualitative characteristics of the generated wastes were determined by taking samples from all medical waste production centers. Then, the equipment, devices, and machines required for separate collection of the waste from the production centers and for their subsequent decontamination were estimated. Next, the investment costs, current costs, and working capital required for collection, decontamination, and final disposal of the wastes were determined. Finally, the payment for proper waste management of each category of medical waste-producing centers was determined. Results: 1,021 kilograms of medical waste are produced daily in the cities of Semnan, Mahdishahr, and Shahmirzad. It was estimated that a 1000-liter autoclave, a machine for collecting medical waste, four 60-liter bins, four 120-liter bins, and four 1200-liter bins were required for implementing the study plan. The estimated total annual medical waste management cost for Semnan City was also determined (23,283,903,720 Iranian Rials). Conclusion: The study results showed that establishing a proper management system for the medical waste generated in the three studied cities will cost the medical centers between 334,280 and 1,253,715 Iranian Rials in fees. The findings of this study provide comprehensive data regarding medical waste from the point of generation to the landfill site, which is vital for the government and the private sector.

Keywords: clinics, decontamination, management, medical waste

Procedia PDF Downloads 75
4872 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy

Authors: Marta Błażkiewicz, Adam Konefał

Abstract:

Currently, boron neutron capture therapy (BNCT) is carried out mainly with neutron beams generated in research nuclear reactors, which limits the possibility of performing BNCT in centers distant from such reactors. Moreover, the number of active nuclear reactors in operation in the world is decreasing due to the limited lifetime of their operation and the lack of new installations. Therefore, the possibilities of carrying out boron-neutron therapy based on neutron beams from experimental reactors are shrinking. However, the use of nuclear power reactors for BNCT purposes is impossible because their infrastructure is not intended for radiotherapy. A serious challenge, therefore, is to find ways to perform boron-neutron therapy based on neutrons generated outside research nuclear reactors. This work meets that challenge. Its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which will enable the above-mentioned therapy in medical centers unrelated to nuclear research reactors. Advances in the field of neutron source fabrication make it possible to achieve strong neutron fluxes. The current stage of research focuses on the development of virtual models of the above-mentioned sources using the Monte Carlo simulation method. In this study, the GEANT4 toolkit was used, including its High Precision Neutron model for simulating neutron-matter interactions. The models of the neutron sources were verified experimentally using the activation detector method with indium foil, together with the cadmium differentiation method, which separates the indium activation contribution of thermal neutrons from that of resonance neutrons. Due to the large number of factors affecting the result of the verification experiment, a 10% discrepancy between the simulation and experimental results was accepted.

Keywords: BNCT, virtual models, neutron sources, Monte Carlo, GEANT4, neutron activation detectors, gamma spectroscopy

Procedia PDF Downloads 182
4871 Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images

Authors: John Paul Q. Tomas, Mark Wilson L. de los Reyes, Kirsten Joyce P. Vasquez

Abstract:

Wrist fractures are the most frequent type of fracture and are often difficult for medical professionals to detect and localize. In this study, fractures in wrist X-ray images were located and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for image preprocessing, and trained RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with the Anisotropic Diffusion Smoothing filter trained for 50 epochs obtained the greatest accuracy of 99.14%, precision of 100%, sensitivity/recall of 98.41%, specificity of 100%, and an IoU score of 56.44% for the posteroanterior projection utilizing augmented data. For the lateral projection using augmented data, the RetinaNet model with an Anisotropic Diffusion filter trained for 50 epochs produced the highest accuracy of 98.40%, precision of 98.36%, sensitivity/recall of 98.36%, specificity of 98.43%, and an IoU score of 58.69%. Comparing the test results across the individual projections, models, and image filtering techniques, the Anisotropic Diffusion filter trained for 50 epochs produced the best classification and regression scores for both projections.
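
For reference, a minimal sketch of the IoU (Intersection over Union) score reported above, computed for two bounding boxes in (x1, y1, x2, y2) form; this is the standard localization metric, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # 0.1428...
```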

Keywords: artificial intelligence, computer vision, wrist fracture, deep learning

Procedia PDF Downloads 68
4870 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models

Authors: Keyi Wang

Abstract:

Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image-based hand gesture and video clip recognition system using a CNN (Convolutional Neural Network). A dataset containing images of 6 hand gestures is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model is trained on a dataset containing video clips of 4 hand gestures, resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures.
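
A minimal sketch of a 2D CNN of the kind described, assuming Keras, hypothetical 64x64 grayscale inputs, and 6 gesture classes; the paper's actual architecture is not specified in the abstract.

```python
from tensorflow.keras import layers, models

# Assumed input shape and class count; swap in the real dataset's values.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),  # one output per gesture class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10) with integer class labels
```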

Keywords: deep learning, hand gesture recognition, computer vision, image processing

Procedia PDF Downloads 132
4869 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan , V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit, which identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
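
A minimal sketch of the matching pursuit step used to find locally coherent segments, for a dictionary whose columns are unit-norm atoms; the Gabor/gammatone atom construction and SAE training are omitted.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: at each step, pick the dictionary atom most
    correlated with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual
```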

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 249
4868 Monitoring Prospective Sites for Water Harvesting Structures Using Remote Sensing and Geographic Information Systems-Based Modeling in Egypt

Authors: Shereif. H. Mahmoud

Abstract:

Egypt has limited water resources, and it will be under water stress by the year 2030. Therefore, Egypt should consider natural and non-conventional water resources to overcome such a problem; rainwater harvesting (RWH) is one solution. This paper presents a GIS-based decision support system (DSS) methodology that uses remote sensing data, field survey, and GIS to identify potential RWH areas. The input into the DSS includes maps of rainfall surplus, slope, potential runoff coefficient (PRC), land cover/use, and soil texture. The output is a map showing potential sites for RWH. Identification of suitable RWH sites was implemented in the ArcGIS model environment using the model builder of ArcGIS 10.1. Based on Analytical Hierarchy Process (AHP) analysis taking five layers into account, the spatial extents of RWH suitability areas were identified using Multi-Criteria Evaluation (MCE). The suitability model generated a suitability map for RWH with four suitability classes: Excellent, Moderate, Poor, and Unsuitable. The spatial distribution of the suitability map showed that the areas most suitable for RWH are concentrated in the northern part of Egypt. According to the averages, 3.24% of the total area has excellent and good suitability for RWH, while 45.04% and 51.48% of the total area have moderate and unsuitable ratings, respectively. The majority of the areas with excellent suitability have slopes between 2 and 8% and are intensively cultivated. The major soil type in the excellently suitable areas is loam, and the rainfall ranges from 100 up to 200 mm. The technique was validated by comparing the locations of existing RWH structures with the generated suitability map using the proximity analysis tool of ArcGIS 10.1. The result shows that most existing RWH structures are categorized as successful.
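
A minimal sketch of the MCE weighted-overlay step, with purely hypothetical AHP weights for the five layers; the paper's actual weights are not given in the abstract.

```python
import numpy as np

# Hypothetical AHP-derived weights for the five criteria layers (sum to 1)
WEIGHTS = {"rainfall_surplus": 0.35, "slope": 0.25, "runoff_coeff": 0.20,
           "land_cover": 0.12, "soil_texture": 0.08}

def weighted_overlay(layers, weights):
    """Weighted linear combination of normalized (0-1) raster layers,
    yielding a continuous suitability surface for later classification."""
    suitability = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        suitability += w * layers[name]
    return suitability

# Usage: layers = {name: np.ndarray of normalized scores, ...}
# classes such as Excellent/Moderate/Poor/Unsuitable can then be assigned
# by thresholding the returned surface.
```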

Keywords: rainwater harvesting (RWH), geographic information system (GIS), analytical hierarchy process (AHP), multi-criteria evaluation (MCE), decision support system (DSS)

Procedia PDF Downloads 355