Search results for: bar model method
30005 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real-world visual object detection tasks and the required recognition accuracy increase, object detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation approach that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large capacity gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
Keywords: object detection, knowledge distillation, convolutional network, model compression
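As an illustration of the soft-target component of knowledge distillation described above, the following is a minimal PyTorch sketch (not the authors' code; the temperature, weighting, and loss combination are assumptions) of a distillation loss that mixes the teacher's softened outputs with the ground-truth labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Combine soft-target (teacher) and hard-target (label) losses.

    T     -- temperature that softens the teacher's output distribution (assumed value)
    alpha -- weight given to the distillation term (assumed value)
    """
    # Soft targets: KL divergence between softened student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale to keep gradient magnitudes comparable
    # Hard targets: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```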
Procedia PDF Downloads 282
30004 Effects of Research-Based Blended Learning Model Using Adaptive Scaffolding to Enhance Graduate Students' Research Competency and Analytical Thinking Skills
Authors: Panita Wannapiroon, Prachyanun Nilsook
Abstract:
This paper reports the findings of a research and development (R&D) project aiming to develop a Research-Based Blended Learning Model Using Adaptive Scaffolding (RBBL-AS) to enhance graduate students' research competency and analytical thinking skills, and to study the results of using the model. The sample consisted of 10 experts in the field during the model development stage and 23 graduate students of KMUTNB during the RBBL-AS model tryout stage. The research procedure included 4 phases: 1) literature review, 2) model development, 3) model experiment, and 4) model revision and confirmation. The research results were divided into 3 parts according to these procedures, as described below. First, the data gathered from the literature review were reported as a draft model; the findings from the experts' interviews then indicated that the model should include 8 components to enhance graduate students' research competency and analytical thinking skills. The 8 components were 1) cloud learning environment, 2) Ubiquitous Cloud Learning Management System (UCLMS), 3) learning courseware, 4) learning resources, 5) adaptive scaffolding, 6) communication and collaboration tools, 7) learning assessment, and 8) research-based blended learning activity. Second, the findings from the experimental stage showed a statistically significant difference between the posttest and pretest scores for research competency and analytical thinking skills at the .05 level. The graduate students agreed that their satisfaction with learning through the RBBL-AS model was at a high level. Third, based on the findings from the experimental stage and the comments from the experts, the developed model was revised and is proposed in this report for further implementation and reference.
Keywords: research based learning, blended learning, adaptive scaffolding, research competency, analytical thinking skills
Procedia PDF Downloads 424
30003 Investigation on Machine Tools Energy Consumptions
Authors: Shiva Abdoli, Daniel T. Semere
Abstract:
Several studies have been conducted on the consumption of energy in cutting processes. Most of these studies focus on measuring the consumption and proposing consumption reduction methods. In this work, the relation between the cutting parameters and the consumption is investigated in order to establish a generalized energy consumption model that can be used for process and production planning in real production lines. Using the generalized model, process planning can be carried out by taking into account energy as a function of the selected process parameters. Similarly, the generalized model can be used in production planning to select the right operational parameters, such as batch sizes, routing, and buffer size, in a production line. The description and derivation of the model, as well as a case study, are given in this paper to illustrate the applicability and validity of the model.
Keywords: process parameters, cutting process, energy efficiency, Material Removal Rate (MRR)
Procedia PDF Downloads 506
30002 Development of EREC IF Model to Increase Critical Thinking and Creativity Skills of Undergraduate Nursing Students
Authors: Kamolrat Turner, Boontuan Wattanakul
Abstract:
Critical thinking and creativity are prerequisite skills for working professionals in the 21st century. A survey conducted in 2014 at the Boromarajonani College of Nursing, Chon Buri, Thailand, revealed that these skills among students across all academic years were at a low to moderate level. An action research study was conducted to develop the EREC IF Model, a framework which includes the concepts of experience, reflection, engagement, culture and language, ICT, and flexibility and fun, to guide pedagogic activities for 75 sophomores of the undergraduate nursing science program at the college. The model was applied to all professional nursing courses. Prior to implementation, workshops were held to prepare lecturers and students. Both lecturers and students initially expressed their discomfort and pointed to difficulties with the model. However, they later felt more comfortable, and by the end of the project they expressed their understanding and appreciation of the model. A survey conducted four and eight months after implementation found that the critical thinking and creativity skills of the sophomores were significantly higher than those recorded in the pretest. It can be concluded that the EREC IF Model is efficient for fostering critical thinking and creativity skills in the undergraduate nursing science program. This model should also be used with other levels of students.
Keywords: critical thinking, creativity, undergraduate nursing students, EREC IF model
Procedia PDF Downloads 324
30001 Development of a Model for Predicting Radiological Risks in Interventional Cardiology
Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for chronic total occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the interventional cardiology (IC) department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The peak skin dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators made it possible to specify their influence on the dose. A predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field. 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using our method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose
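A minimal sketch of the kind of multiple linear regression used above to predict peak skin dose from clinical indicators (not the authors' code; the variable values and data shapes are assumptions), using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical predictor matrix: one row per procedure,
# columns = [patient diameter (cm), J-CTO complexity score].
X = np.array([[28.0, 1], [31.5, 2], [35.2, 3], [30.1, 0], [33.7, 2]])
# Hypothetical measured peak skin dose (Gy) from radiochromic films.
y = np.array([1.2, 2.5, 4.1, 0.9, 3.0])

model = LinearRegression().fit(X, y)

# Pre-procedure estimate for a new patient (assumed values).
new_patient = np.array([[32.0, 2]])
print("Predicted PSD (Gy):", model.predict(new_patient)[0])
print("Coefficients:", model.coef_, "Intercept:", model.intercept_)
```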
Procedia PDF Downloads 142
30000 Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies
Authors: Reza Mohammadi, Mahmod R. Sahebi, Mehrnoosh Omati, Milad Vahidi
Abstract:
Classification of high-resolution polarimetric synthetic aperture radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers within and outside the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of a San Francisco Bay PolSAR image acquired by RADARSAT-2 in C-band. We used a segment-based decision-making strategy and compared the result with that of a traditional support vector machine (SVM) classifier. An overall classification accuracy of 90.95% with the proposed algorithm shows that it is comparable with state-of-the-art methods. In addition to increasing the classification accuracy, the proposed method decreases the undesirable speckle effect of SAR images.
Keywords: Bag of Visual Words (BOVW), classification, feature extraction, land cover management, Polarimetric Synthetic Aperture Radar (PolSAR)
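As a rough illustration of a Bag of Visual Words pipeline of the kind described above (a sketch only, not the authors' implementation; the feature extraction, codebook size, and data shapes are assumptions), using scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(local_features, n_words=64):
    # local_features: (N, D) array of low-level descriptors pooled from training data.
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(local_features)

def bovw_histogram(segment_features, codebook):
    # Assign each local descriptor of one segment to its nearest visual word
    # and build a normalized word-frequency histogram.
    words = codebook.predict(segment_features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Hypothetical data: 200 training segments, each with 50 local descriptors of dimension 8.
rng = np.random.default_rng(0)
train_feats = [rng.normal(size=(50, 8)) for _ in range(200)]
train_labels = rng.integers(0, 4, size=200)          # 4 land-cover classes (assumed)

codebook = build_codebook(np.vstack(train_feats))
X_train = np.array([bovw_histogram(f, codebook) for f in train_feats])
clf = SVC(kernel="rbf").fit(X_train, train_labels)   # classify BOVW histograms
```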
Procedia PDF Downloads 215
29999 Effects of Aerodynamic on Suspended Cables Using Non-Linear Finite Element Approach
Authors: Justin Nwabanne, Sam Omenyi, Jeremiah Chukwuneke
Abstract:
This work presents a structural nonlinear static analysis of a horizontal taut cable using the finite element analysis (FEA) method. The FEA was performed analytically to determine the tensions at each nodal point and subsequently performed computationally, based on the finite element displacement method, using the FEA software ANSYS 14.0 to determine the cable's behaviour under the influence of aerodynamic forces imposed on it. A convergence procedure is adapted into the method to prevent excessive displacements through the computations. The work compared the two FEA cases by examining the effectiveness of the analytical model in describing the response with few degrees of freedom and the ability of the adopted nonlinear finite element procedure to capture the complex features of cable dynamics with reference to the external aerodynamic influence. The results show that the analytical FEM results without aerodynamic influence exhibit a parabolic response with an optimum deflection at nodal points 12 and 13, with the cable weight at nodes 12 and 13 having the value -1.002936 N, while the cable tension shows an optimum deflection value for nodes 12 and 13 at -189396.97 kg/km. The maximum displacement for the cable system obtained from ANSYS 14.0 was 4483.83 mm for the X, Y and Z components of displacement at node number 2, while the maximum displacement obtained for all the directional components was 4218.75 mm. The dynamic behaviour of the taut cable investigated has application in a typical power transmission line. The aerodynamic influences on the cables, considered in the FEA approach by employing ANSYS 14.0, showed a complex modal behaviour as expected.
Keywords: aerodynamics, cable tension and weight, finite element analysis, nodal, non-linear model, optimum deflection, suspended cable, transmission line
Procedia PDF Downloads 282
29998 A Method for Clinical Concept Extraction from Medical Text
Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg
Abstract:
Natural language processing (NLP) has made a major leap in the last few years in practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/machine learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds, in an unsupervised manner, a contextual model containing vector representations of concepts in the corpus (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text; the system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and annotates the most semantically similar ones with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: "The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes." Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization
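A minimal sketch of the seed-set expansion in step 2 (not the authors' system; the embedding model, toy corpus, and similarity cutoff are assumptions), using gensim word/phrase vectors and cosine similarity:

```python
from gensim.models import Word2Vec

# Hypothetical pre-tokenized in-domain corpus; multi-word phrases are joined
# with underscores so each gets a single vector (standing in for Phrase2Vec).
corpus = [
    ["patient", "reports", "dry_mouth", "and", "blurred_vision"],
    ["itchy_skin", "and", "fatigue", "are", "common", "symptoms"],
    ["weight_loss", "may", "accompany", "dry_mouth", "in", "diabetes"],
    ["treatment", "reduced", "fatigue", "and", "itchy_skin"],
]

# Step 1: unsupervised contextual model over the in-domain corpus.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=1)

# Step 2: expand a seed set for the "symptom" concept by semantic similarity.
seed_terms = ["dry_mouth", "itchy_skin", "blurred_vision"]
candidates = model.wv.most_similar(positive=seed_terms, topn=5)
symptom_terms = set(seed_terms) | {term for term, score in candidates if score > 0.0}
print(symptom_terms)
```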
Procedia PDF Downloads 139
29997 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves
Authors: Aymen Laadhari
Abstract:
During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative for readily modeling large deformations and handling contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, while easily modeling the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built, and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
Keywords: Eulerian, level set, Newton, valve
Procedia PDF Downloads 282
29996 Impact of Facility Disruptions on Demand Allocation Strategies in Reliable Facility Location Models
Authors: Abdulrahman R. Alenezi
Abstract:
This research investigates the effects of facility disruptions on demand allocation within the context of the Reliable Facility Location Problem (RFLP). We explore two distinct scenarios: one where primary and backup facilities can fail simultaneously and another where such simultaneous failures are not possible. The RFLP model is tailored to reflect these scenarios, incorporating different approaches to transportation cost calculations. Utilizing a Lagrange relaxation method, the model achieves high efficiency, yielding an average optimality gap of 0.1% within 12.2 seconds of CPU time. Findings indicate that primary facilities are typically sited closer to demand points than backup facilities. In cases where simultaneous failures are prohibited, demand points are predominantly assigned to the nearest available facility. Conversely, in scenarios permitting simultaneous failures, demand allocation may prioritize factors beyond mere proximity, such as failure rates. This study highlights the critical influence of facility reliability on strategic location decisions, providing insights for enhancing resilience in supply chain networks.
Keywords: reliable supply chain network, facility location problem, reliable facility location model, Lagrange relaxation
Procedia PDF Downloads 33
29995 Feasibility of Using Bike Lanes in Conjunctions with Sidewalks for Ground Drone Applications in Last Mile Delivery for Dense Urban Areas
Authors: N. Bazyar Shourabi, K. Nyarko, C. Scott, M. Jeihnai
Abstract:
Ground drones have the potential to reduce the cost and time of making last-mile deliveries. They also have the potential to make a huge impact on human life. Despite this potential, little work has gone into developing a suitable feasibility model for ground drone delivery in dense urban areas. Today, most experimental ground delivery drones utilize sidewalks only, with just a few of them starting to use bike lanes, which cover a significant portion of some urban areas. This study examines the feasibility of using bike lanes in conjunction with sidewalks for ground drone applications in last-mile delivery in dense urban areas. The work begins by surveying bike lanes and sidewalks within the city of Boston using Geographic Information System (GIS) software to determine the percentage of coverage currently available within the city. Six scenarios are then examined. Based on this research, a mathematical model is developed, and the daily cost of delivering packages under each scenario is calculated by the model. Comparing the drone delivery scenarios with the traditional method of package delivery using trucks provides essential information concerning the feasibility of implementing routing protocols that combine the use of sidewalks and bike lanes. The preliminary results of the model show that ground drones that can travel via sidewalks or bike lanes have the potential to significantly reduce delivery cost.
Keywords: ground drone, intelligent transportation system, last-mile delivery, sidewalk robot
Procedia PDF Downloads 151
29994 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capture from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new way to build outcrop-based reservoir models, which supply a crucial piece of information for understanding heterogeneities in sandstone facies with high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model, combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR, to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used in digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model, which can serve as an analogue for subsurface reservoir studies.
Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
Procedia PDF Downloads 233
29993 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood
Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty
Abstract:
We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate useless information. Then, a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties for NIR analysis of wood samples, we applied various pretreatment methods such as straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scattering correction, first derivative, and second derivative, as well as their combinations with other treatments such as first derivative + straight line subtraction, first derivative + vector normalization, and first derivative + multiplicative scattering correction. For each combination of preprocessing with different NIR regions, the RMSECV, RMSEP, and optimum factors/rank were obtained through the optimization process of model development. More than 350 combinations were evaluated during the optimization process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm⁻¹ with straight line subtraction, constant offset elimination, first derivative, and second derivative preprocessing methods, which were found to be the most appropriate for model development.
Keywords: FT-NIR, mechanical properties, pre-processing, PLS
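A minimal sketch of one preprocessing-plus-PLS pipeline of the kind discussed above (not the authors' code; the spectra, derivative settings, and number of latent variables are assumptions), using SciPy's Savitzky-Golay first derivative and scikit-learn's PLSRegression:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: 60 wood samples x 500 absorbance points in the 4000-7500 cm-1 region,
# with a mechanical property (e.g., modulus of elasticity) as the response.
rng = np.random.default_rng(42)
spectra = rng.normal(size=(60, 500))
moe = rng.normal(loc=12.0, scale=2.0, size=60)

# Pre-processing: Savitzky-Golay first derivative along the wavenumber axis.
spectra_d1 = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

# PLS calibration model with an assumed number of latent variables.
pls = PLSRegression(n_components=6)
rmsecv = np.sqrt(-cross_val_score(pls, spectra_d1, moe, cv=5,
                                  scoring="neg_mean_squared_error")).mean()
print("RMSECV:", rmsecv)
pls.fit(spectra_d1, moe)  # final calibration model
```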
Procedia PDF Downloads 365
29992 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier
Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu
Abstract:
Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improvements in diagnostic accuracy using artificial intelligence (AI) to decrease misdiagnosis continue to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tones, have increasingly been recognized as a problem, resulting in noted limitations of the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithm, based on a data augmentation method that optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. This resulted in a CNN model with modified input data that could learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
Keywords: bias, augmentation, melanoma, convolutional neural network
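A minimal sketch of the kind of geometric augmentation pipeline described above (not the authors' code; the transform ranges and dataset layout are assumptions), using torchvision transforms:

```python
from torchvision import transforms

# Geometric augmentations: rotation, translation, scaling, shearing, and flipping.
# The ranges below are illustrative assumptions, not the values used in the study.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=20,                 # rotation
                            translate=(0.1, 0.1),       # translation
                            scale=(0.9, 1.1),           # scale change
                            shear=10),                  # shearing
    transforms.RandomHorizontalFlip(p=0.5),             # flipping
    transforms.ToTensor(),
])

# During training, each original lesion image yields new, transformed samples,
# e.g. inside a torchvision ImageFolder/DataLoader pipeline (hypothetical paths):
# dataset = torchvision.datasets.ImageFolder("lesions/", transform=augment)
# loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```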
Procedia PDF Downloads 216
29991 Proactive Pure Handoff Model with SAW-TOPSIS Selection and Time Series Predict
Authors: Harold Vásquez, Cesar Hernández, Ingrid Páez
Abstract:
This paper approaches cognitive radio techniques and applies a pure proactive handoff model to decrease interference between the primary user (PU) and secondary user (SU), comparing it with a reactive handoff model. The study analyzes the multi-criteria decision-making models SAW and TOPSIS joined with three dynamic prediction techniques: AR, MA, and ARMA. Four metrics are used to evaluate the best model: number of failed handoffs, number of handoffs, number of predictions, and number of interferences. The results show the advantages of using this type of pure proactive model to predict changes in the PU on the selected channel and to reduce interference. The model that showed the best performance was TOPSIS-MA; although TOPSIS-AR had a higher predictive ability, this was not reflected in the interference reduction.
Keywords: cognitive radio, spectrum handoff, decision making, time series, wireless networks
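A minimal sketch of the TOPSIS ranking step used for channel selection above (not the authors' code; the decision matrix, criteria weights, and benefit/cost labels are assumptions):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS.

    matrix  -- raw decision matrix, one row per candidate channel
    weights -- criterion weights summing to 1
    benefit -- True for criteria to maximize, False for criteria to minimize
    """
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))      # vector normalization
    v = norm * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_worst = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)                     # closeness: higher is better

# Hypothetical channels scored on [predicted idle time, SNR, PU arrival rate].
channels = np.array([[8.0, 15.0, 0.2],
                     [5.0, 20.0, 0.1],
                     [9.0, 12.0, 0.4]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])    # arrival rate is a cost criterion
print("Closeness scores:", topsis(channels, weights, benefit))
```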
Procedia PDF Downloads 495
29990 Software Assessment Using Ant Colony Optimization Algorithm
Authors: Saad M. Darwish
Abstract:
Recently, software quality issues have come to be seen as an important subject, as we see enormous growth in the number of agencies involved in the software industry. However, these agencies cannot guarantee the quality of their products, thus leaving users in uncertainty. Software certification is an extension of quality assurance in the sense that quality must be measured prior to the certification-granting process. This research contributes to solving the problem of software assessment by proposing a model for the assessment and certification of software products that uses a fuzzy inference engine to integrate both process-driven and application-driven quality assurance strategies. The key idea of the proposed model is to improve the compactness and the interpretability of the model's fuzzy rules by employing an ant colony optimization (ACO) algorithm, which tries to find a good rule description by means of compound rules initially expressed as traditional single rules. The model has been tested in a case study, and the results demonstrate the feasibility and practicability of the model in a real environment.
Keywords: optimization technique, quality assurance, software certification model, software assessment
Procedia PDF Downloads 492
29989 Simulation Study on Vehicle Drag Reduction by Surface Dimples
Authors: S. F. Wong, S. S. Dol
Abstract:
Automotive designers have been trying to use dimples to reduce drag in vehicles. In this work, a dimpled surface has been applied to a car model, and a parameter called the dimple ratio (DR), the ratio between the depth of the half dimple and the print diameter of the dimple, has been introduced and numerically simulated via the k-ε turbulence model to study the aerodynamic performance with increasing dimple depth. The Ahmed body car model with a 25-degree slant angle is simulated with DR values of 0.05, 0.2, 0.3, 0.4 and 0.5 at a Reynolds number of 176387 based on the frontal area of the car model. The geometry of the dimple changes the kinematics and dynamics of the flow. Complex interaction between the turbulent fluctuating flow and the mean flow escalates the turbulence quantities. The maximum level of turbulent kinetic energy occurs at DR = 0.4. It can be concluded that the dimples generate extra turbulence energy at the surface, and as a result, the application of dimples manages to reduce the drag coefficient of the car model compared to the model with a smooth surface.
Keywords: aerodynamics, boundary layer, dimple, drag, kinetic energy, turbulence
Procedia PDF Downloads 318
29988 A Comparative Analysis of the Performance of COSMO and WRF Models in Quantitative Rainfall Prediction
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Mary Nsabagwa, Triphonia Jacob Ngailo, Joachim Reuder, Schättler Ulrich, Musa Semujju
Abstract:
Numerical weather prediction (NWP) models are considered powerful tools for guiding quantitative rainfall prediction. Several NWP models exist and are used at many operational weather prediction centers. This study considers two models, namely the Consortium for Small-scale Modeling (COSMO) model and the Weather Research and Forecasting (WRF) model, and compares their ability to predict rainfall over Uganda for the period 21st April 2013 to 10th May 2013 using the root mean square error (RMSE) and the mean error (ME). In comparing the performance of the models, this study assesses their ability to predict light rainfall events and extreme rainfall events. All the experiments used the default parameterization configurations and the same horizontal resolution (7 km). The results show that the COSMO model had a tendency to largely predict no rain, which explained its under-prediction. The COSMO model (RMSE: 14.16; ME: -5.91) presented a significantly (p = 0.014) higher magnitude of error compared to the WRF model (RMSE: 11.86; ME: -1.09). However, the COSMO model (RMSE: 3.85; ME: 1.39) performed significantly (p = 0.003) better than the WRF model (RMSE: 8.14; ME: 5.30) in simulating light rainfall events. Both models under-predicted extreme rainfall events, with the COSMO model (RMSE: 43.63; ME: -39.58) presenting significantly higher error magnitudes than the WRF model (RMSE: 35.14; ME: -26.95). This study recommends additional diagnosis of the models' treatment of deep convection over the tropics.
Keywords: comparative performance, the COSMO model, the WRF model, light rainfall events, extreme rainfall events
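For reference, the two verification scores used above can be computed as below (a sketch with made-up forecast/observation pairs, not the study's data):

```python
import numpy as np

def rmse(forecast, observed):
    return np.sqrt(np.mean((forecast - observed) ** 2))

def mean_error(forecast, observed):
    # Negative values indicate under-prediction, positive values over-prediction.
    return np.mean(forecast - observed)

# Hypothetical 24-hour rainfall totals (mm) at a few stations.
obs = np.array([0.0, 12.5, 3.2, 45.0, 7.8])
fcst = np.array([0.0, 8.0, 5.1, 20.0, 6.0])
print("RMSE:", rmse(fcst, obs), "ME:", mean_error(fcst, obs))
```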
Procedia PDF Downloads 264
29987 3D Human Body Reconstruction Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
The aim of this study was to improve the results of 3D human body reconstruction. The MvP algorithm was adopted to obtain key point information from multiple perspectives. This algorithm allows the capture of human posture and joint positions from multiple angles, providing more comprehensive and accurate data. The study also incorporated the SMPL-X model, which has been widely used for human body modeling, to achieve more accurate 3D reconstruction results. The use of the MvP algorithm made it possible to observe the reconstructed object from multiple angles, thus reducing the problems of blind spots and missing information. The algorithm was able to effectively capture key point information, including the position and rotation angle of limbs, providing key data for subsequent 3D reconstruction. Compared with traditional single-view methods, the multi-view fusion method significantly improved the accuracy and stability of reconstruction. By combining the MvP algorithm with the SMPL-X model, we successfully achieved better 3D human body reconstruction results. The SMPL-X model is highly scalable and can generate highly realistic 3D human body models, thus providing more detail and shape information.
Keywords: 3D human reconstruction, multi-view, joint point, SMPL-X
Procedia PDF Downloads 74
29986 A Cohesive Zone Model with Parameters Determined by Uniaxial Stress-Strain Curve
Abstract:
A key issue for cohesive zone models is how to determine the cohesive zone model parameters from real material test data. In this paper, the uniaxial nominal stress-strain curve (SS curve) is used to determine two key parameters of a cohesive zone model (CZM): the maximum traction and the area under the curve of the traction-separation law (TSL). To this end, the true SS curve is obtained from the nominal SS curve, and the relationship between the nominal SS curve and the TSL is derived based on the assumption that the stress for cracking should be the same in both the CZM and the real material. In particular, the true SS curve after necking is derived from the nominal SS curve by taking the average of a power-law extrapolation and a linear extrapolation, and a damage factor is introduced to offset the true stress reduction caused by the voids generated in the necking zone. The maximum traction of the TSL is equal to the maximum true stress calculated with the damage factor at the end of hardening. In addition, a simple specimen is modeled in Abaqus/Standard to calculate the critical J-integral, and the fracture energy calculated from the critical J-integral represents the stored strain energy in the necking zone calculated from the true SS curve. Finally, the CZM parameters obtained by the present method are compared to those used in a previous related work for a simulation of the drop-weight tear test.
Keywords: dynamic fracture, cohesive zone model, traction-separation law, stress-strain curve, J-integral
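A minimal sketch of the nominal-to-true stress-strain conversion that underlies the parameter identification above (standard relations before necking; the example data, power-law parameters, and simple averaging of extrapolations are assumptions, not the authors' exact procedure):

```python
import numpy as np

# Hypothetical nominal (engineering) stress-strain data up to necking.
eng_strain = np.array([0.00, 0.02, 0.05, 0.10, 0.15])
eng_stress = np.array([0.0, 350.0, 420.0, 480.0, 500.0])   # MPa

# Standard conversion, valid up to the onset of necking:
#   sigma_true = sigma_eng * (1 + eps_eng),  eps_true = ln(1 + eps_eng)
true_strain = np.log(1.0 + eng_strain)
true_stress = eng_stress * (1.0 + eng_strain)

# Beyond necking, the paper averages a power-law and a linear extrapolation;
# a rough illustration for one post-necking true strain value:
eps_post = 0.30
K, n = 800.0, 0.20                       # assumed power-law (Hollomon) parameters
power_law = K * eps_post ** n
slope = (true_stress[-1] - true_stress[-2]) / (true_strain[-1] - true_strain[-2])
linear = true_stress[-1] + slope * (eps_post - true_strain[-1])
sigma_post = 0.5 * (power_law + linear)  # averaged extrapolation
print("Post-necking true stress estimate (MPa):", sigma_post)
```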
Procedia PDF Downloads 476
29985 Mathematical Model for Output Yield Obtained by Single Slope Solar Still
Authors: V. Nagaraju, G. Murali, Nagarjunavarma Ganna, Atluri Pavan Kalyan, N. Sree Sai Ganesh, V. S. V. S. Badrinath
Abstract:
The present work focuses on the development of a mathematical model for the yield obtained by a single slope solar still incorporated with cylindrical pipes filled with sand. The mathematical results were validated against the experimental results for a 3 cm water level at the basin; the deviation between the mathematical model and the experimental investigation is within 11%. A theoretical model to predict the yield obtained due to the capillary effect was proposed first, and then, to predict the total yield, the thermal effect model was integrated with the capillary effect model. From the results obtained, it is understood that the yield is higher for solar stills with sand-filled cylindrical pipes than for solar stills without them. The model was later used to predict the yield for 1 cm and 2 cm water levels at the basin, and the maximum yield was obtained for a 1 cm water level. This means the solar still produces a better yield with a lower water level at the basin, which may be because of the availability of more space in the sand for evaporation.
Keywords: solar still, cylindrical pipes, still efficiency, mathematical modeling, capillary effect model, yield, solar desalination
Procedia PDF Downloads 123
29984 Audio-Lingual Method and the English-Speaking Proficiency of Grade 11 Students
Authors: Marthadale Acibo Semacio
Abstract:
Speaking skill is a crucial part of English language teaching and learning, which shows the great importance of this skill in English language classes. Through speaking, ideas and thoughts are shared with other people, and smooth interaction between people takes place. The study examined the speaking proficiency levels of the control and experimental groups in terms of pronunciation, grammatical accuracy, and fluency. As a quasi-experimental study, it also determined the presence or absence of significant changes in the groups' speaking proficiency levels, in terms of pronouncing words correctly, grammatical accuracy, and fluency, given the two methods applied to the groups of students: the traditional and audio-lingual methods. Descriptive and inferential statistics were employed according to the stated specific problems. The study employed a video presentation with prior information about it; in the video, the teacher acts as the model, giving instructions on what is going to be done, and then the students perform the activity. The students were paired purposively based on their learning capabilities. Observing proper ethics, the researcher audio-recorded their performance to help assess the learners using a modified speaking rubric. The study revealed that those under the traditional method were more fluent than those under the audio-lingual method. With respect to the way in which each method deals with the feelings of the student, the audio-lingual method fails to provide a principle that would relate to this area and follows the assumption that the students' intrinsic motivation to learn the target language will spring from their interest in the structure of the language. However, the speaking proficiency levels of the students were remarkably reinforced in reading different words through the aid of aural media with their teachers. The study concluded that the audio-lingual method is not a stand-alone method but only an aid for the teacher in helping students improve their speaking proficiency in the English language. Hence, the audio-lingual approach is encouraged in teaching the English language, on top of the chalk-talk or traditional method, to improve the speaking proficiency of students.
Keywords: audio-lingual, speaking, grammar, pronunciation, accuracy, fluency, proficiency
Procedia PDF Downloads 72
29983 Iterative Panel RC Extraction for Capacitive Touchscreen
Authors: Chae Hoon Park, Jong Kang Park, Jong Tae Kim
Abstract:
The electrical characteristics of capacitive touchscreens need to be accurately analyzed to achieve better performance for multi-channel capacitance sensing. In this paper, we extracted the panel resistances and capacitances of a touchscreen by comparing measurement data with model data. By employing a lumped RC model for the driver-to-receiver paths in the touchscreen, we estimated resistance and capacitance values according to the physical lengths of the channel paths, which are proportional to the RC model parameters. As a result, we obtained a model with 95.54% accuracy with respect to the measurement data.
Keywords: electrical characteristics of capacitive touchscreen, iterative extraction, lumped RC model, physical lengths of channel paths
Procedia PDF Downloads 339
29982 Finite Element Analysis of Oil-Lubricated Elliptical Journal Bearings
Authors: Marco Tulio C. Faria
Abstract:
Fixed-geometry hydrodynamic journal bearings are among the best supporting systems for several applications of rotating machinery. Cylindrical journal bearings present excellent load-carrying capacity and low manufacturing costs, but they are subject to oil-film instability at high speeds. One attempt at overcoming this instability problem has been the development of non-circular journal bearings. This work deals with an analysis of oil-lubricated elliptical journal bearings using the finite element method. Steady-state and dynamic performance characteristics of elliptical bearings are rendered by zeroth- and first-order lubrication equations obtained through a linearized perturbation method applied to the classical Reynolds equation. Four-node isoparametric rectangular finite elements are employed to model the bearing thin-film flow. Curves of elliptical bearing load capacity and dynamic force coefficients are rendered at several operating conditions. The results presented in this work demonstrate the influence of the bearing ellipticity on its performance at different loading conditions.
Keywords: elliptical journal bearings, non-circular journal bearings, hydrodynamic bearings, finite element method
Procedia PDF Downloads 451
29981 Second Order Statistics of Dynamic Response of Structures Using Gamma Distributed Damping Parameters
Authors: Badreddine Chemali, Boualem Tiliouine
Abstract:
This article presents the main results of a numerical investigation of the uncertainty of the dynamic response of structures with statistically correlated, Gamma-distributed random damping. A computational method based on a Linear Statistical Model (LSM) is implemented to predict second-order statistics of the response of a typical industrial building structure. The significance of random damping with correlated parameters and its implications for the sensitivity of the structural peak response in the neighborhood of a resonant frequency are discussed in light of considerable ranges of damping uncertainties and correlation coefficients. The results are compared to those generated using Monte Carlo simulation techniques. The numerical results show the importance of damping uncertainty and the statistical correlation of damping coefficients when obtaining accurate probabilistic estimates of the dynamic response of structures. Furthermore, the effectiveness of the LSM model in efficiently predicting uncertainty propagation for structural dynamic problems with correlated damping parameters is demonstrated.
Keywords: correlated random damping, linear statistical model, Monte Carlo simulation, uncertainty of dynamic response
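A minimal sketch of the Monte Carlo benchmark step above: sampling correlated, Gamma-distributed damping ratios via a Gaussian copula and propagating them through a simple resonant amplification factor (the distribution parameters, correlation value, and response measure are assumptions, not the article's model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_samples = 10_000

# Target marginals: two modal damping ratios, Gamma distributed (assumed parameters).
gamma_shape, gamma_scale = 4.0, 0.01          # mean damping ratio ~ 0.04
corr = np.array([[1.0, 0.6], [0.6, 1.0]])     # assumed correlation between the two ratios

# Gaussian copula: correlated normals -> uniforms -> Gamma quantiles.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n_samples)
u = stats.norm.cdf(z)
zeta = stats.gamma.ppf(u, a=gamma_shape, scale=gamma_scale)   # shape (n_samples, 2)

# Propagate through the resonant amplification factor 1 / (2*zeta) for each mode,
# then collect second-order statistics of the peak response.
peak = (1.0 / (2.0 * zeta)).max(axis=1)
print("Mean peak amplification:", peak.mean())
print("Std of peak amplification:", peak.std())
```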
Procedia PDF Downloads 283
29980 Pilot Scale Deproteinization Study on Fish Scale Using Response Surface Methodology
Authors: Fatima Bellali, Mariem Kharroubi
Abstract:
Fish scale waste is one of the main sources for the production of value-added products such as collagen. The main aim of this study is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM) at pilot scale. To search for the optimal conditions, a Box-Behnken design of experiments (DOE) method was carried out. The model-predicted values of product ash content were in good agreement with the experimental values (R² = 0.9813). Finally, model-based optimization was carried out to identify the operating parameters (reaction time = 4 h and solid-liquid ratio = 1/10) and to obtain the lowest collagen content.
Keywords: pilot scale, Plackett and Burman design, fish waste, deproteinization
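A minimal sketch of the response-surface step described above: fitting a second-order polynomial model to design-of-experiments data and locating the optimum (not the study's data; the factor names, levels, and responses are assumptions, and a simple coded grid stands in for the actual Box-Behnken point pattern):

```python
import numpy as np
from itertools import product
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Hypothetical coded factors: x1 = reaction time, x2 = solid-liquid ratio, x3 = temperature.
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(1)
# Hypothetical measured response (e.g., residual protein %) with a minimum near (1, 0, 0).
y = 5.0 - 2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.8 * X[:, 2] ** 2 + rng.normal(0, 0.1, len(X))

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2:", model.score(quad.transform(X), y))

# Model-based optimization: evaluate the fitted surface on a fine grid of coded levels.
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
pred = model.predict(quad.transform(grid))
print("Predicted optimum (coded levels):", grid[np.argmin(pred)])
```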
Procedia PDF Downloads 163
29979 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models
Authors: Suriya
Abstract:
Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution, so monitoring and predicting PM₂.₅ concentrations in UB is of great significance for the health of the local people and for environmental management. As yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by using AOD as an input was tested; and the performance of the models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement in the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while that of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, the Bayes-LSTM and CNN-LSTM models, pioneering the use of AOD data from H8 and demonstrating that including AOD input data improves the performance of both proposed models.
Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar
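A minimal Keras sketch of a CNN-LSTM architecture of the kind described above (not the authors' network; the layer sizes, window length, feature count, and random data are assumptions):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical input: sliding windows of 24 hourly time steps, each with
# 8 features (PM2.5, AOD, and meteorological variables); target = next-hour PM2.5.
window, n_features = 24, 8
X = np.random.rand(500, window, n_features).astype("float32")
y = np.random.rand(500).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),  # local patterns
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                      # temporal dependencies
    layers.Dense(1),                      # PM2.5 concentration
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)   # short run for illustration
```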
Procedia PDF Downloads 52
29978 Prediction of Thermodynamic Properties of N-Heptane in the Critical Region
Authors: Sabrina Ladjama, Aicha Rizi, Azzedine Abbaci
Abstract:
In this work, we use the crossover model to formulate a comprehensive fundamental equation of state for the thermodynamic properties of several n-alkanes in the critical region, extending into the classical region. This equation of state is constructed on the basis of comparison with selected measurements of pressure-density-temperature data and isochoric and isobaric heat capacities. The model can be applied in a wide range of temperatures and densities around the critical point of n-heptane. It is found that the developed model represents most of the reliable experimental data accurately.
Keywords: crossover model, critical region, fundamental equation, n-heptane
Procedia PDF Downloads 480
29977 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation
Authors: Mohammad Abu-Shaira, Weishi Shi
Abstract:
Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM, the "Long Short-Term Memory Stream Cruise Control Method", a drift-adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drift within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation of model performance caused by drift. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression
Procedia PDF Downloads 21
29976 Understanding Cognitive Fatigue From FMRI Scans With Self-supervised Learning
Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie
Abstract:
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six different levels, ranging from no fatigue to extreme fatigue. We built a spatio-temporal model that uses convolutional neural networks (CNN) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public BOLD5000 dataset and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from traumatic brain injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique for analyzing cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue
Procedia PDF Downloads 194