Search results for: conventional learning method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26395

20485 The Effect of Ice in Pain Control before Digital Nerve Block

Authors: Fatemeh Rasooli, Behzad Simiari, Pooya Payandemehr, Amir Nejati, Maryam Bahreini, Atefeh Abdollahi

Abstract:

Introduction: Pain is a complex physiological reaction to tissue injury. In the course of painful procedures such as nerve block, ice has been shown to be a feasible and inexpensive means of pain control. It delays nerve conduction, activates other senses, and reduces inflammatory and painful responses. This study assessed the effect of ice in reducing the pain caused by needling and infiltration during digital block. Patient satisfaction was recorded as a secondary outcome. Methods: This study was designed as a non-blinded randomized clinical trial approved by the Tehran University of Medical Sciences Ethical Committee. Informed consent was obtained from all participants, who were then randomly divided into two groups. Digital block was performed by the standard approach in selected patients. Tubes of ice were prepared in gloves and fragmented at the time of application for circling around the finger. In the case group, the tubes were applied at the needling site for 6 minutes before digital nerve block. Patients in the control group underwent digital nerve block by the conventional method, without ice. The Numeric Rating Scale (NRS) was used to grade pain, with 0 denoting no pain and 10 the worst pain the patient had ever experienced. Scores were analyzed with the Wilcoxon Rank Sum test and compared between the case and control groups. Results: 100 patients aged 16-50 years were enrolled. Mean NRS scores with and without ice were 1.5 (SD ± 1.44) and 6.8 (SD ± 1.40) for needling pain, and 2.7 (SD ± 1.65) and 8.5 (SD ± 1.47) for infiltration pain, respectively (p<0.001). Patient satisfaction was also significantly higher in the ice group (p<0.001). Conclusion: Application of ice for 6 minutes significantly reduced the pain of needling and infiltration in digital nerve block; it thus appears to be a feasible, inexpensive, and effective means of decreasing pain and stress before the procedure.
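The Wilcoxon Rank Sum (Mann-Whitney) statistic used above compares the two groups on pooled ranks rather than raw scores. A minimal sketch, using hypothetical NRS scores rather than the study's data:

```python
def rank_sum_u(a, b):
    """Mann-Whitney U for group a: rank the pooled scores (ties get the
    average rank), sum group a's ranks, subtract the minimum possible sum."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        i = j
    r_a = sum(ranks[x] for x in a)
    return r_a - len(a) * (len(a) + 1) / 2

# hypothetical NRS scores (0-10): ice group vs. control
ice = [1, 2, 2, 3, 1]
control = [6, 7, 8, 6, 9]
u = rank_sum_u(ice, control)  # U = 0 here: every ice score ranks below control
```

A U near zero (or near its maximum, len(a)*len(b)) indicates the two groups barely overlap, which is what the reported p<0.001 reflects; the full test converts U into a p-value.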

Keywords: digital block, ice, needle, pain

Procedia PDF Downloads 222
20484 Evaluation of Three Digital Graphical Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia

Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar

Abstract:

The purpose of this work is to specify parameter values and the base flow index (BFI) and to rank the methods that should be used for base flow separation. Three different digital graphical approaches are chosen and compared in this study. Daily time series discharge data were collected from the site for a period of 30 years (1986 to 2015) and used to evaluate the algorithms. To separate the base flow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. The performance of each method was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC), and the baseflow index. The findings indicate that, in general, each approach can be used to separate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was 0.72 using the local minimum method, 0.76 using the fixed interval method, and 0.78 using the sliding interval method.
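The sliding interval method that ranks best here fits in a few lines: the baseflow on each day is taken as the minimum discharge inside a window centred on that day, and BFI is the ratio of separated baseflow volume to total flow volume. The window half-width and the series below are illustrative, not the basin's calibrated values:

```python
def sliding_interval_baseflow(q, n):
    """Baseflow on day i = minimum of q over the window [i-n, i+n],
    clipped at the ends of the record (sliding interval separation)."""
    base = []
    for i in range(len(q)):
        lo, hi = max(0, i - n), min(len(q), i + n + 1)
        base.append(min(q[lo:hi]))
    return base

def bfi(q, base):
    """Base flow index: baseflow volume over total streamflow volume."""
    return sum(base) / sum(q)

q = [1, 2, 4, 2, 1]                      # toy daily discharge (m3/s)
base = sliding_interval_baseflow(q, 1)   # -> [1, 1, 2, 1, 1]
index = bfi(q, base)                     # -> 0.6
```

In practice the half-width n is derived from the basin's drainage area, which is part of the calibration step the abstract describes.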

Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed

Procedia PDF Downloads 64
20483 On the PTC Thermistor Model with a Hyperbolic Tangent Electrical Conductivity

Authors: M. O. Durojaye, J. T. Agee

Abstract:

This paper is on the one-dimensional, positive temperature coefficient (PTC) thermistor model with a hyperbolic tangent function approximation for the electrical conductivity. The method of asymptotic expansion was adopted to obtain the steady-state solution, and the unsteady-state response was obtained using the method of lines (MOL), a well-established numerical technique. The approach is to reduce the partial differential equation to a vector system of ordinary differential equations and solve it numerically. Our analysis shows that the hyperbolic tangent approximation introduced is well suited for the electrical conductivity. The numerical solutions obtained also exhibit the correct physical characteristics of the thermistor and are in good agreement with the exact steady-state solutions.
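The method of lines replaces the spatial derivative with finite differences, leaving a system of ODEs in time that any standard integrator can advance. A toy sketch for an equation of the form u_t = u_xx + σ(u), with an illustrative hyperbolic-tangent source standing in for the paper's conductivity term (the parameters are made up, not the paper's):

```python
import math

def mol_step(u, dx, dt, sigma):
    """One explicit Euler step of u_t = u_xx + sigma(u) on a 1-D grid,
    with the two Dirichlet end values held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2  # central difference
        new[i] = u[i] + dt * (lap + sigma(u[i]))
    return new

# illustrative hyperbolic-tangent switch: source falls off above T = 1
sigma = lambda T: 0.5 * (1 - math.tanh(T - 1.0))

u = [0.0, 1.0, 0.0]
u = mol_step(u, dx=1.0, dt=0.1, sigma=sigma)  # interior point relaxes toward ends
```

A production MOL solver would hand the right-hand side to a stiff ODE integrator rather than explicit Euler, but the spatial reduction is exactly this.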

Keywords: electrical conductivity, hyperbolic tangent function, PTC thermistor, method of lines

Procedia PDF Downloads 310
20482 Developing a Machine Learning-Based Cost Prediction Model for Construction Projects Using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
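The PSO machinery the study relies on is compact: each particle's velocity is pulled toward its own best-seen position and the swarm's best, with an inertia term. A minimal sketch on a stand-in loss (a sphere function rather than the ANN's MSE); all hyperparameters here are illustrative defaults, not the study's tuned values:

```python
import random

def pso(loss, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise loss over R^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position
    pbest_f = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the study's setting, `loss` would evaluate the ANN's training error for a candidate weight (or hyperparameter) vector, which is what makes PSO attractive: it needs no gradients from the network.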

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 31
20481 A Wide View Scheme for Automobile's Black Box

Authors: Jaemyoung Lee

Abstract:

We propose a wide-view camera scheme for an automobile black box. The proposed scheme uses commercially available camera lenses whose view angles are about 120°. In the proposed scheme, we extend the view angle to approximately 200° using two cameras at the front side, instead of the three lenses used in conventional black boxes.
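The coverage arithmetic is simple: two identical cameras yawed outward by equal and opposite angles span the single-lens angle plus twice the yaw, as long as their fields still overlap in the middle. A sketch, with the yaw chosen to reproduce the figures above (the ±40° value is an assumption, not stated in the abstract):

```python
def combined_fov(single_fov_deg, yaw_deg):
    """Total horizontal coverage of two identical cameras yawed ±yaw_deg.
    Valid only while the two fields still overlap at the centre."""
    overlap = single_fov_deg - 2 * yaw_deg
    if overlap <= 0:
        raise ValueError("fields no longer overlap; a blind strip appears at centre")
    return single_fov_deg + 2 * yaw_deg

total = combined_fov(120, 40)  # two 120° lenses yawed ±40° cover 200°
```

The residual 40° central overlap is also what allows the two images to be stitched without a seam gap.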

Keywords: camera, black box, view angle, automobile

Procedia PDF Downloads 398
20480 Spray Nebulisation Drying: Alternative Method to Produce Microparticulated Proteins

Authors: Josef Drahorad, Milos Beran, Ondrej Vltavsky, Marian Urban, Martin Fronek, Jiri Sova

Abstract:

Engineering efforts of researchers at the Food Research Institute Prague and the Czech Technical University in spray drying technologies led to the introduction of a demonstrator, ATOMIZER, and a new technology of Carbon Dioxide-Assisted Spray Nebulization Drying (CASND). The equipment combines spray drying technology, in which the liquid to be dried is atomized by a rotary atomizer, with the Carbon Dioxide Assisted Nebulization - Bubble Dryer (CAN-BD) process in an original way. A solution, emulsion, or suspension is saturated with carbon dioxide at pressures up to 80 bar before the drying process. The atomization process takes place in two steps. In the first step, primary droplets are produced at the outlet of a rotary atomizer of special construction. In the second step, the primary droplets are divided into secondary droplets by CO2 expansion from inside the primary droplets. The secondary droplets, usually in the form of microbubbles, are rapidly dried by a warm air stream at temperatures up to 60 °C, and solid particles are formed in a drying chamber. Powder particles are separated from the drying air stream in a high-efficiency fine powder separator. The product is frequently in the form of submicron hollow spheres. The CASND technology has been used to produce microparticulated protein concentrates for human nutrition from alternative plant sources - hemp and canola seed filtration cakes. Alkali extraction was used to extract the proteins from the filtration cakes, and the resulting protein solutions were dried with the demonstrator ATOMIZER. Aerosol particle size distribution and concentration in the drying chamber were determined by two different on-line aerosol spectrometers: SMPS (Scanning Mobility Particle Sizer) and APS (Aerodynamic Particle Sizer). The protein powders were in the form of hollow spheres with an average particle diameter of about 600 nm, and the particles were characterized by SEM. The functional properties of the microparticulated protein concentrates were compared with the same protein concentrates dried by the conventional spray drying process. The microparticulated protein proved to have improved foaming and emulsifying properties and water and oil absorption capacities, and it formed long-term stable water dispersions. This work was supported by research grant TH03010019 of the Technology Agency of the Czech Republic.

Keywords: carbon dioxide-assisted spray nebulization drying, canola seed, hemp seed, microparticulated proteins

Procedia PDF Downloads 154
20479 Thai Student Teachers' Prior Understanding of Nature of Science (NOS)

Authors: N. Songumpai, W. Sumranwanich, S. Chatmaneerungcharoen

Abstract:

This research aims to study the understanding of 8 aspects of the nature of science (NOS). The participants were 39 General Science student teachers selected by purposive sampling. In the 2015 academic year, they enrolled in the course Science Education Learning Management. Qualitative research was used as the methodology to understand the student teachers' views on NOS. The research instruments consisted of open-ended questionnaires and semi-structured interviews used to assess the students' understanding of NOS. Data were collected with an 8-item questionnaire and categorized into complete understanding (CU), partial understanding (PU), misunderstanding (MU), and no understanding (NU). The findings reveal that the majority of student teachers hold misunderstandings of NOS regarding theory and law (89.7%), the scientific method (61.5%), and empirical evidence (15.4%), respectively. In the interview data, the student teachers presented misconceptions of NOS, indicating beliefs that theories and laws cannot change, that scientific knowledge is gained through experiment only (step by step), and that science is simply the things around humans. These results suggest that, for effective science teacher education, the design of an NOS course needs to be considered. Teachers' understanding of NOS should therefore be integrated into professional development programs and courses to empower student teachers to begin their careers as strong science teachers in schools.

Keywords: nature of science, student teacher, no understanding, misunderstanding, partial understanding, complete understanding

Procedia PDF Downloads 247
20478 Domain Adaptation Saves Lives - Drowning Detection in Swimming Pool Scenes Based on YOLOv8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation

Authors: Simiao Ren, En Wei

Abstract:

Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap (potentially arising from lighting, indoor scene changes, pool floor color, etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drownings.
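The augmentation idea can be illustrated with a naive cut-and-paste composite — not GP-GAN itself, which additionally blends the patch's gradients into the background so the composite looks photorealistic: take a labelled swimmer patch, place it on background imagery from the new pool, and reuse the patch's label for the synthetic sample. A toy sketch on nested-list "images":

```python
def paste(background, patch, top, left):
    """Naively composite a labelled foreground patch onto a new-domain
    background; returns the new image and the patch's bounding box
    (left, top, right, bottom). GP-GAN would blend gradients instead of
    overwriting pixels."""
    out = [row[:] for row in background]
    for i, row in enumerate(patch):
        for j, v in enumerate(row):
            out[top + i][left + j] = v
    box = (left, top, left + len(patch[0]), top + len(patch))
    return out, box
```

Because only the background comes from the new pool, every pasted sample arrives pre-labelled, which is exactly why the pipeline needs no annotation in the target domain.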

Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN

Procedia PDF Downloads 77
20477 Organizational Innovations of the 20th Century as High Tech of the 21st: Evidence from Patent Data

Authors: Valery Yakubovich, Shuping Wu

Abstract:

Organization theorists have long claimed that organizational innovations are nontechnological, in part because they are unpatentable. The claim rests on the assumption that organizational innovations are abstract ideas embodied in persons and contexts rather than in context-free practical tools. However, over the last three decades, organizational knowledge has been increasingly embodied in digital tools which, in principle, can be patented. To provide the first empirical evidence regarding the patentability of organizational innovations, we trained two machine learning algorithms to identify a population of 205,434 patent applications for organizational technologies (OrgTech) and, among them, 141,285 applications that use organizational innovations accumulated over the 20th century. Our event history analysis of the probability of patenting an OrgTech invention shows that ideas from organizational innovations decrease the probability of patent allowance unless they describe a practical tool. We conclude that the present-day digital transformation places organizational innovations in the realm of high tech and turns the debate about organizational technologies into the challenge of designing practical organizational tools that embody big ideas about organizing. We outline an agenda for patent-based research on OrgTech as an emerging phenomenon.

Keywords: organizational innovation, organizational technology, high tech, patents, machine learning

Procedia PDF Downloads 110
20476 Adaptation to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be mitigated; nevertheless, adaptation to already changed climate conditions has to be addressed in many sectors. The Corona crisis recently showed the decisive role of economic sectors with high value added. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage-, and cost-efficient manner within a network of global value creation can leave companies vulnerable to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time; another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented that should enable manufacturing companies to derive their own climate adaptation strategies out of a mix of different actions. The approach is designed on the basis of data from a regional study of small, medium, and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of climate adaptation efforts is evaluated. The results are used, first, to collect individual actions for manufacturing companies and, second, to identify further actions. A variety of climate adaptation activities can then be clustered according to the company's scope of activity. The combination of different actions, e.g. renewing the building envelope with regard to thermal insulation, together with their benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and characteristics of each action taken into their climate adaptation strategy. Indicators such as economics, ecology, and stakeholder satisfaction compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. Playing through climate change scenarios with targeted adaptation actions illustrates the impact of different actions, and their combination, on the fictional company.

Keywords: business strategy, climate change, climate adaptation, game-based learning

Procedia PDF Downloads 196
20475 Penetration Analysis for Composites Applicable to Military Vehicle Armors, Aircraft Engines and Nuclear Power Plant Structures

Authors: Dong Wook Lee

Abstract:

This paper describes a method for analyzing penetration for composite material using an explicit nonlinear Finite Element Analysis (FEA). This method may be used in the early stage of design for the protection of military vehicles, aircraft engines and nuclear power plant structures made of composite materials. This paper deals with simple ballistic penetration tests for composite materials and the FEA modeling method and results. The FEA was performed to interpret the ballistic field test phenomenon regarding the damage propagation in the structure subjected to local foreign object impact.

Keywords: computer aided engineering, finite element analysis, impact analysis, penetration analysis, composite material

Procedia PDF Downloads 112
20474 Digital Economy as an Alternative for Post-Pandemic Recovery in Latin America: A Literature Review

Authors: Armijos-Orellana Ana, González-Calle María, Maldonado-Matute Juan, Guerrero-Maxi Pedro

Abstract:

Nowadays, the digital economy represents a fundamental element in guaranteeing economic and social development, and its importance increased significantly with the arrival of the COVID-19 pandemic. However, despite the benefits it offers, it can also be detrimental to developing countries characterized by a wide digital divide. For this reason, the objective of this research was to identify and describe the main characteristics, benefits, and obstacles of the digital economy for Latin American countries. Through a bibliographic review covering the period 1995-2021, using the analytical-synthetic method, it was determined that the digital economy could give way to structural changes, reduce inequality, and promote processes of social inclusion, as well as foster the construction and participatory development of organizational structures and institutional capacities in Latin American countries. However, the results showed that the digital economy is still incipient in the region and that at least three factors are needed to establish it: joint work between academia, the business sector, and the State; greater emphasis on learning about and applying digital transformation; and policies that encourage the formation of digital organizations.

Keywords: developing countries, digital divide, digital economy, digital literacy, digital transformation

Procedia PDF Downloads 124
20473 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of shear waves propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the recorded relative displacement response of the 8th floor with respect to the basement) were observed to decrease by approximately 24% in the EW motion initially; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of the translational and torsional responses in the NS motion. The wave method is suitable for implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
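The core measurement can be sketched directly: the shear wave's travel time between two recording levels is the lag that best aligns their records, and the wave velocity is the inter-sensor height divided by that lag. A simplified stand-in for the IRF-based procedure, using a plain cross-correlation and made-up numbers:

```python
def travel_time(base, roof, dt):
    """Lag (in seconds) that maximises the cross-correlation of the roof
    record against the base record; a crude stand-in for IRF picking."""
    n = len(base)
    best_lag, best_c = 0, float("-inf")
    for lag in range(n):
        c = sum(base[i] * roof[i + lag] for i in range(n - lag))
        if c > best_c:
            best_lag, best_c = lag, c
    return best_lag * dt

def shear_velocity(height, tau):
    """Average shear wave velocity over a segment of height `height`."""
    return height / tau

# toy records sampled at dt = 0.01 s: the pulse reaches the roof 3 samples later
base = [1, 0, 0, 0, 0, 0]
roof = [0, 0, 0, 1, 0, 0]
tau = travel_time(base, roof, 0.01)
v = shear_velocity(30.0, tau)  # a drop in v between events flags possible damage
```

In the actual method the records are first deconvolved into impulse response functions, which removes the excitation and makes the travel-time pick robust across events of very different intensity.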

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method

Procedia PDF Downloads 356
20472 Dialogic Approaches to Writing Pedagogy

Authors: Yael Leibovitch

Abstract:

Teaching academic writing is a source of concern for secondary schools. Many students struggle to meet basic standards of literacy, while teacher confidence in this arena remains low. These issues are compounded by the conventionally prescriptive character of writing instruction, which fails to engage student writers. At the same time, a growing body of research on dialogic teaching has highlighted the powerful role of talk in student learning. With the intent of enhancing pedagogical capability, this paper shares findings from a co-inquiry case study that investigated how teachers think about and negotiate classroom discourse to position students as effective academic writers and thinkers. Using a range of qualitative methods, this project closely documents the iterative collaboration of educators as they sought to create more opportunities for dialogic engagement. More specifically, it triangulates both teacher and student data regarding the efficacy of interdependent thinking and collaborative reasoning as organizing principles for literacy learning. Findings indicate that a dialogic teaching repertoire helps develop the cognitive and metacognitive skills of adolescent writers. In addition, they underscore the importance of sustained professional collaboration to the uptake of new writing pedagogies.

Keywords: dialogic teaching, writing, teacher professional development, student literacy

Procedia PDF Downloads 203
20471 Ultra-Fast pH-Gradient Ion Exchange Chromatography for the Separation of Monoclonal Antibody Charge Variants

Authors: Robert van Ling, Alexander Schwahn, Shanhua Lin, Ken Cook, Frank Steiner, Rowan Moore, Mauro de Pra

Abstract:

Purpose: To demonstrate fast, high-resolution charge variant analysis of monoclonal antibody (mAb) therapeutics within 5 minutes. Methods: Commercially available mAbs were used for all experiments. The charge variants of the therapeutic mAbs (Bevacizumab, Cetuximab, Infliximab, and Trastuzumab) are analyzed on a strong cation exchange column with a linear pH gradient separation method. The linear gradient from pH 5.6 to pH 10.2 is generated over time by running a linear pump gradient from 100% Thermo Scientific™ CX-1 pH Gradient Buffer A (pH 5.6) to 100% CX-1 pH Gradient Buffer B (pH 10.2), using the Thermo Scientific™ Vanquish™ UHPLC system. Results: The pH gradient method is generally applicable to monoclonal antibody charge variant analysis. In conjunction with state-of-the-art column and UHPLC technology, ultra-fast, high-resolution separations are consistently achieved in under 5 minutes for all mAbs analyzed. Conclusion: The linear pH gradient method is a platform method for mAb charge variant analysis that can be easily optimized to improve separations and shorten cycle times. Ultra-fast charge variant separation is facilitated with UHPLC, which complements, and in some instances outperforms, CE approaches in terms of both resolution and throughput.
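Under the linear pump gradient described above, the programmed pH at any point in the run is a linear interpolation between the two buffer endpoints — assuming, as the CX-1 buffer pair is designed to ensure, that the A/B mixing ratio maps linearly onto pH. A small sketch of that gradient program:

```python
def gradient_ph(t, t_total, ph_a=5.6, ph_b=10.2):
    """Programmed pH at time t of a linear gradient from 100% buffer A
    (ph_a) to 100% buffer B (ph_b) over t_total; assumes the buffer
    system maps %B linearly to pH."""
    frac_b = min(max(t / t_total, 0.0), 1.0)  # fraction of buffer B delivered
    return ph_a + frac_b * (ph_b - ph_a)

# a 5-minute run: pH 5.6 at the start, 7.9 at the midpoint, 10.2 at the end
mid_ph = gradient_ph(2.5, 5.0)
```

Each charge variant elutes when the mobile-phase pH approaches its effective pI on the cation exchanger, which is why shortening `t_total` compresses the whole separation without reordering the peaks.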

Keywords: charge variants, ion exchange chromatography, monoclonal antibody, UHPLC

Procedia PDF Downloads 429
20470 Support Services in Open and Distance Education: An Integrated Model of Open Universities

Authors: Evrim Genc Kumtepe, Elif Toprak, Aylin Ozturk, Gamze Tuna, Hakan Kilinc, Irem Aydin Menderis

Abstract:

Support services are very significant elements for all educational institutions in general; however, for distance learners, these services are more essential than for their traditional (face-to-face) counterparts. One of the most important reasons for this is that learners and instructors do not share the same physical environment and that distance learning settings generally require intrapersonal interactions rather than interpersonal ones. Some learners in distance learning programs feel isolated. Furthermore, some fail to feel a sense of belonging to the institution because of a lack of self-management skills, low motivation, and the need to be socialized, and so they are more likely to fail or drop out of an online class. To overcome all these problems, support services have emerged as a critical element of an effective and sustainable distance education system. Within the context of distance education support services, it is natural to include technology-based and web-based services and the related materials. Moreover, institutions in the education sector are expected to use information and communication technologies effectively in order to be successful in their educational activities and programs. In terms of the sustainability of the system, an institution should provide distance education services through ICT-enabled processes to support all stakeholders in the system, particularly distance learners. In this study, it is envisaged to develop a model based on the current support services literature in the field of open and distance learning and on the applications of distance higher education institutions. Specifically, the content analysis technique is used to evaluate the existing literature on distance education support services, the information published on websites, and the applications of distance higher education institutions across the world. A total of 60 institutions met the inclusion criteria, which were language option (English) and availability of materials on their websites. Six field experts contributed to a brainstorming process to develop and extract codes for the coding scheme. During the coding process, these preset and emergent codes were used to conduct the analyses. Two coders independently reviewed and coded each assigned website to ensure that all coders interpreted the data the same way and to establish inter-coder reliability. Once each web page was included in the descriptive and relational analysis, a model of support services was developed by examining the generated codes and themes. It is believed that such a model will serve as a quality guide for future institutions, as well as current ones.
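Inter-coder reliability of the kind described is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance; the abstract does not name the statistic used, so this is an illustrative choice, and the labels below are hypothetical:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two coders' label sequences: (po - pe) / (1 - pe),
    where po is observed agreement and pe is chance agreement from each
    coder's marginal label frequencies. Undefined when pe == 1."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# hypothetical codes assigned by two coders to four web pages
coder1 = ["tech", "tech", "admin", "admin"]
coder2 = ["tech", "admin", "admin", "admin"]
kappa = cohens_kappa(coder1, coder2)  # 0.5: moderate chance-corrected agreement
```

Values are conventionally read against thresholds (e.g. above roughly 0.6-0.8 for acceptable agreement), after which disagreements are reconciled and the coding scheme refined.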

Keywords: support services, open education, distance learning, support model

Procedia PDF Downloads 186
20469 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering

Authors: K. Umbleja, M. Ichino

Abstract:

Symbolic data mining has been developed to analyze very large datasets. It is also useful in cases where entry-specific details should remain hidden, and it is quickly gaining popularity as the datasets in need of analysis become ever larger. One type of symbolic data is the histogram, which enables huge amounts of information to be stored in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, making the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Due to their complex structure, however, analyzing histograms is complicated. This paper proposes a method that compares two histogram-valued variables and therefore finds the dissimilarity between two histograms. The proposed method takes the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, which allows comparison of histograms with different numbers of bins and different bin widths (so-called general histograms). The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, with the concept of cluster compactness used to measure the quality of the clustering. The method is validated by application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results with general histograms, without loss of detail or the need to transform the data.
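The key difficulty — comparing histograms whose bins do not line up — can be sidestepped by comparing their cumulative distributions on a common grid, since CDFs are defined at every point regardless of binning. The following is a simplified stand-in for the paper's Ichino-Yaguchi-based measure, not the measure itself:

```python
def cdf(edges, counts):
    """Piecewise-linear CDF of a histogram given its bin edges and counts."""
    total = sum(counts)
    cum = [0.0]
    for c in counts:
        cum.append(cum[-1] + c / total)
    def f(x):
        if x <= edges[0]:
            return 0.0
        if x >= edges[-1]:
            return 1.0
        for i in range(len(counts)):
            if x < edges[i + 1]:
                w = (x - edges[i]) / (edges[i + 1] - edges[i])
                return cum[i] + w * (cum[i + 1] - cum[i])
    return f

def dissimilarity(h1, h2, grid):
    """Mean absolute CDF gap on a shared grid; bins/widths may differ."""
    f1, f2 = cdf(*h1), cdf(*h2)
    return sum(abs(f1(x) - f2(x)) for x in grid) / len(grid)

h_uniform = ([0, 1, 2], [1, 1])   # two equal bins on [0, 2]
h_skewed = ([0, 1, 2], [3, 1])    # same edges, mass shifted left
d = dissimilarity(h_uniform, h_skewed, [0.5, 1.0, 1.5])
```

Identical histograms score 0, and the score grows as mass shifts; a measure of this shape can then feed directly into a hierarchical clustering linkage, as the paper's measure does.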

Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis

Procedia PDF Downloads 151
20468 Use of Computer and Machine Learning in Facial Recognition

Authors: Neha Singh, Ananya Arora

Abstract:

Facial expression measurement plays a crucial role in the identification of emotion. Facial expression is key to research on psychophysiology, neural bases, and emotional disorders, to name a few areas. Of the various systems used to describe facial expressions, the Facial Action Coding System (FACS) has proven to be the most efficient and widely used. With FACS, coders can manually code facial expressions: by viewing video-recorded facial behaviour at a specified frame rate and in slow motion, they decompose it into action units (AUs), the smallest visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of the FACS methodology. FACS is regarded as the standard measure for facial behaviour and finds application in various fields of study beyond emotion science, including facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis of FACS, a numerical listing of the discrete facial movements identified by the system, the system's psychometric evaluation, and the recommended training requirements for the software.
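Once a frame has been decomposed into AUs, downstream emotion inference often amounts to matching the observed AU set against prototype combinations. A toy sketch of that step (the AU-to-emotion pairings follow common EMFACS-style prototypes and are illustrative; FACS itself deliberately describes movement, not meaning):

```python
# Prototype AU combinations often cited for basic emotions (illustrative).
PROTOTYPES = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser + brow lowerer
                                           # + lip corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers + upper lid raiser
                                           # + jaw drop
}

def match_prototype(observed_aus):
    """Return emotion labels whose prototype AUs all occur in the frame."""
    observed = set(observed_aus)
    return [label for proto, label in PROTOTYPES.items() if proto <= observed]

print(match_prototype([6, 12, 25]))  # a smile with lips parted → ['happiness']
```

This separation, AU coding first and interpretation second, is exactly the distinction between facial actions and inferences that the abstract highlights.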

Keywords: facial action, action units, coding, machine learning

Procedia PDF Downloads 96
20467 Electro-Hydrodynamic Analysis of Low-Pressure DC Glow Discharge by Lattice Boltzmann Method

Authors: Ji-Hyok Kim, Il-Gyong Paek, Yong-Jun Kim

Abstract:

We propose a numerical model based on drift-diffusion theory and the lattice Boltzmann method (LBM) to analyze the electro-hydrodynamic behavior of low-pressure direct current (DC) glow discharge plasmas. We apply drift-diffusion theory for four species and employ the standard lattice Boltzmann model (SLBM) for electrons, the finite difference-lattice Boltzmann model (FD-LBM) for heavy particles, and the finite difference model (FDM) for the electric potential. Our results are compared with those of other methods and emphasize the necessity of a two-dimensional analysis of glow discharge.
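The full four-species drift-diffusion model is beyond a short sketch, but the collide-and-stream cycle at the heart of any lattice Boltzmann scheme can be illustrated with a minimal one-dimensional D1Q3 solver for pure diffusion (all parameters hypothetical; the paper's 2-D multi-species model adds drift terms and coupling to the potential):

```python
import numpy as np

# D1Q3 lattice: velocities 0, +1, -1 with weights 2/3, 1/6, 1/6.
W = np.array([2/3, 1/6, 1/6])
CS2 = 1/3        # lattice sound speed squared
TAU = 0.9        # BGK relaxation time; diffusivity D = CS2 * (TAU - 0.5)

N = 64
rho = np.ones(N)
rho[N // 2] = 2.0                      # density pulse in the middle
f = W[:, None] * rho                   # start at equilibrium

for _ in range(200):
    rho = f.sum(axis=0)                # macroscopic density
    feq = W[:, None] * rho             # equilibrium for pure diffusion
    f += (feq - f) / TAU               # BGK collision step
    f[1] = np.roll(f[1], 1)            # stream right-moving population
    f[2] = np.roll(f[2], -1)           # stream left-moving population

print(f.sum())                         # total mass on the periodic domain
```

The collision step conserves mass exactly and streaming merely shifts populations, so the pulse spreads diffusively while the total stays fixed, the basic property the species-transport LBMs in the paper also rely on.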

Keywords: glow discharge, lattice Boltzmann method, numerical analysis, plasma simulation, electro-hydrodynamic

Procedia PDF Downloads 83
20466 The Climate Change and Soil Degradation in the Czech Republic

Authors: Miroslav Dumbrovsky

Abstract:

The paper deals with the impacts of climate change, with the main emphasis on land degradation and on agriculture and forestry management in the landscape. Land degradation due to the adverse effects of farming activities, a result of inappropriate conventional technologies, was a major issue in the Czech Republic during the 20th century and remains to be solved in the 21st century. The importance of land degradation is very high because of its impact on crop productivity and many other adverse effects. Land degradation through soil degradation causes losses in crop productivity and in the quality of the environment by decreasing the quality of soil and water (especially water resources). Negative effects of conventional farming practices include increased water erosion as well as crusting and compaction of the topsoil and subsoil. Soil erosion caused by water destroys the soil's structure and reduces crop productivity through deterioration of the soil's physical and chemical properties, such as infiltration rate and water-holding capacity, and through loss of the nutrients needed for crop production and loss of soil carbon. Water erosion occurs on fields with row crops (maize, sunflower), especially during the rainfall period from April to October. Recently, the greatly expanded production of biofuels and bioenergy from field crops has become a serious problem, resulting in accelerated soil degradation; the damages (on- and off-site) are greater than the benefits. Effective soil conservation requires an appropriate, complex system of measures in the landscape. It is also important to continue developing new, sophisticated methods and technologies for decreasing land degradation. Whether a system of soil conservation solves land degradation depends on the ability and the willingness of land users to apply it. Land degradation is thus not just a technical issue but also an economic and political one. From a technical point of view, many positive steps have already been taken, but successfully solving the problem of land degradation requires suitable economic and political tools to increase the willingness and ability of land users to adopt conservation measures.

Keywords: land degradation, soil erosion, soil conservation, climate change

Procedia PDF Downloads 364
20465 Understanding English Language in Career Development of Academics in Non-English Speaking HEIs: A Systematic Literature Review

Authors: Ricardo Pinto Mario Covele, Patricio V. Langa, Patrick Swanzy

Abstract:

The English language has been recognized as a universal medium of instruction in academia, especially in higher education institutions (HEIs), and hence exerts enormous influence in the context of research and publication. By extension, the English language has been embraced by scholars from non-English-speaking countries. The purpose of this review was to synthesize the discussions using four databases. Discussion of the English language in the career development of academics, particularly in non-English-speaking universities, is largely less visible. This paper seeks to fill this gap and to improve the visibility of the English language in the career development of academics, focusing on non-English-speaking universities, by undertaking a systematic literature review. More specifically, the paper addresses language policy, models of learning English as a second language, the sociolinguistic field and career development, methods, and the review's main findings. The review analyzed 75 relevant resources sourced from the Western Cape's library, Scopus, Google Scholar, and Web of Science databases from November 2020 to July 2021, using the PQRS framework as an analytical lens. The paper's findings demonstrate that, while higher education continues to face challenges around English language usage, literature targeting non-English-speaking universities remains less discussed than is often assumed. The findings also demonstrate the dominance of English language policy, for both knowledge production and the dissemination of literature, challenging emerging scholars from non-English-speaking HEIs. Hence, the paper argues for the need to reconsider the context of non-English speakers in research on the English language in the career development of academics, both as an empirical field and as emerging knowledge producers. More importantly, the study reveals two bodies of literature, (1) the instrumentalist approach to English language learning and (2) the intercultural approach to the English language for career opportunities, identified as appropriate for explaining the English language learning process and how it is perceived in relation to scholars' academic careers in HEIs.

Keywords: English language, public and private universities, language policy, career development, non-English speaking countries

Procedia PDF Downloads 135
20464 The Role of Metacognitive Strategy Intervention through Dialogic Interaction on Listeners’ Level of Cognitive Load

Authors: Ali Babajanzade, Hossein Bozorgian

Abstract:

Cognitive load plays an important role in learning in general and in L2 listening comprehension in particular. This study investigates the effect of metacognitive strategy intervention through dialogic interaction (MSIDI) on L2 listeners' cognitive load. A mixed-methods design was used with 50 male and female Iranian lower-intermediate learners between 20 and 25 years of age. An experimental group (n=25) received weekly interventions based on metacognitive strategy intervention through dialogic interaction for ten sessions. The control group (n=25) worked with the same listening samples under the regular procedure, without a metacognitive intervention program, in each session. The study used three different instruments to investigate listeners' level of cognitive load throughout the process: a) a modified version of the cognitive load questionnaire, b) digit span tests, and c) focus group interviews. Results showed not only improvements in listening comprehension in the MSIDI group but also a radical shift in cognitive load within this group. In other words, listeners in the MSIDI group experienced a lower level of cognitive load than their peers in the control group.

Keywords: cognitive load theory, human mental functioning, metacognitive theory, listening comprehension, sociocultural theory

Procedia PDF Downloads 137
20463 Thermal Resistance of Special Garments Exposed to a Radiant Heat

Authors: Jana Pichova, Lubos Hes, Vladimir Bajzik

Abstract:

Protective clothing is designed to keep the wearer safe in hazardous conditions or to enable short working operations to be performed without injury or discomfort. Firefighters and related workers are exposed to abnormal heat, which can be of conductive, convective, or radiant type. Their garments are designed to resist these conditions and prevent burn injuries or death. However, the thermal comfort of firefighters exposed to a high heat source has not been studied yet. Thermal resistance is the most representative parameter of thermal comfort. In this study, a new method for testing the thermal resistance of special clothing exposed to a high-radiation heat source was designed. The method simulates a human body wearing a single- or multi-layered garment exposed to radiant heat. The setup enables the radiant heat flow to be measured over time without the effect of convection. The new testing method is verified on a chosen group of textiles for firefighters.
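Thermal resistance of clothing is conventionally reported in m²·K/W, computed from the temperature difference across the garment and the measured heat flux density. A minimal sketch of that calculation, with entirely hypothetical readings rather than values from this study:

```python
def thermal_resistance(t_skin_c, t_outer_c, heat_flux_w_m2):
    """R_ct = dT / q, in m^2*K/W (higher = better insulation).
    t_skin_c, t_outer_c: temperatures on either side of the garment (C).
    heat_flux_w_m2: measured heat flux density through the sample (W/m^2)."""
    return (t_skin_c - t_outer_c) / heat_flux_w_m2

# Hypothetical readings for two firefighter garment samples:
r_single = thermal_resistance(35.0, 30.0, 250.0)   # single layer
r_multi = thermal_resistance(35.0, 20.0, 150.0)    # multi-layer assembly
print(f"single: {r_single:.3f} m2K/W, multi-layer: {r_multi:.3f} m2K/W")
```

Logging the flux over time, as the proposed setup does, lets the same formula be evaluated at each instant, showing how the garment's effective resistance evolves under sustained radiant exposure.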

Keywords: protective clothing, radiative heat, thermal comfort of firefighters, thermal resistance of special garments

Procedia PDF Downloads 361
20462 A Smart Contract Project: Peer-to-Peer Energy Trading with Price Forecasting in Microgrid

Authors: Şakir Bingöl, Abdullah Emre Aydemir, Abdullah Saado, Ahmet Akıl, Elif Canbaz, Feyza Nur Bulgurcu, Gizem Uzun, Günsu Bilge Dal, Muhammedcan Pirinççi

Abstract:

Smart contracts, which can be applied in many different areas, from financial applications to the Internet of Things, come to the fore with their security, low cost, and self-executing features. This paper focuses on peer-to-peer (P2P) energy trading and the implementation of a smart contract on the Ethereum blockchain. A microgrid is assumed to consist of consumers and prosumers that can produce solar and wind energy. The proposed architecture is a system in which the prosumer makes a purchase or sale request in the smart contract, with the maximum price obtained from the distribution system operator (DSO) by forecasting. The aim is to forecast the hourly maximum unit price of energy using deep learning instead of fixed pricing; the more dynamic and accurate pricing will make the system more reliable. For this purpose, Istanbul's energy generation, energy consumption, and market clearing price data were used. The consistency of the available data and the forecasting results is observed and discussed with graphs.
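The matching rule described above can be sketched off-chain in a few lines: buy and sell requests are paired greedily, and a trade clears only if the sell price is at or below both the bid and the DSO's forecast maximum unit price. All names and numbers below are hypothetical; the actual contract logic is written for the Ethereum blockchain, not Python:

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    kwh: float
    price: float   # offered (sell) or bid (buy) unit price

def match(buys, sells, dso_max_price):
    """Greedy matching: cheapest sellers serve highest bidders,
    with every trade capped by the DSO's forecast maximum price."""
    trades = []
    sells = sorted(sells, key=lambda o: o.price)
    buys = sorted(buys, key=lambda o: o.price, reverse=True)
    for buy in buys:
        for sell in sells:
            if sell.kwh <= 0 or sell.price > min(buy.price, dso_max_price):
                continue
            qty = min(buy.kwh, sell.kwh)
            if qty > 0:
                trades.append((sell.trader, buy.trader, qty, sell.price))
                buy.kwh -= qty
                sell.kwh -= qty
    return trades

buys = [Order("consumer-A", 5, 0.30), Order("consumer-B", 3, 0.20)]
sells = [Order("prosumer-X", 4, 0.15), Order("prosumer-Y", 6, 0.25)]
print(match(buys, sells, dso_max_price=0.28))
```

The deep-learning forecaster's role in this sketch is simply to supply `dso_max_price` each hour; a more accurate forecast directly tightens the cap under which trades may clear.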

Keywords: energy trading smart contract, deep learning, microgrid, forecasting, Ethereum, peer to peer

Procedia PDF Downloads 114
20461 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

In analyzing large-scale recurrent event data, there are currently many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method, which is then applied to a large real dataset of repeated heart failure hospitalizations.
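The combining step can be illustrated with an inverse-variance-weighted average of per-subset estimates. The sketch below uses the exponential-rate MLE as a simple stand-in for the paper's frailty-model estimator, since its variance has a closed form:

```python
import random

def mle_rate(times):
    """MLE of an exponential event rate and its estimated asymptotic variance."""
    rate = len(times) / sum(times)
    var = rate**2 / len(times)
    return rate, var

def combine(estimates):
    """Inverse-variance weighted combination of subset estimators."""
    weights = [1 / v for _, v in estimates]
    return sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)

random.seed(0)
data = [random.expovariate(2.0) for _ in range(100_000)]  # true rate = 2
k = 10
subsets = [data[i::k] for i in range(k)]                  # random-ish split
combined = combine([mle_rate(s) for s in subsets])
full, _ = mle_rate(data)
print(round(combined, 4), round(full, 4))  # nearly identical estimates
```

Each subset fit touches only a tenth of the data (and the fits could run in parallel), yet the combined estimator tracks the full-data MLE closely, which is the asymptotic-equivalence property the abstract states.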

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 150
20460 Thermal Elastic Stress Analysis of Steel Fiber Reinforced Aluminum Composites

Authors: Mustafa Reşit Haboğlu, Ali Kurşun , Şafak Aksoy, Halil Aykul, Numan Behlül Bektaş

Abstract:

A thermal elastic stress analysis of a steel fiber reinforced aluminum laminated composite plate is investigated. All four sides of the composite plate are clamped, and the plate is subjected to a uniform temperature load. The analysis is performed both analytically and numerically. The laminated composite is manufactured via the hot pressing method. The effects of the orientation angle are investigated using different layups: [0°/90°]s, [30°/-30°]s, [45°/-45°]s, and [60°/-60°]s. The analytical solution is obtained via classical laminated plate theory, and the numerical solution is obtained by applying the finite element method via ANSYS.
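For intuition about the stresses involved, consider the simpler limiting case of a single fully clamped isotropic layer under a uniform temperature rise, where the biaxial stress is σ = -EαΔT/(1-ν); the laminate analysis generalizes this per ply with transformed stiffnesses and ply-wise thermal expansion. A sketch with hypothetical, aluminum-like material values:

```python
def clamped_thermal_stress(e_gpa, alpha_per_k, nu, delta_t):
    """Biaxial stress (MPa) in a fully clamped isotropic plate under
    uniform temperature change: sigma = -E * alpha * dT / (1 - nu)."""
    return -e_gpa * 1e3 * alpha_per_k * delta_t / (1 - nu)

# Hypothetical aluminum-like layer heated by 100 K:
sigma = clamped_thermal_stress(e_gpa=70, alpha_per_k=23e-6, nu=0.33, delta_t=100)
print(f"{sigma:.1f} MPa")  # negative: compressive, since expansion is blocked
```

The negative sign shows why a clamped, heated plate goes into compression; in the laminate, mismatched ply orientations add interlaminar stresses on top of this baseline, which is what the orientation-angle study probes.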

Keywords: laminated composites, thermoelastic stress, finite element method

Procedia PDF Downloads 485
20459 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures like MobileNet-v1, MobileNet-v2, and MNAS. Here we present a method that significantly optimizes the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds, separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ~10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned in several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
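The core idea of per-filter thresholds can be sketched as per-channel fake quantization, where each output filter gets its own scale. This simplified numpy version derives each scale from the filter's absolute maximum rather than training it, so it shows the discretization mechanics, not the paper's trained-threshold procedure:

```python
import numpy as np

def fake_quantize_per_channel(weights, num_bits=8):
    """Quantize-dequantize each output filter with its own scale.
    weights: (out_channels, ...) array; one threshold per channel."""
    qmax = 2 ** (num_bits - 1) - 1                   # symmetric int range
    flat = weights.reshape(weights.shape[0], -1)
    scale = np.abs(flat).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(flat / scale), -qmax, qmax)  # discretize per channel
    return (q * scale).reshape(weights.shape), scale.ravel()

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))        # 4 filters of a small conv layer
w_q, scales = fake_quantize_per_channel(w)
print("max quantization error:", np.abs(w - w_q).max())
```

Because rounding happens per channel, one filter with large weights no longer inflates the quantization step of every other filter; the paper's contribution is to make those per-filter scales trainable so fine-tuning can shrink the error further.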

Keywords: distillation, machine learning, neural networks, quantization

Procedia PDF Downloads 307
20458 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection

Authors: Mahshid Arabi

Abstract:

With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but vulnerabilities and security threats have increased significantly at the same time. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model for countering threats and information misuse. The study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine-learning-based intrusion detection systems, and network security protocols. The AES and RSA encryption algorithms were used for data protection, and machine learning models such as random forests and neural networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, t-test and ANOVA statistical tests were employed, and results were measured using the accuracy, sensitivity, and specificity of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings indicate that the proposed comprehensive model reduced cyber-attacks by an average of 85%. Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on these results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as effective tools for reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and the continuous training of human resources.

Keywords: data protection, digital technologies, information security, modern management

Procedia PDF Downloads 15
20457 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all links in the traffic network of a metropolitan city. While models that learn from these data to predict future traffic speeds would benefit applications such as car navigation systems, building a predictive model for every link becomes nontrivial when the number of links in the network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. However, k-NN takes a long time to make a prediction because it must search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it suffices to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
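The speed-up idea, searching only a recent window of the database, can be sketched as follows. The feature layout, window size, and toy data are hypothetical stand-ins for the metropolitan speed database:

```python
import numpy as np

def knn_predict(history, query, k=5, window=None):
    """Predict the next speed as the mean target of the k nearest
    feature vectors, optionally searching only the most recent rows.
    history: (n, d+1) array; first d columns = features, last = target."""
    data = history if window is None else history[-window:]
    dists = np.linalg.norm(data[:, :-1] - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return data[nearest, -1].mean()

rng = np.random.default_rng(1)
# Toy database: features = current speeds of the target link and 2 neighbor
# links; target = the target link's speed five minutes later (correlated).
feats = rng.uniform(20, 80, size=(5000, 3))
target = feats.mean(axis=1) + rng.normal(0, 2, size=5000)
db = np.column_stack([feats, target])

query = np.array([60.0, 55.0, 65.0])
full_pred = knn_predict(db, query)                 # search all 5000 rows
win_pred = knn_predict(db, query, window=1000)     # search recent 1000 only
print(full_pred, win_pred)
```

The windowed call does a fifth of the distance computations; the experiments in the paper measure how far the window can shrink before prediction accuracy degrades noticeably.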

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 346
20456 A New Center of Motion in Cabling Robots

Authors: Alireza Abbasi Moshaii, Farshid Najafi

Abstract:

In this paper, a new model for creating a centre of motion is proposed. The new method uses cables, which makes it particularly useful in robotics because it is light and easy to assemble. It is well suited to robots that must remain in contact with objects. The method is described in the following, and the accuracy of the idea is verified by an experiment. The system could be used in robots that require a fixed contact point while performing a circular motion, such as dancing, medical, or repair robots.

Keywords: centre of motion, robotic cables, permanent touching, mechatronics engineering

Procedia PDF Downloads 418