Search results for: dynamic explicit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4394

3074 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long relied on word vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring only limited improvement to downstream natural language processing tasks and cannot resolve word polysemy. ELMo proposed a context-sensitive text representation method that can handle polysemy effectively. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, BERT significantly improved performance on many typical downstream tasks, greatly advancing the field of natural language processing and ushering in the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have since emerged, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history, and introduces in detail the classic pre-training techniques in natural language processing, including early static pre-training and classic dynamic pre-training; it then briefly reviews a series of follow-up pre-training techniques, including improved models based on BERT and XLNet; on this basis, it analyzes the problems faced by current pre-training research; finally, it looks ahead to the future development trends of pre-training technology.

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 60
3073 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) predicts target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time into the future to predict. We also consider the performance of recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window size and the number of time points predicted into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
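The look-back-window framing described above can be illustrated with a minimal sketch; the window length, horizon, and array names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Slice a (time, features) array into supervised samples:
    X holds `look_back` past steps, y holds the target `horizon` steps ahead."""
    X, y = [], []
    for t in range(look_back, len(series) - horizon + 1):
        X.append(series[t - look_back:t])        # past window
        y.append(series[t + horizon - 1, 0])     # target variable at the forecast step
    return np.array(X), np.array(y)

# Hypothetical hourly data: a one-day (24-step) look-back to predict 1 hour ahead.
hourly = np.random.rand(1000, 5)                 # stand-in for the air-quality series
X, y = make_windows(hourly, look_back=24, horizon=1)
print(X.shape, y.shape)                          # (976, 24, 5) (976,)
```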

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 156
3072 Social Studies Teachers’ Sustained, Collaborative Professional Development Centered Round Innovative Curriculum Materials

Authors: Cory Callahan

Abstract:

Here the author synthesizes findings and implications from two research studies that comprise a continuing line of inquiry into the potential of an innovative professional development program to help in-service teachers understand and implement a complex model of social studies instruction. The paper specifically explores the question: To what degree can a collaborative professional development program centered around innovative curriculum materials help social studies teachers understand and implement a powerful social studies approach? Findings suggest the teachers increasingly incorporated substantive thinking (i.e., second-order historical domain knowledge) into their respective practice and they facilitated students’ use of historical photographs as evidence to begin to answer a compelling question. The teachers also began to effectively support students’ abilities to make claims about the past. Implications include the foregrounding of high-quality questions during planning and the need for explicit guidance in the form of structures and procedures (i.e., scaffolds) to help teachers systematically review students’ work products. The work shared here may contribute to scholarship that posits explanations for why teacher-support is routinely ineffectual and suggests ways to provide substantive collaborative support for in-service social studies teachers.

Keywords: educative curriculum, social studies, professional development, lesson study

Procedia PDF Downloads 65
3071 A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System

Authors: Mulugeta K. Tefera, Xiaolong Yang, Jian Liu

Abstract:

Detection and tracking of moving objects are very important in many application contexts, such as detection and recognition of people, visual surveillance, and automatic generation of video effects. However, detecting the real shape of an object in motion is difficult due to challenges such as dynamic scene changes, the presence of shadows, and illumination variations caused by light switching. Once the moving object is detected, tracking is also a crucial step for applications in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive Gaussian mixture model-based analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differencing mechanisms to address partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target through enhancement and morphological post-processing operations. Secondly, region-based tracking and inter-frame differencing are applied to the extracted object to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to detect the moving objects effectively and describe the motion of the tracked object. The experiments were performed on image sequences acquired in both indoor and outdoor environments, using a stationary camera and a web camera.
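A minimal OpenCV sketch of the two stages outlined above, Gaussian-mixture background subtraction followed by inter-frame differencing and morphological post-processing; the video path, thresholds, and area cutoff are assumptions for illustration, not the authors' pipeline.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")            # hypothetical input video
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                              # adaptive GMM foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)  # morphological clean-up
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # inter-frame difference
    mask = cv2.bitwise_and(fg, motion)                 # keep pixels flagged by both cues
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                   # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev = frame
cap.release()
```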

Keywords: background modeling, Gaussian mixture model, inter-frame difference, object detection and tracking, video surveillance

Procedia PDF Downloads 477
3070 Ethicality of Algorithmic Pricing and Consumers’ Resistance

Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos

Abstract:

Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in today's society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely on massive technological advancement, algorithmic pricing systems have attracted attention and seen wide adoption, with many accompanying benefits and challenges. With the overall aim of increasing profits, algorithmic pricing is becoming a sound option for organizations, enabling suppliers to cut costs, allow better services, improve efficiency and product availability, and enhance the overall consumer experience. The adoption of algorithms in retail has been pioneered and widely studied in the literature across varied fields, including marketing, computer science, engineering, economics, and public policy. What demands more attention today, however, is a comprehensive understanding of this technology and its associated ethical influence on consumers' perceptions and behaviours. Indeed, due to algorithmic ethical concerns, consumers are in some instances reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can still achieve consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. As recent, modest research within marketing and consumer behaviour, the current study advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness. With its empirical focus, this paper aims to contribute to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, while measuring their relative effect on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both businesses' overall performance and consumers' wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumer loyalty and extend post-purchase behaviour. Thus, by defining the correct balance of pricing and the right measures, whether using dynamic or personalized pricing (or both), managers can approach consumers more ethically while taking their expectations and responses into account.

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality

Procedia PDF Downloads 92
3069 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has produced a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36, and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes onto developed (urban) land as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). This study demonstrates that, under current conditions, urban land is located in sand transit zones mobilized by winds from the northwest and southwest directions.

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 155
3068 Development of Noninvasive Method to Analyze Dynamic Changes of Matrix Stiffness and Elasticity Characteristics

Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Dobdin, Anatoly Skripal, Andrey Usanov, Dmitry Usanov

Abstract:

One of the most important unsolved problems in modern medicine is the increase of chronic diseases that lead to organ dysfunction or even complete loss of function. Current methods of treatment do not result in decreased mortality and disability statistics. At present, the best treatment for many patients is still transplantation of organs and/or tissues. Therefore, finding a way to fabricate artificial matrices correctly, given the limited number of natural organs available for transplantation, is a critical task. One important problem that needs to be solved is the development of a nondestructive and noninvasive method to analyze dynamic changes in the mechanical characteristics of a matrix with minimal side effects on the growing cells. This research focused on investigating the properties of the matrix as a marker of graft condition. In this study, collagen gel with human primary dermal fibroblasts in suspension (60, 120, and 240×10³ cells/mL) and collagen gel with cell spheroids were used as model objects. The stiffness and elasticity characteristics were evaluated by a semiconductor laser autodyne. The dependence of stiffness and elasticity on time and cell concentration was investigated. It was shown that these properties changed in a non-linear manner with respect to cell concentration. The maximum matrix stiffness was observed in the collagen gel with a cell concentration of 120×10³ cells/mL. This study demonstrated the opportunity to use the mechanical properties of the matrix as a marker of graft condition, which can be measured by the noninvasive semiconductor laser autodyne technique.

Keywords: graft, matrix, noninvasive method, regenerative medicine, semiconductor laser autodyne

Procedia PDF Downloads 344
3067 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

Oil-free turbomachinery is considered one of the critical technologies for future green power generation systems. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings, abbreviated as GFBs, are central to this technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, considerable improvement has been made to computational models for dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pressure distribution pattern in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single-pad and three-pad foil bearings to predict their working behavior so that the characteristics of the two designs can be compared. The coupled analysis model applies dynamic working characteristics to the entire gas film and the mechanical structures. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated with a finite element method program. As a result, the temperature distribution pattern can also be solved iteratively through the coupled analysis. In conclusion, the working-fluid state in the gas film of the two pad configurations at constant rotational speed can be solved and compared with experimental results.

Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic

Procedia PDF Downloads 163
3066 Nuclear Characteristics of a Heterogeneous Thorium-Based Fuel Design Aimed at Increasing Fuel Cycle Length of a Typical PWR

Authors: Hendrik Bernard Van Der Walt, Frik Van Niekerk

Abstract:

Heterogeneous thorium-based fuels have been proposed as an alternative to conventional reactor fuels, and many studies have shown promising results. Fuel cycle characteristics, however, still have to be explored in detail. This study investigates the use of a novel thorium-based fuel design aimed at increasing the fuel cycle length of a typical PWR, with an explicit focus on thorium-uranium content, neutron spectrum, flux considerations, and neutron economy. As nuclear reactions are highly dependent on reactor flux and the material matrix, analytical and numerical calculations have been completed to predict the behaviour of the proposed nuclear fuel. The proposed design utilizes various ratios of thorium oxide and uranium oxide pellets within fuel pins, divided into heterogeneous sections of specified length. This design produces multiple regions with unique characteristics, and the goal of this study is to determine and optimally utilize these characteristics. Proliferation considerations result in the need for denaturing of the heterogeneous regions, which produces further unique characteristics; these aspects were examined in this study. Finally, the use of fertile thorium to emulate a burnable poison for managing excess beginning-of-life (BOL) reactivity has been investigated, as well as an option for flux shaping in a typical PWR.

Keywords: nuclear fuel, nuclear characteristics, nuclear fuel cycle, thorium-based fuel, heterogeneous design

Procedia PDF Downloads 138
3065 Assessment of Pakistan-China Economic Corridor: An Emerging Dynamic of 21st Century

Authors: Naad-E-Ali Sulehria

Abstract:

Pakistan and China have stepped into a new phase of strengthening fraternity as the dream of an economic corridor, once envisioned by both countries, takes pragmatic shape. The Pak-China Economic Corridor, a program under construction, is seen as an emerging dynamic of the 21st century that anticipates a nexus between the Asian continent and the Indian Ocean by extending its functions to the adjoining East, South, Central, and Western Asian regions. The heavily invested megaprojects by China, worth $45.6 billion, are meant to revive the energy sector and build economic infrastructure in Pakistan. Evidently, these projects are part of the 'southern extension' of the Silk Road Economic Belt, which is set to yield prominent incentives for both countries, particularly by enabling China to acquire influential dominance over regional trade and beyond. In pursuit of these progressive plans, both countries have begun working on their respective assignments. This article discusses the economic development programs under China's peripheral diplomacy, regarding its region-specific approach to accumulate the trade of the Persian Gulf and access the landlocked Central Asian states through Pakistan in the broader context of breaking the US encirclement of Asia. Pakistan's preference to utilize its strategic channel as a trade hub to become an emerging economy and surpass its arch-rival India for strategic concerns is contemplated accordingly. The needs and feasibility of the economic gateway and the dividends it can provide in the contemporary scenario are examined carefully, and an analysis is drawn of the future prospects of the Pakistan-China Economic Corridor once completed.

Keywords: pak-china economic corridor (PCEC), central asian republic states (CARs), new silk road economic belt, gawadar

Procedia PDF Downloads 369
3064 A Finite Element Study of Laminitis in Horses

Authors: Naeim Akbari Shahkhosravi, Reza Kakavand, Helen M. S. Davies, Amin Komeili

Abstract:

Equine locomotion and performance are significantly affected by hoof health. One of the most critical diseases of the hoof is laminitis, which in severe conditions can lead to lameness. The disease involves degradation of the mechanical properties of the laminar junction tissue within the hoof. Therefore, it is essential to investigate the biomechanics of the hoof, focusing specifically on excessive and cumulatively accumulated stresses within the laminar junction tissue. To this end, the current study generated a novel equine hoof finite element (FE) model under dynamic physiological loading conditions, employing a hyperelastic material model. The tissues of the equine hoof were segmented from computed tomography scans of an equine forelimb, including the navicular bone, third phalanx, sole, frog, laminar junction, digital cushion, and the medial, dorsal, and lateral wall areas. The inner tissues were connected based on hoof anatomy, and the hoof was subjected to dynamic loading over cyclic strides at the trot. The strain distribution on the hoof wall of the model was compared with published in vivo strain measurements to validate the model. The validated model was then used to study the development of laminitis. The ultimate stress tolerated by the laminar junction before rupture was taken as a stress threshold. Tissue damage was simulated through iterative reduction of the tissue's mechanical properties in the presence of excessive maximum principal stresses. The findings of this investigation reveal how damage initiates from the medial and lateral sides of the tissue and propagates through the dorsal area of the hoof.
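The progressive-damage loop described above can be sketched schematically as follows; the stress values, threshold, and knock-down factor are hypothetical placeholders, since the actual study runs inside a finite element solver rather than a standalone script.

```python
import numpy as np

STRESS_THRESHOLD = 2.5e6   # assumed rupture stress of the laminar junction, Pa
KNOCKDOWN = 0.8            # assumed stiffness reduction applied to damaged elements

def fe_solve(stiffness):
    """Placeholder for one FE load cycle: returns max principal stress per element."""
    return np.random.rand(stiffness.size) * 3.0e6 * stiffness

stiffness = np.ones(500)                     # relative stiffness of laminar-junction elements
for cycle in range(100):                     # cyclic stride loading
    sigma1 = fe_solve(stiffness)
    damaged = sigma1 > STRESS_THRESHOLD      # elements exceeding the rupture threshold
    if not damaged.any():
        continue
    stiffness[damaged] *= KNOCKDOWN          # degrade mechanical properties iteratively
print("fraction of damaged elements:", np.mean(stiffness < 1.0))
```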

Keywords: horse hoof, laminitis, finite element model, continuous damage

Procedia PDF Downloads 184
3063 Numerical Assessment of Fire Characteristics with Bodies Engulfed in Hydrocarbon Pool Fire

Authors: Siva Kumar Bathina, Sudheer Siddapureddy

Abstract:

Fire accidents become even worse when hazardous equipment such as reactors or radioactive waste packages are engulfed in fire. In this work, large-eddy numerical fire simulations are performed using a fire dynamics simulator to predict the thermal behaviour of such bodies engulfed in hydrocarbon pool fires. A radiatively dominated 0.3 m circular burner with n-heptane as the fuel is considered. The numerical simulation results of the fire without any body inside it are validated against reported experimental data. The comparison shows good agreement for different flame properties, including the predicted mass burning rate, flame height, time-averaged centre-line temperature, time-averaged centre-line velocity, puffing frequency, the irradiance at the surroundings, and the radiative heat feedback to the pool surface. Casks of different sizes are simulated with SS304L material. The results are independent of the cask material because the adiabatic surface temperature concept is employed in this study. It is observed that the mass burning rate increases with the blockage ratio (3% ≤ B ≤ 32%). However, the rate of this increase is reduced at higher blockage ratios (B > 14%). This is because the radiative heat feedback to the fuel surface comes not only from the flame but also from the cask volume. As B increases, the volume of the cask increases, thereby increasing the radiative contribution to the fuel surface. The radiative heat feedback in the case of a cask engulfed in the fire is increased by 2.5% to 31% compared to the fire without a cask.

Keywords: adiabatic surface temperature, fire accidents, fire dynamic simulator, radiative heat feedback

Procedia PDF Downloads 129
3062 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is described as a statistical decision model of an agent trying to optimize its decisions while improving its information at the same time. Several different algorithmic models and their applications to this problem exist. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
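For orientation, the sketch below shows the regret quantity being modelled, using an ε-greedy agent on a Bernoulli bandit with an incremental value update; the arm probabilities and ε are assumptions, and this is not the regret-regression estimator itself.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.3, 0.5, 0.7])          # assumed Bernoulli arm means; arm 2 is optimal
eps, T = 0.1, 5000
counts, values = np.zeros(3), np.zeros(3)
regret = 0.0
for t in range(T):
    arm = rng.integers(3) if rng.random() < eps else int(np.argmax(values))
    reward = float(rng.random() < p[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean (Q-learning-style update)
    regret += p.max() - p[arm]                            # expected regret of this choice
print(f"cumulative regret after {T} pulls: {regret:.1f}")
```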

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 453
3061 The Power of Words: The Use of Language in Ethan Frome

Authors: Ritu Sharma

Abstract:

To be objective, critics must examine the dynamic relationships between the author, the reader, the text, and the outside world. However, it is also crucial to recognize that because language was created by God, meaning is ingrained in it. Meaning is located in and discovered through literature rather than being limited to the author, reader, text, or the outside world. The link between the author, the reader, and the text is crucial because literature unites an author and a reader through the use of language. Literature is a potent form of communication, and Ethan Frome's audience is forever changed by the book's language and the language its characters use. Ethan Frome presents the story of Ethan Frome and his wife Zeena. Ethan's story unfolds over the course of the book, revealed through the eyes of the narrator, an outsider passing through Starkfield, as well as through the insight the narrator gains from the townspeople and his stay on the Frome farm. The story is set in the rural New England community of Starkfield, Massachusetts. The weather provides the ideal setting for Ethan and the narrator to get to know one another as the narrator becomes preoccupied with unravelling the narrative behind Ethan's physical anomalies. In addition to telling a gripping tale and capturing human nature as it is, Ethan Frome uses its storyline to achieve something more significant. Edith Wharton's book affirms the power of language. Zeena's deliberate and convincing language challenges relativity and meaninglessness. Ethan and Mattie's effort to use words effectively reflects the complexity of language, and their struggle illustrates the influence that language may have if and when it is used. As a literary work, Ethan Frome defends the written word, the foundation upon which it is constructed. Communication is based on language, and as the characters respond to and become involved in conflicts throughout the book, Zeena, Ethan, and Mattie each reflect particular theories of communication that help define their uses of communication within the broader context of language.

Keywords: dynamic relationships, potent, communication, complexity

Procedia PDF Downloads 91
3060 The Dynamics of a 3D Vibrating and Rotating Disc Gyroscope

Authors: Getachew T. Sedebo, Stephan V. Joubert, Michael Y. Shatalov

Abstract:

The conventional configuration of the vibratory disc gyroscope is based on in-plane non-axisymmetric vibrations of the disc with a prescribed circumferential wave number. Due to Bryan's effect, the vibrating pattern of the disc becomes sensitive to the axial component of inertial rotation of the disc. Rotation of the vibrating pattern relative to the disc is proportional to the inertial angular rate and is measured by sensors. In the present paper, the authors investigate the possibility of making a 3D sensor on the basis of both in-plane and bending vibrations of the disc resonator. We derive equations of motion for the disc vibratory gyroscope in which both in-plane and bending vibrations are considered. The Hamiltonian variational principle is used in setting up the equations of motion and the corresponding boundary conditions. The theory of thin shells with linear elasticity principles is used in formulating the problem, and the disc is assumed to be isotropic and to obey Hooke's law. The governing equation for a specific mode is converted to an ODE to determine the eigenfunction. The resulting ODE has an exact solution as a linear combination of Bessel and Neumann functions. We demonstrate how to obtain an explicit solution, and hence the eigenvalues and corresponding eigenfunctions, for an annular disc with a fixed inner boundary and a free outer boundary. Finally, the characteristic equations are obtained and the corresponding eigenvalues are calculated. The eigenvalues are used for the calculation of the tuning conditions of the 3D disc vibratory gyroscope.
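The root-finding step for the characteristic equation can be illustrated with a deliberately simplified analogue: the sketch below solves the fixed-inner/free-outer annular membrane frequency equation built from Bessel and Neumann functions. It is a stand-in for the full shell formulation in the paper, and the radii and circumferential wave number are assumed values.

```python
import numpy as np
from scipy.special import jv, yv, jvp, yvp
from scipy.optimize import brentq

a, b, n = 0.01, 0.05, 2        # assumed inner/outer radii (m) and circumferential wave number

def char_eq(k):
    # W(r) = A*Jn(kr) + B*Yn(kr); W(a) = 0 (fixed inner), dW/dr(b) = 0 (free outer)
    return jv(n, k * a) * yvp(n, k * b) - yv(n, k * a) * jvp(n, k * b)

ks = np.linspace(1.0, 400.0, 4000)
vals = char_eq(ks)
roots = [brentq(char_eq, ks[i], ks[i + 1])          # bracket sign changes, then refine each root
         for i in range(len(ks) - 1) if vals[i] * vals[i + 1] < 0]
print("first eigenvalues k:", [round(r, 2) for r in roots[:4]])
```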

Keywords: Bryan’s effect, bending vibrations, disc gyroscope, eigenfunctions, eigenvalues, tuning conditions

Procedia PDF Downloads 326
3059 Understanding Seismic Behavior of Masonry Buildings in Earthquake

Authors: Alireza Mirzaee, Soosan Abdollahi, Mohammad Abdollahi

Abstract:

Unreinforced masonry (URM) walls are vulnerable when resisting horizontal loads such as wind and seismic loading. This is due to the low tensile strength of masonry, in particular of the mortar connection between the brick units. URM structures are still widely used around the world as infill walls and are commonly constructed with door and window openings. This research aimed to investigate the behaviour of a URM wall with openings under horizontal load and to develop the load-drift relationship of the wall. The finite element (FE) method was chosen to numerically simulate the behaviour of URM with openings. ABAQUS, commercially available FE software with an explicit solver, was employed. To ensure that the numerical model accurately represents the behaviour of a URM wall, the model was validated for a URM wall without openings using available experimental results. The load-displacement relationship of the numerical model agrees well with the experimental results, and the same load-displacement curve shape is obtained from the FE model. After validating the model, a parametric study was conducted on URM walls with openings to investigate the influence of the opening area and of the pre-compressive load on the horizontal load capacity of the wall. The results showed that increasing the opening area decreases the capacity of the wall to resist horizontal loading. It was also observed that the capacity of the wall increases with increasing pre-compressive load applied on top of the wall.

Keywords: masonry constructions, performance at earthquake, MSJC-08 (ASD), bearing wall, tie-column

Procedia PDF Downloads 252
3058 Proactive Competence Management for Employees: A Bottom-up Process Model for Developing Target Competence Profiles Based on the Employee's Tasks

Authors: Maximilian Cedzich, Ingo Dietz Von Bayer, Roland Jochem

Abstract:

For industrial companies to continue to succeed in dynamic, globalized markets, they must be able to train their employees in an agile manner and at short notice, in line with exogenous conditions as they arise. For this purpose, it is indispensable to operate a proactive competence management system for employees that recognizes qualification needs in a timely manner so that they can be addressed promptly through qualification measures. However, hardly any approaches that include systematic, proactive competence management can be found in the literature. To help close this gap, this publication presents a process model that systematically develops bottom-up, future-oriented target competence profiles based on the tasks of the employees. Concretely, in the first step, the tasks of the individual employees are examined under assumed future conditions; in other words, qualitative scenarios are considered for the individual tasks to determine how they are likely to change. In a second step, these scenario-based future tasks are translated into individual future-related target competencies of the employee using a matrix of generic task properties. The final step validates the target competence profiles formed in this way within the framework of a management workshop. This process model provides industrial companies with a tool they can use to determine the competencies their employees will require in the future and to compare them with the competencies actually prevailing. If gaps are identified between the target and the actual profiles, these qualification requirements can be closed in the short term by means of qualification measures.

Keywords: dynamic globalized markets, employee competence management, industrial companies, knowledge management

Procedia PDF Downloads 189
3057 Experimental Research of Smoke Impact on the Performance of Cylindrical Eight Channel Cyclone

Authors: Pranas Baltrėnas, Dainius Paliulis

Abstract:

Cyclones are widely used for separating particles from gas in energy production facilities. The aim of this paper is to analyze the impact of gaseous smoke components on an eight-channel cyclone with a tangential inlet. The efficiency of conventional centrifugal air cleaning devices ranges from 85 to 90%, but a weakness of many cyclones is their low collection efficiency for particles less than 10 μm in diameter. Many factors affect cyclone efficiency, including humidity, temperature, gas (air) composition, and airflow velocity. Many researchers have evaluated only the effect of the origin and size of particulate matter (PM) on cyclone efficiency; the effect of gas (air) composition and temperature on cyclone efficiency still demands further study. Complex experimental research was carried out on the efficiency of a cylindrical eight-channel system with adjustable half-rings for removing fine dispersive particles (< 20 μm). The impact of gaseous smoke components on the removal of wood ash was analyzed. Gaseous components present in the smoke mixture with a dynamic viscosity lower than that of air at the same temperature decrease the d50 value and simultaneously increase the overall particulate matter removal efficiency in the cyclone; this effect is attributed to CO2 and CO, while O2 and NO have the opposite effect. Air temperature influences the d50 value: an increase in air temperature yields an increase in the d50 value, i.e., the overall particulate matter removal efficiency declines, the reason being the increasing dynamic viscosity of the air. At a temperature of 120 °C, the d50 value is approximately 11.8% higher than at an air temperature of 20 °C. With an increase in smoke (gas) temperature from 20 °C to 50 °C, the aerodynamic resistance drops from 1605 to 1380 Pa in a 1-tier eight-channel cylindrical cyclone, from 1660 to 1420 Pa in a 2-tier cyclone, and from 1715 to 1450 Pa in a 3-tier cyclone. The reason for the decline in aerodynamic resistance is the declining gas density.

Keywords: cyclone, adjustable half-rings, particulate matter, efficiency, gaseous compounds, smoke

Procedia PDF Downloads 290
3056 Dialectics of Modern Law: Perspectives and Strategies of Resistance from the Margins

Authors: Nisar Alungal Chungath

Abstract:

"No human being is illegal" has become a dictum strongly upheld in the context of global immigration and migration, highlighting the ethical and moral dimensions of how societies and governments treat individuals and communities who have crossed political borders or are living in a country without legal authorization. It seeks to shift the focus from categorizing human beings as illegal immigrants to recognizing their inherent human rights and the complexities of their circumstances. As a complex social phenomenon, law has been a crucial instrument in shaping, regulating, and governing human societies, and vice versa. Law has now become a vast political project of modern majoritarian regimes to democratically delegitimize and illegalize unpopular sections and minorities. Drawing on the theoretical frameworks of dialectics, the paper explores the philosophical underpinnings of the historical evolution and dynamic nature of modern law. The paper employs a phenomenological approach to analyze the dialectical relations between individuals, societies, and legal systems, aiming to shed light on the ethical and political implications of these interactions. By examining the historical essence of law, its relationship with social and cultural norms, and the role of power dynamics, this article argues for constantly maintaining the dialectics of law, that is, the dynamic interplay between legal norms, social practices, cultural values, and historical contexts, through a philosophical and phenomenological lens, in order to bridge the gap between universal principles and particular contexts. The paper sheds light on the dialectics of law in the context of legal persecutions within modern secular democracies, such as India's Citizenship Amendment Act, 2019.

Keywords: phenomenology, dialectic, modern law, politics, resistance, margins

Procedia PDF Downloads 56
3055 On the Monitoring of Structures and Soils by Tromograph

Authors: Magarò Floriana, Zinno Raffaele

Abstract:

Since 2009, with the coming into force of the Ministerial Decree of January 14, 2008, "New technical standards for construction", and the explanatory ministerial circular No. 617 of February 2, 2009, the question of seismic hazard and the design of seismic-resistant structures in Italy has acquired increasing importance. One of the most discussed aspects in recent Italian and international scientific literature concerns the dynamic interaction between soil and structure, and the effects that dynamic coupling may have on individual buildings. Indeed, it is well known from systems dynamics that resonance can have catastrophic effects on an excited system, leading to a response that is not compatible with the predictions made in the design phase. The method used in this study to estimate the oscillation frequency of the structure is the analysis of HVSR (Horizontal to Vertical Spectral Ratio) relations, which allows a very simple evaluation of the oscillation frequencies of both soil and structures. The tool used for data acquisition is an experimental digital tromograph. This is an engineered development of the experimental Languamply RE 4500 tromograph, equipped with an engineered amplification circuit and improved electronically using extremely small electronic components (each individual amplifier measures 16 x 26 mm). The tromograph is a modular system, completely "free" and "open", designed to interface Windows, Linux, OSX, and Android with the outside world. It is an amplifier designed to carry out microtremor measurements, but it will also be useful for seismological and seismic measurements in general. The development of single amplifiers of small dimensions yields a very clean signal, since positioning the amplifier a few centimetres from the geophone eliminates cable "antenna" phenomena, a necessary characteristic for obtaining clean signals at the very low voltages to be measured.
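The HVSR computation itself reduces to a ratio of smoothed amplitude spectra; the sketch below shows that step on hypothetical three-component microtremor records. The sampling rate, record length, smoothing window, and frequency band are assumptions, not the instrument's processing chain.

```python
import numpy as np

fs = 128.0                                  # assumed sampling rate, Hz
t = np.arange(0, 600, 1 / fs)               # 10-minute hypothetical record
ns, ew, ud = (np.random.randn(t.size) for _ in range(3))   # stand-in N-S, E-W, vertical traces

def amp_spectrum(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    return np.convolve(spec, np.ones(32) / 32, mode="same")  # crude spectral smoothing

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
h = np.sqrt(amp_spectrum(ns) * amp_spectrum(ew))             # combined horizontal spectrum
hvsr = h / amp_spectrum(ud)                                  # horizontal-to-vertical ratio
band = (freqs > 0.5) & (freqs < 20)                          # frequency band of interest
print("HVSR peak frequency ≈", freqs[band][np.argmax(hvsr[band])], "Hz")
```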

Keywords: microtremor, HVSR, tromograph, structural engineering

Procedia PDF Downloads 409
3054 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example

Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang

Abstract:

Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these "big" data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge of bio-computing to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system. C-eXpress takes a simple text file containing standard NCBI gene or protein IDs and expression levels (RPKM or fold change) as input and generates a distribution map of gene/protein expression levels in a heatmap diagram organized by colour gradients. The diagram is hyper-linked to a dynamic HTML table that allows users to filter the dataset based on various gene features; a dynamic summary chart is generated automatically after each filtering step. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase, and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
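A minimal sketch of the heatmap-style rendering described above, using a small hypothetical expression table; the gene IDs, sample names, and values are placeholders rather than the NCI-60 data or the platform's actual rendering code.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: NCBI IDs with expression levels (e.g., RPKM) across samples.
expr = pd.DataFrame(
    {"sample_A": [5.2, 0.4, 9.1], "sample_B": [4.8, 1.1, 7.6], "sample_C": [0.2, 6.3, 8.0]},
    index=["GeneID_1", "GeneID_2", "GeneID_3"],
)

fig, ax = plt.subplots()
im = ax.imshow(expr.values, cmap="viridis", aspect="auto")   # colour gradient of expression levels
ax.set_xticks(range(expr.shape[1]))
ax.set_xticklabels(expr.columns)
ax.set_yticks(range(expr.shape[0]))
ax.set_yticklabels(expr.index)
fig.colorbar(im, label="expression level")
plt.tight_layout()
plt.savefig("expression_heatmap.png")
```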

Keywords: cancer, visualization, database, functional annotation

Procedia PDF Downloads 619
3053 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method

Authors: K. Meera Saheb, K. Krishna Bhaskar

Abstract:

Complex structures used in many fields of engineering are made up of simple structural elements such as beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to a severe dynamic environment they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia have to be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present coupled displacement field method, the number of undetermined coefficients is reduced to half compared with the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear-flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large-amplitude free vibrations of uniform shear-flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear-flexible beams with a central concentrated mass were developed, and the results are presented in digital form.

Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates

Procedia PDF Downloads 226
3052 Real-Time Course Recommendation System for Online Learning Platforms

Authors: Benabbess Anja

Abstract:

This research presents the design and implementation of a real-time course recommendation system for online learning platforms, leveraging user competencies and expertise levels. The system begins by extracting and classifying the complexity levels of courses from Udemy datasets using semantic enrichment techniques and resources such as WordNet and BERT. A predictive model assigns complexity levels to each course, adding columns that represent the course category, sub-category, and complexity level to the existing dataset. Simultaneously, user profiles are constructed through questionnaires capturing their skills, sub-skills, and proficiency levels. The recommendation process involves generating embeddings with BERT, followed by calculating cosine similarity between user profiles and courses. Courses are ranked based on their relevance, with the BERT model delivering the most accurate results. To enable real-time recommendations, Apache Kafka is integrated to track user interactions (clicks, comments, time spent, completed courses, feedback) and update user profiles. The embeddings are regenerated, and similarities with courses are recalculated to reflect users' evolving needs and behaviors, incorporating a progressive weighting of interactions for more personalized suggestions. This approach ensures dynamic and real-time course recommendations tailored to user progress and engagement, providing a more personalized and effective learning experience. This system aims to improve user engagement and optimize learning paths by offering courses that precisely match users' needs and current skill levels.
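The ranking step reduces to cosine similarity between a user-profile embedding and course embeddings; the sketch below assumes the embedding vectors have already been produced (e.g., by a BERT encoder) and uses made-up arrays with an arbitrary dimension of 384.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical pre-computed embeddings (stand-ins for BERT-encoded profiles and courses).
user_profile = np.random.rand(384)
courses = {f"course_{i}": np.random.rand(384) for i in range(5)}

# Rank courses by similarity to the user profile and recommend the top matches.
ranked = sorted(courses, key=lambda c: cosine(user_profile, courses[c]), reverse=True)
print("top recommendations:", ranked[:3])
```

In a streaming setting as described above, the `user_profile` vector would simply be regenerated after each tracked interaction and the ranking recomputed.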

Keywords: recommendation system, online learning, real-time, user skills, expertise level, personalized recommendations, dynamic suggestions

Procedia PDF Downloads 10
3051 High-Efficiency Comparator for Low-Power Application

Authors: M. Yousefi, N. Nasirzadeh

Abstract:

In this paper, a dynamic comparator structure employing two methods for power consumption reduction, with applications in low-power high-speed analog-to-digital converters, is presented. The proposed comparator has low consumption thanks to the power reduction methods and provides the ability for offset adjustment. The comparator consumes 14.3 μW at 100 MHz, which is equal to 11.8 fJ. The comparator has been designed and simulated in 180 nm CMOS, and the layout occupies 210 μm².

Keywords: efficiency, comparator, low power

Procedia PDF Downloads 358
3050 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya formed the first significant compilation in this field; that work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. They were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modelling). Some inequalities involving Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality are used.
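For orientation, the classical one-dimensional Hardy integral inequality that the time-scale versions generalize is the standard statement below (reproduced for reference, not from the paper itself):

```latex
\int_{0}^{\infty} \left( \frac{1}{x}\int_{0}^{x} f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_{0}^{\infty} f^{p}(x)\,dx,
\qquad p > 1,\quad f \ge 0,
```

where the constant \((p/(p-1))^{p}\) is known to be sharp.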

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 97
3049 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs

Authors: K. N. Dinesh Babu, P. K. Gargava

Abstract:

Energization of a transformer results in a sudden flow of current, an effect of core magnetization. This current is dominated by the second harmonic, which in turn is used to segregate fault current from inrush current, thus guaranteeing proper operation of the relay. This additional security in the relay sometimes obstructs or delays differential protection in a specific scenario: when second-harmonic content is present during a genuine fault. Such a scenario can result in isolation of the transformer by the Buchholz and pressure release valve (PRV) protections, which act only once the fault has caused more damage in the transformer. Such delays have a huge impact on insulation failure, and the chances of repairing or rectifying the problem at site become very slim. Sometimes this delay can cause a fire in the transformer, a havoc situation for a substation. Such occurrences have also been observed in the field, where differential relay operation was delayed by 10-15 ms by second-harmonic blocking under some specific conditions. These incidents have led to the need for an alternative solution to eradicate such unwarranted delays in operation in the future. Modern numerical relays, called intelligent electronic devices (IEDs), are embedded with advanced protection features that permit higher flexibility and better provisions for tuning of protection logic and settings. Such flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second-harmonic feature for blocking the differential protection with additional security. The analysis and precautionary measures carried out in this case have been simulated and discussed in this paper to ensure that similar solutions can be adopted to inhibit analogous issues in the future.
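A schematic sketch of the ratio-based blocking decision discussed above; the thresholds, per-unit scaling, and the cross-check that releases blocking on heavy faults are illustrative assumptions only and do not represent any particular IED's implementation.

```python
import numpy as np

F_NOMINAL = 50.0                       # assumed system frequency, Hz
HARMONIC_BLOCK = 0.15                  # assumed 2nd-harmonic blocking ratio setting
RESTRAINT_RELEASE = 5.0                # assumed multiple of pickup above which blocking is bypassed

def harmonic_ratio(i_diff, fs):
    """Ratio of 2nd-harmonic to fundamental magnitude in one window of differential current."""
    spec = np.abs(np.fft.rfft(i_diff)) * 2 / i_diff.size
    freqs = np.fft.rfftfreq(i_diff.size, d=1 / fs)
    fund = spec[np.argmin(np.abs(freqs - F_NOMINAL))]
    second = spec[np.argmin(np.abs(freqs - 2 * F_NOMINAL))]
    return second / max(fund, 1e-9), fund

def trip_decision(i_diff, fs, pickup):
    ratio, fund = harmonic_ratio(i_diff, fs)
    inrush_suspected = ratio > HARMONIC_BLOCK
    heavy_fault = fund > RESTRAINT_RELEASE * pickup      # dynamic switch: ignore blocking on heavy faults
    return fund > pickup and (not inrush_suspected or heavy_fault)

# Quick check with one cycle of synthetic inrush-like current (values are illustrative).
fs = 1000.0
t = np.arange(0, 0.02, 1 / fs)
i_inrush = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 100 * t)
print(trip_decision(i_inrush, fs, pickup=0.3))           # False: blocked as suspected inrush
```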

Keywords: differential protection, intelligent electronic device (IED), 2nd harmonic inhibit, inrush inhibit

Procedia PDF Downloads 300
3048 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid

Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang

Abstract:

Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem whose objectives are the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. Data from 345 kV transmission lines and hourly weather data, from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors can be reduced using the proposed optimal placement method while an acceptable error in conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
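A compact sketch of the PSO step on a toy objective; the objective function, swarm size, and coefficients are assumptions standing in for the POD-based conductor-temperature-error objective used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Stand-in for the temperature-reconstruction error given candidate sensor positions x.
    return np.sum((x - 3.0) ** 2, axis=1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration coefficients (assumed)
pos = rng.uniform(-10, 10, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]           # global best guides the swarm

print("best positions found:", np.round(gbest, 3))
```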

Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid

Procedia PDF Downloads 433
3047 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: Moschos Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprising various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We conclude that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices into the classification process.
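The pixel-based classification step can be sketched as follows; the band values, class labels, and split are synthetic placeholders rather than the Landsat 8 / Sentinel-2 training samples used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training pixels: rows are pixels, columns are spectral band values.
rng = np.random.default_rng(7)
X = rng.random((5000, 10))                         # e.g., 10 surface-reflectance bands
y = rng.integers(0, 10, 5000)                      # 10 land use/land cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)                                 # train on labelled reference pixels
print("overall accuracy:", round(accuracy_score(y_te, rf.predict(X_te)), 4))
```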

Keywords: classification, land use/land cover, mapping, random forest

Procedia PDF Downloads 127
3046 Thermal Image Segmentation Method for Stratification of Freezing Temperatures

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses an image analysis technique employing thermal imaging to measure the percentage of areas with various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary rapidly as the surface reaches the freezing point and completes the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific freezing area, nucleation starts from one side and ends at another, which causes a dynamic and transient temperature in that area. Thermal cameras are able to reveal differences in temperature due to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours with the same temperature on the icing surface. Each segment is obtained using the temperature range appearing in the image and the corresponding pixel values. Using the contours extracted from the images and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface using the thermal camera, the salt water sample is dropped onto a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
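The area-percentage computation reduces to counting pixels inside each temperature band; the sketch below assumes a calibrated temperature frame is already available as an array, and the band limits are illustrative.

```python
import numpy as np

# Hypothetical calibrated thermal frame (°C), e.g., exported from the camera software.
frame = np.random.uniform(-22, 5, (480, 640))

bands = [(-25, -15), (-15, -5), (-5, 0), (0, 5)]     # assumed temperature bands, °C
total = frame.size
for lo, hi in bands:
    mask = (frame >= lo) & (frame < hi)              # segment pixels within this band
    print(f"{lo:>4} to {hi:>3} °C: {100 * mask.sum() / total:5.1f} % of the surface")
```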

Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image

Procedia PDF Downloads 322
3045 Molecular Dynamic Simulation of Cold Spray Process

Authors: Aneesh Joshi, Sagil James

Abstract:

The Cold Spray (CS) process is the deposition of solid particles onto a substrate at impact velocities above a certain critical value. Unlike thermal spray processes, the CS process does not melt the particles, so they retain their original physical and chemical properties. These characteristics make the CS process ideal for various engineering applications involving metals, polymers, ceramics, and composites. The bonding mechanism involved in the CS process is extremely complex considering the dynamic nature of the process. Although the CS process offers great promise for several engineering applications, the realization of its full potential is limited by a lack of understanding of the complex mechanisms involved and of the effect of critical process parameters on deposition efficiency. The goal of this research is to understand the complex nanoscale mechanisms involved in the CS process. The study uses the Molecular Dynamics (MD) simulation technique to understand the material deposition phenomenon during the CS process. The impact of a single-crystalline copper nanoparticle on a copper substrate is modelled under varying process conditions. The quantitative results of impacts at different velocities, impact angles, and particle sizes are evaluated using the flattening ratio, the von Mises stress distribution, and the local shear strain. The study finds that the flattening ratio, and hence the quality of deposition, was highest for an impact velocity of 700 m/s, a particle size of 20 Å, and an impact angle of 90°. The stress and strain analysis revealed regions of shear instability at the periphery of the impact and also revealed plastic deformation of the particles after the impact. The results of this study can be used to augment existing knowledge in the field of CS processes.
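The flattening-ratio metric mentioned above can be computed directly from post-impact atomic coordinates; the sketch below uses a made-up coordinate array and an assumed initial diameter in place of the MD trajectory.

```python
import numpy as np

# Hypothetical post-impact coordinates (x, y, z in Å) of the particle's atoms.
coords = np.random.randn(2000, 3) * np.array([15.0, 15.0, 4.0])

d_initial = 20.0                                     # original particle diameter, Å (assumed)
spread = coords[:, :2].max(axis=0) - coords[:, :2].min(axis=0)
d_final = spread.max()                               # largest lateral extent after impact
height = coords[:, 2].max() - coords[:, 2].min()

flattening_ratio = d_final / d_initial               # > 1 indicates lateral spreading
print(f"flattening ratio: {flattening_ratio:.2f}, residual height: {height:.1f} Å")
```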

Keywords: cold spray process, molecular dynamics simulation, nanoparticles, particle impact

Procedia PDF Downloads 369