2374 Preliminary Geophysical Assessment of Soil Contaminants around Wacot Rice Factory Argungu, North-Western Nigeria
Authors: A. I. Augie, Y. Alhassan, U. Z. Magawata
Abstract:
Geophysical investigation was carried out at the Wacot rice factory, Argungu, north-western Nigeria, using the 2D electrical resistivity method. The area falls between latitudes 12°44′23″N and 12°44′50″N and longitudes 4°32′18″E and 4°32′39″E, covering a total area of about 1.85 km². Two profiles were surveyed with the Wenner configuration using a resistivity meter (Ohmega). The data obtained from the study area were modeled using RES2DINV software, which gave an automatic interpretation of the apparent resistivity data. The inverse resistivity models of the profiles show high resistivity values ranging from 208 Ωm to 651 Ωm. These high resistivity values in the overburden were due to dryness and compactness of the strata leading to consolidation, which is an indication that those parts of the area are free from leachate contamination. However, the inverse models also show regions of low resistivity values (1 Ωm to 18 Ωm); these zones were identified as clayey and as the most contaminated. Because clay and leachate have similar resistivity values, the low-resistivity regions indicate the leachate plume or highly leachate-concentrated zones. The leachate migrates mainly from the factory into the surrounding area and its groundwater. The maximum leachate infiltration was found at depths of 1 m to 15.9 m (P1) and 6 m to 15.9 m (P2) vertically, as well as at distances along the profiles from 67 m to 75 m (P1), 155 m to 180 m (P1), and 115 m to 192 m (P2) laterally.
Keywords: contaminant, leachate, soil, groundwater, electrical, resistivity
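Since the survey uses a Wenner configuration, converting a measured resistance to apparent resistivity follows the standard Wenner geometric factor k = 2πa. The abstract does not state the electrode spacings used, so the numbers below are illustrative only; a minimal sketch in Python:

```python
import math

def wenner_apparent_resistivity(spacing_m, resistance_ohm):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode
    spacing a (m) and measured resistance R (ohm): rho_a = 2*pi*a*R."""
    return 2 * math.pi * spacing_m * resistance_ohm

# Illustrative values: 10 m spacing, 5 ohm measured resistance
print(wenner_apparent_resistivity(10.0, 5.0))  # ~314.16 ohm-m
```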
Procedia PDF Downloads 161
2373 Seismic Performance of Various Grades of Steel Columns through Finite Element Analysis
Authors: Asal Pournaghshband, Roham Maher
Abstract:
This study presents a numerical analysis of the cyclic behavior of H-shaped steel columns, focusing on different steel grades, including austenitic, ferritic, and duplex stainless steel, and carbon steel. Finite element (FE) models were developed and validated against experimental data, agreeing with the experiments to within 6.5%. The study examined key parameters such as energy dissipation and failure modes. Results indicate that duplex stainless steel offers the highest strength, with superior energy dissipation but a tendency toward brittle failure at a maximum strain of 0.149. Austenitic stainless steel demonstrated balanced performance with excellent ductility and energy dissipation, showing a maximum strain of 0.122, making it highly suitable for seismic applications. Ferritic stainless steel, while stronger than carbon steel, exhibited reduced ductility and energy absorption. Carbon steel displayed the lowest performance in terms of energy dissipation and ductility, with significant strain concentrations leading to earlier failure. These findings provide critical insights into optimizing material selection for earthquake-resistant structures, balancing strength, ductility, and energy dissipation under seismic conditions.
Keywords: energy dissipation, finite element analysis, H-shaped columns, seismic performance, stainless steel grades
Procedia PDF Downloads 30
2372 DesignChain: Automated Design of Products Featuring a Large Number of Variants
Authors: Lars Rödel, Jonas Krebs, Gregor Müller
Abstract:
The growing price pressure due to the increasing number of global suppliers, the growing individualization of products, and ever-shorter delivery times are upcoming challenges in industry. In this context, mass personalization stands for the individualized production of customer products in batch size 1 at the price of standardized products. The digitalization and automation of technical order processing give companies the opportunity to significantly reduce their cost of complexity and lead times and thus enhance their competitiveness. Many companies already use a range of CAx tools and configuration solutions today. Often, the expert knowledge of employees is hidden in "knowledge silos" and is rarely networked across processes. DesignChain describes the automated digital process from the recording of individual customer requirements, through design and technical preparation, to production. Configurators offer the possibility of mapping variant-rich products within the DesignChain. This transformation of customer requirements into product features makes it possible to generate even complex CAD models, such as those for large-scale plants, on a rule basis. With the aid of an automated CAx chain, production-relevant documents are then transferred digitally to production. This fully automatable process allows variants to always be generated from the current version status.
Keywords: automation, design, CAD, CAx
Procedia PDF Downloads 77
2371 Knowledge Diffusion via Automated Organizational Cartography: Autocart
Authors: Mounir Kehal, Adel Al Araifi
Abstract:
The post-globalisation epoch has placed businesses everywhere in new and different competitive situations, where knowledgeable, effective, and efficient behaviour provides the competitive and comparative edge. Enterprises have turned to explicit, and even to conceptualising tacit, knowledge management to elaborate a systematic approach to developing and sustaining the intellectual capital needed to succeed. To do that, an organization must be able to visualize itself as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially giving insightful schemes and dynamics by which organizational know-how is visualised. We discuss and elaborate on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views, from mechanistic (systematic, data-driven) to more socially (psychologically, cognitively/metadata-driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilizing available organizational intellectual resources (i.e., Autocart). In this paper, we likewise present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
Procedia PDF Downloads 418
2370 Traffic Analysis and Prediction Using Closed-Circuit Television Systems
Authors: Aragorn Joaquin Pineda Dela Cruz
Abstract:
Road traffic congestion is continually deteriorating in Hong Kong. The largest contributing factor is the increase in vehicle fleet size, resulting in higher competition over the utilisation of road space. This study proposes a project that can process closed-circuit television images and videos to provide real-time traffic detection and prediction capabilities. Specifically, a deep-learning model applies computer vision techniques to video- and image-based vehicle counting, and a separate model then detects and predicts traffic congestion levels based on those data. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. The resulting counts are then used to train long short-term memory networks to predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions relative to current solutions such as navigation apps.
Keywords: intelligent transportation system, vehicle detection, traffic analysis, deep learning, machine learning, computer vision, traffic prediction
Procedia PDF Downloads 104
2369 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating that predictions on the 1-100 efficiency-class scale (G-A) may deviate by 4.12 points on average. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features such as estimated primary energy consumption, the R² score for the training and test sets reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
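The abstract reports model quality as R² and MAE on a 1-100 efficiency-class scale. As a reminder of what those two metrics measure, here is a minimal pure-Python sketch; the sample values are made up for illustration, not taken from the study:

```python
def mae(y_true, y_pred):
    """Mean absolute error: average absolute deviation between
    predicted and true efficiency scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual
    sum of squares to the total sum of squares around the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical efficiency scores (1-100 scale)
y_true = [50.0, 60.0, 70.0, 80.0]
y_pred = [52.0, 58.0, 71.0, 77.0]
print(mae(y_true, y_pred))       # 2.0
print(r2_score(y_true, y_pred))  # 0.964
```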
Procedia PDF Downloads 60
2368 The Relationship between Knowledge Management Processes and Strategic Thinking at the Organization Level
Authors: Bahman Ghaderi, Hedayat Hosseini, Parviz Kafche
Abstract:
The role of knowledge management processes in achieving the strategic goals of organizations is crucial. To this end, the relationship between knowledge management processes and the different dimensions of strategic thinking (which underpins long-term organizational planning) should be understood. This research examines the relationship between each of the five knowledge management processes (creation, storage, transfer, audit, and deployment) and each dimension of strategic thinking (vision, creativity, systems thinking, communication, and analysis) in one of the major sectors of the food industry in Iran. In this research, knowledge management and its dimensions (knowledge acquisition, knowledge storage, knowledge transfer, knowledge auditing, and finally knowledge utilization) are treated as the independent variables, and strategic thinking and its dimensions (creativity, systems thinking, vision, strategic analysis, and strategic communication) as the dependent variable. The statistical population of this study consisted of 245 managers and employees of the Minoo Food Industrial Group in Tehran. A simple random sampling method was used, and data were collected by a questionnaire designed by the research team. Data were analyzed using SPSS 21 software; LISREL software was used for calculating and drawing the models and graphs. Among the factors investigated, knowledge storage, with 0.78, had the greatest effect, and knowledge transfer, with 0.62, the least effect, on knowledge management and thus on strategic thinking.
Keywords: knowledge management, strategic thinking, knowledge management processes, food industry
Procedia PDF Downloads 173
2367 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization
Authors: Zhiyan Meng, Dan Liu, Jintao Meng
Abstract:
Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large numbers of parameters are not suitable for low-computation environments like mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model, which reduces the computational parameters through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high for downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIOT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to those of methods with ten times as many parameters.
Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model
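The Squeeze-and-Excitation block mentioned above recalibrates channel responses in three steps: global-average-pool each channel (squeeze), pass the channel descriptor through a small bottleneck ending in a sigmoid gate (excitation), and rescale each channel by its gate. A minimal dependency-free sketch of that mechanism; the tiny weight matrices are placeholders for illustration, not the paper's trained parameters:

```python
import math

def se_block(feature_maps, w1, w2):
    """feature_maps: list of C channels, each a flat list of activations.
    w1: C x (C//r) reduction weights, w2: (C//r) x C expansion weights.
    Returns the channel-rescaled feature maps."""
    # Squeeze: global average pooling, one descriptor per channel
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    scores = [sum(hidden[j] * w2[j][k] for j in range(len(hidden)))
              for k in range(len(w2[0]))]
    gates = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    # Scale: reweight each channel by its learned gate
    return [[v * g for v in ch] for ch, g in zip(feature_maps, gates)]

# Two channels, reduction ratio 2, placeholder weights
out = se_block([[1.0, 3.0], [2.0, 2.0]], [[1.0], [1.0]], [[1.0, 0.0]])
print(out)
```

Note that every gate lies in (0, 1), so the block can only attenuate channels relative to one another, which is what makes it a cheap attention mechanism over channels.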
Procedia PDF Downloads 33
2366 Histological Evaluation of the Neuroprotective Roles of Trans Cinnamaldehyde against High Fat Diet and Streptozotocin Induced Neurodegeneration in Wistar Rats
Authors: Samson Ehindero, Oluwole Akinola
Abstract:
Substantial evidence has shown an association between type 2 diabetes (T2D) and cognitive decline, and trans-cinnamaldehyde (TCA) has been shown to have many potent pharmacological properties. In the present study, we investigated the effects of TCA on type 2 diabetes-induced neurodegeneration. Neurodegeneration was induced in forty (40) adult Wistar rats using a high-fat diet (HFD) for 4 months followed by administration of a low dose of streptozotocin (STZ) (40 mg/kg, i.p.). TCA was administered orally for 30 days at doses of 40 mg/kg and 60 mg/kg body weight. Animals were randomized and divided into the following groups: A, control; B, diabetic; C, TCA (high dose); D, diabetic + TCA (high dose); E, diabetic + TCA (high dose) with high-fat diet; F, TCA (low dose); G, diabetic + TCA (low dose); and H, diabetic + TCA (low dose) with high-fat diet. Animals were subjected to behavioral tests followed by histological studies of the hippocampus. Demented rats showed impaired behavior in the Y-maze test compared to treated and control groups. Trans-cinnamaldehyde restored the histoarchitecture of the hippocampus of demented rats. The present study demonstrates that treatment with trans-cinnamaldehyde improves behavioral deficits and restores cellular histoarchitecture in rat models of neurodegeneration.
Keywords: neurodegeneration, trans cinnamaldehyde, high fat diet, streptozotocin
Procedia PDF Downloads 188
2365 Cytotoxicity of Nano β–Tricalcium Phosphate (β-TCP) on Human Osteoblast (hFOB1.19)
Authors: Jer Ping Ooi, Shah Rizal Bin Kasim, Nor Aini Saidin
Abstract:
The objective of this study was to synthesize nano-sized β-tricalcium phosphate (β-TCP) powder and assess its cytotoxic effects on human osteoblasts (hFOB1.19) using four cytotoxicity assays, namely the lactate dehydrogenase (LDH), tetrazolium salt (XTT), neutral red (NR), and sulforhodamine B (SRB) assays. β-tricalcium phosphate (β-TCP) is a calcium phosphate compound commonly used as an implant material. To date, bulk-sized β-TCP has been reported to be readily tolerated by osteogenic cells and the body based on in vitro and in vivo experiments and clinical studies. However, the extent to which nano-sized β-TCP reacts in these models compared to bulk β-TCP has yet to be investigated. Thus, in this project, the cells were treated with nano β-TCP powder at concentrations from 0 to 1000 μg/mL for 24, 48, and 72 h. The cytotoxicity tests showed that the loss of cell viability (> 50%) was high for hFOB1.19 cells in all assays. Cell cycle and apoptosis analysis of hFOB1.19 cells revealed that 50 μg/mL of the compound led to 30.5% of cells being apoptotic after 72 h of incubation, and the percentage increased to 58.6% when the concentration was increased to 200 μg/mL. When the incubation time was increased from 24 to 72 h, the percentage of apoptotic cells increased from 17.3% to 58.6% when hFOB1.19 cells were exposed to 200 μg/mL of nano β-TCP powder. Thus, both concentration and exposure duration affected the cytotoxic effects of the nano β-TCP powder on hFOB1.19. We hypothesize that these cytotoxic effects on hFOB1.19 are related to the nano-scale size of the β-TCP.
Keywords: β-tricalcium phosphate, hFOB1.19, adipose-derived mesenchymal stem cells, cytotoxicity
Procedia PDF Downloads 320
2364 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework
Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim
Abstract:
Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practical situations. This paper presents a new two-layer model based on the codebook algorithm incorporated with a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because of the invariance of LBP features to monotonic gray-scale changes, this layer produces block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is then employed to refine the outputs of the first layer, eliminating further false positives. As a result, the proposed approach greatly improves accuracy under dynamic backgrounds and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change
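The LBP texture measure used in the first layer thresholds each pixel's eight neighbours against the centre value, so the resulting code is unchanged under monotonic gray-scale shifts — the property the abstract relies on for illumination tolerance. A minimal sketch (the clockwise neighbour ordering chosen here is an arbitrary convention, not necessarily the paper's):

```python
def lbp_code(image, r, c):
    """8-neighbour local binary pattern at pixel (r, c) of a 2D list.
    Each neighbour >= centre contributes one bit."""
    centre = image[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

img = [[5, 5, 5],
       [5, 4, 3],
       [3, 3, 3]]
print(lbp_code(img, 1, 1))  # 135: bits 0,1,2,7 set

# Monotonic illumination shift leaves the code unchanged
brighter = [[v + 10 for v in row] for row in img]
print(lbp_code(brighter, 1, 1))  # 135 again
```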
Procedia PDF Downloads 220
2363 Encouraging Teachers to be Reflective: Advantages, Obstacles and Limitations
Authors: Fazilet Alachaher
Abstract:
Within the constructivist perspective of teaching, which views skilled teaching as knowing what to do in uncertain and unpredictable situations, this research essay explores the topic of reflective teaching by investigating the following questions: (1) What is reflective teaching, and why is it important? (2) Why should teachers be trained to be reflective, and how can they be prepared to be reflective? (3) What is the role of the teaching context in teachers' attempts to be reflective? This paper suggests that reflective teaching is important because of its various potential benefits to teaching. Through reflection, teachers can maintain their voices and creativity and thus have the authority to influence students, the curriculum, and school policies. The discussion also highlights the need to prepare student teachers and their professional counterparts to be reflective so they can develop the characteristics of reflective teaching and gain the potential benefits of reflection. This can be achieved by adopting models and techniques that are based on constructivist pedagogical approaches. The paper also suggests that sustaining teachers' attempts to be reflective in a workplace context, and aligning practice with pre-service teacher education programs, requires administrators or policy makers to provide the following: sufficient time for teachers to reflect and to work collaboratively to discuss challenges encountered in teaching, fewer non-classroom duties, regular in-service opportunities, and more facilities and freedom in choosing suitable ways of evaluating their students' progress and needs.
Keywords: creative teaching, reflective teaching, constructivist pedagogical approaches, teaching context, teacher's role, curriculum and school policies, teaching context effect
Procedia PDF Downloads 448
2362 Signal Integrity Performance Analysis in Capacitive and Inductively Coupled Very Large Scale Integration Interconnect Models
Authors: Mudavath Raju, Bhaskar Gugulothu, B. Rajendra Naik
Abstract:
The rapid advances in very large scale integration (VLSI) technology have reduced the minimum feature size to sub-quarter microns and switching times to tens of picoseconds or even less. As a result, high-speed digital circuits degrade due to signal integrity issues such as coupling effects, clock feedthrough, crosstalk noise, and delay uncertainty. Crosstalk noise in VLSI interconnects is a major concern, and its reduction has become more important for high-speed digital circuits. It is most critical in deep sub-micron (DSM) and ultra-deep sub-micron (UDSM) technologies. Increasing the spacing between the aggressor and victim lines is one technique to reduce crosstalk. Inserting a guard trace or shield between the aggressor and victim is also a prominent option for minimizing crosstalk. In this paper, far-end crosstalk noise is estimated with a mutual-inductance-and-capacitance RLC interconnect model. We also investigate the extent of crosstalk in capacitively and inductively coupled interconnects and minimize it through the shield insertion technique.
Keywords: VLSI, interconnects, signal integrity, crosstalk, shield insertion, guard trace, deep sub micron
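As first-order intuition for the capacitive part of the coupling, a fast step on the aggressor divides across the coupling capacitance and the victim's capacitance to ground, giving a charge-sharing estimate of the peak noise on a quiet victim. This deliberately ignores the inductive terms the paper's RLC model includes, so treat it as a back-of-the-envelope sketch only; the capacitance values below are made up for illustration:

```python
def capacitive_crosstalk_peak(v_step, c_coupling, c_ground):
    """Charge-sharing estimate of peak victim noise: the coupling
    capacitance Cm and victim-to-ground capacitance Cg form a
    capacitive voltage divider for a fast aggressor step."""
    return v_step * c_coupling / (c_coupling + c_ground)

# 1 V aggressor step, Cm = 20 fF coupling, Cg = 80 fF to ground
print(capacitive_crosstalk_peak(1.0, 20e-15, 80e-15))  # 0.2 V
```

A shield inserted between the lines works, in these terms, by diverting most of Cm to the grounded shield, shrinking the numerator of the divider.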
Procedia PDF Downloads 188
2361 The Imagined Scientific Drawing as a Representative of the Content Provided by Emotions to Scientific Rationality
Authors: Dení Stincer Gómez, Zuraya Monroy Nasr
Abstract:
From the epistemology of emotions, one current topic of reflection is the function that emotions fulfill in the rational processes involved in scientific activity. So far, three functions have been assigned to them: selective, heuristic, and carriers of content. For this last function, it is argued that emotions, like our perceptual organs, contribute relevant content to reasoning, which is then converted into linguistic statements or graphic representations. In this paper, of a qualitative and philosophical nature, arguments are provided for two hypotheses: (1) if emotions provide content to the mind, which then translates it into language or representations, then it is important to take up the idea of the Saussurean linguistic sign to understand this process. This sign has two elements: the signified and the signifier. Emotions would provide meanings, and reasoning creates the signifier. (2) The meanings provided by emotions are properties and qualities of phenomena generally not accessible to the sense organs. These meanings must be imagined, and the imagination is nurtured by the feeling that "maybe this is the way." One way to access the content provided by emotions is through imagined scientific drawings. The atomic models created since Thomson, the structure of crystals by René Just, the representations of lunar eclipses by Johannes, fractal geometry, and the structure of DNA, among others, have resulted fundamentally from the imagination. These representations, not provided by the sense organs, seem to come from the emotional involvement of scientists in their desire to understand, explain, and discover.
Keywords: emotions, epistemic functions of emotions, scientific drawing, linguistic sign
Procedia PDF Downloads 75
2360 The Interplay of Dietary Fibers and Intestinal Microbiota Affects Type 2 Diabetes by Generating Short-Chain Fatty Acids
Authors: Muhammad Mazhar, Yong Zhu, Likang Qin
Abstract:
Foods contain endogenous components known as dietary fibers, which are classified into soluble and insoluble forms. Dietary fibers resist gut digestive enzymes, modulating the anaerobic intestinal microbiota (AIM) and producing short-chain fatty acids (SCFAs). Acetate, butyrate, and propionate dominate in the gut, and different pathways, including the Wood-Ljungdahl and acrylate pathways, generate these SCFAs. In pancreatic dysfunction, the release of insulin/glucagon is impaired, which leads to hyperglycemia. SCFAs enhance insulin sensitivity and secretion, beta-cell function, leptin release, mitochondrial function, and intestinal gluconeogenesis in human organs, which positively affects type 2 diabetes (T2D). Research models have shown that SCFAs either enhance the release of peptide YY (PYY) and glucagon-like peptide-1 (GLP-1) from (entero-endocrine) L-cells or promote the release of the satiety hormone leptin in adipose tissues through the G-protein receptors GPR-41/GPR-43. Dietary fibers are the components of foods that influence AIM and produce SCFAs, which may offer beneficial effects against T2D. This review addresses the effectiveness of SCFAs in modulating the gut AIM during the fermentation of dietary fiber and their value against T2D.
Keywords: dietary fibers, intestinal microbiota, short-chain fatty acids, fermentation, type 2 diabetes
Procedia PDF Downloads 74
2359 Regression of Hand Kinematics from Surface Electromyography Data Using a Long Short-Term Memory-Transformer Model
Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh
Abstract:
Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints in the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data is converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
Keywords: surface electromyography, LSTM-transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding
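The Leaky Integrate-and-Fire neuron used to turn the processed sEMG into spike patterns integrates its input current with an exponential leak and emits a spike (then resets) when the membrane potential crosses a threshold. A minimal forward-Euler sketch with made-up parameters, not the paper's:

```python
def lif_spikes(current, dt=1.0, tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a list of input
    currents; returns the time steps at which the neuron spiked.
    Dynamics: dv/dt = (-(v - v_rest) + I) / tau, reset on v >= v_th."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant suprathreshold drive produces regular spiking
print(lif_spikes([2.0] * 30))  # [6, 13, 20, 27]
```

In an sEMG pipeline the input current would be the rectified, band-filtered signal, so stronger muscle activation maps to a higher firing rate.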
Procedia PDF Downloads 18
2358 A Longitudinal Study of Psychological Capital, Parent-Child Relationships, and Subjective Well-Being in Economically Disadvantaged Adolescents
Authors: Chang Li-Yu
Abstract:
Purposes: The present research focuses on exploring the latent growth model of psychological capital in disadvantaged adolescents and assessing its relationship with subjective well-being. Methods: A longitudinal study design was utilized, with data from the Taiwan Database of Children and Youth in Poverty (TDCYP), using the student questionnaires from 2009, 2011, and 2013. Data analysis was conducted using both univariate and multivariate latent growth curve models. Results: This study finds that: (1) the initial state and growth rate of individual factors such as parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents have a predictive impact; (2) there are positive interactive effects in the development of factors such as parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents; and (3) the initial state and growth rate of parent-child relationships and psychological capital in economically disadvantaged adolescents positively affect the initial state and growth rate of their subjective well-being. Recommendations: Based on these findings, this study discusses the significance of psychological capital and family cohesion for the mental health of economically disadvantaged youth and offers suggestions for counseling, psychological therapy, and future research.
Keywords: economically disadvantaged adolescents, psychological capital, parent-child relationships, subjective well-being
Procedia PDF Downloads 62
2357 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming
Authors: William Chung
Abstract:
This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends highly on the number of decomposition steps. If the decomposition steps can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals passed from the subproblem to the master problem. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved, providing multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP is a tradeoff between the cost of forming and solving the approximate-LP subproblems and the savings in decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems
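To make the multiple-proposals idea concrete, consider a subproblem whose feasible set is a simple box [0, u]. The plain LP subproblem (minimize the reduced cost rc'x) returns a single vertex, while adding a quadratic penalty μ‖x − x̄‖² makes the problem separable with a clipped closed-form minimizer, so varying μ yields several distinct proposals per decomposition step. This toy sketch shows only the proposal-generation step, not the full master/subproblem loop, and it is an illustration of the general idea rather than the authors' exact algorithm:

```python
def clip(x, lo, hi):
    return max(lo, min(hi, x))

def lp_subproblem(reduced_cost, upper):
    """Classical DW subproblem over the box [0, u]: the minimizer of a
    linear objective is a vertex (bang-bang per coordinate)."""
    return [u if rc < 0 else 0.0 for rc, u in zip(reduced_cost, upper)]

def regularized_subproblem(reduced_cost, upper, centre, mu):
    """Quadratic-penalty variant: min rc'x + mu*||x - centre||^2 over
    the box. Separable, so each coordinate has a clipped closed form:
    x_i = clip(centre_i - rc_i / (2*mu), 0, u_i)."""
    return [clip(c - rc / (2.0 * mu), 0.0, u)
            for rc, u, c in zip(reduced_cost, upper, centre)]

def proposals(reduced_cost, upper, centre, mus):
    """One LP vertex proposal plus one interior proposal per penalty mu."""
    cols = [lp_subproblem(reduced_cost, upper)]
    cols += [regularized_subproblem(reduced_cost, upper, centre, mu)
             for mu in mus]
    return cols

# Reduced costs from hypothetical master duals; box [0, 4]^2
print(proposals([-1.0, 2.0], [4.0, 4.0], [1.0, 1.0], [0.5, 2.0]))
```

Each extra μ costs one cheap separable solve but hands the master problem an additional column, which is exactly the tradeoff the abstract describes.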
Procedia PDF Downloads 168
2356 Does Citizens’ Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement
Authors: Avdasheva Svetlana, Kryuchkova Polina
Abstract:
The comparative social efficiency of private and public enforcement of law is debated. This question is not only of academic interest; it also matters for the development of the legal system and regulation. Generally, involvement of ‘common citizens’ in public law enforcement is considered beneficial, while involvement of interest-group representatives is not. Institutional economics, as well as law and economics, considers the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by incentives linked to social welfare (or another indicator of the public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However, administrative law enforcement may be designed in such a way that it becomes driven mainly by the individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer reactive public enforcement even if private enforcement is available. However, replacement of public enforcement by its reactive version negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs. pure public and private vs. reactive public enforcement models with examples from three legislative subsystems in Russia: labor law, consumer protection law, and competition law. While development of private enforcement in place of public enforcement (especially its reactive form) is desirable, replacement of both public and private enforcement by the reactive model is definitely not.
Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, Russia
Procedia PDF Downloads 495
2355 Detecting Music Enjoyment Level Using Electroencephalogram Signals and Machine Learning Techniques
Authors: Raymond Feng, Shadi Ghiasi
Abstract:
An electroencephalogram (EEG) is a non-invasive technique that records electrical activity in the brain using scalp electrodes. Researchers have studied the use of EEG to detect emotions and moods by collecting signals from participants and analyzing how those signals correlate with their activities. In this study, researchers investigated the relationship between EEG signals and music enjoyment. Participants listened to music while data were collected. During the signal-processing phase, power spectral densities (PSDs) were computed from the signals, and dominant brainwave frequencies were extracted from the PSDs to form a comprehensive feature matrix. A machine learning approach was then taken to find correlations between the processed data and the music enjoyment level indicated by the participants. To improve on previous research, multiple machine learning models were employed, including a K-Nearest Neighbors classifier, a Support Vector classifier, and a Decision Tree classifier. Hyperparameter tuning was applied to each model to further increase its performance. The experiments showed that a strong correlation exists, with the tuned Decision Tree classifier yielding 85% accuracy. This study shows that EEG is a reliable means to detect music enjoyment and has future applications, including personalized music recommendation, mood adjustment, and mental health therapy.
Keywords: EEG, electroencephalogram, machine learning, mood, music enjoyment, physiological signals
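The feature-extraction step described above can be sketched in a few lines: estimate per-band power from a signal's spectrum and keep the dominant band as a feature. The sampling rate, band edges, and the synthetic 10 Hz "epoch" below are illustrative assumptions, not the study's actual recording parameters.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Return average spectral power in each canonical EEG band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic one-second epoch dominated by a 10 Hz (alpha-band) rhythm.
t = np.arange(FS) / FS
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=FS)
powers = band_powers(epoch)
dominant = max(powers, key=powers.get)
print(dominant)  # → alpha
```

Stacking such band-power vectors over participants and epochs yields the feature matrix that the classifiers are then trained on.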
Procedia PDF Downloads 64
2354 Design and Fabrication of a Parabolic Trough Collector and Experimental Investigation of Direct Steam Production in Tehran
Authors: M. Bidi, H. Akhbari, S. Eslami, A. Bakhtiari
Abstract:
Due to the high potential for solar energy utilization in Iran, development of the related technologies is of great necessity. Linear parabolic collectors are among the most common and most efficient means of harnessing solar energy. The main goal of this paper is the design and construction of a parabolic trough collector to produce hot water and steam in Tehran. To provide precise and practical plans, 3D models of the collector under consideration were developed using SolidWorks software. The collector was designed so that its tilt angle can be adjusted manually. To increase the concentration ratio, a small-diameter absorber tube was selected, and to enhance solar absorption, a U-shaped tube was used. One of the outstanding properties of this collector is its simple design and the use of low-cost metal and plastic materials in its manufacture. The collector was installed at Shahid Beheshti University in Tehran, and solar irradiation, ambient temperature, wind speed, and the collector's steam production rate were measured on different days and at different hours in July. Results revealed that a 1 × 2 m parabolic trough collector located in Tehran is able to produce steam at a rate of 300 ml/s at atmospheric pressure and without using a vacuum cover over the absorber tube.
Keywords: desalination, parabolic trough collector, direct steam production, solar water heater, design and construction
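The link between a small-diameter absorber and a higher concentration ratio is simple geometry: for a trough of aperture width W with a tubular receiver of outer diameter D, the geometric concentration ratio is C = W / (π·D). The 1 m aperture and the candidate tube diameters below are illustrative assumptions, not the paper's actual dimensions.

```python
import math

def concentration_ratio(aperture_width_m, absorber_diameter_m):
    """Geometric concentration ratio of a trough with a tubular receiver."""
    return aperture_width_m / (math.pi * absorber_diameter_m)

W = 1.0  # aperture width of the 1 m x 2 m collector, in metres
for d_mm in (15, 25, 40):
    c = concentration_ratio(W, d_mm / 1000.0)
    print(f"D = {d_mm} mm -> C = {c:.1f}")  # → 21.2, 12.7, 8.0
```

Halving the tube diameter roughly doubles C, which is why the designers favored a small absorber tube.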
Procedia PDF Downloads 313
2353 LLM-Powered User-Centric Knowledge Graphs for Unified Enterprise Intelligence
Authors: Rajeev Kumar, Harishankar Kumar
Abstract:
Fragmented data silos within enterprises impede the extraction of meaningful insights and hinder efficiency in tasks such as product development, client understanding, and meeting preparation. To address this, we propose a system-agnostic framework that leverages large language models (LLMs) to unify diverse data sources into a cohesive, user-centered knowledge graph. By automating entity extraction, relationship inference, and semantic enrichment, the framework maps interactions, behaviors, and data around the user, enabling intelligent querying and reasoning across various data types, including emails, calendars, chats, documents, and logs. Its domain adaptability supports applications in contextual search, task prioritization, expertise identification, and personalized recommendations, all rooted in user-centric insights. Experimental results demonstrate its effectiveness in generating actionable insights, enhancing workflows such as trip planning, meeting preparation, and daily task management. This work advances the integration of knowledge graphs and LLMs, bridging the gap between fragmented data systems and intelligent, unified enterprise solutions focused on user interactions.
Keywords: knowledge graph, entity extraction, relation extraction, LLM, activity graph, enterprise intelligence
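The user-centred graph idea can be sketched minimally: (subject, relation, object) triples extracted from heterogeneous sources are indexed around a user node and queried uniformly. In the proposed framework an LLM produces these triples; here they are hard-coded, and all names and relations are invented for illustration.

```python
from collections import defaultdict

class UserGraph:
    """Tiny adjacency-list knowledge graph keyed by subject entity."""

    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, subject, relation=None):
        return [o for r, o in self.edges[subject] if relation in (None, r)]

g = UserGraph()
g.add("alice", "attends", "design-review")        # from calendar
g.add("alice", "emailed", "bob")                  # from email
g.add("design-review", "mentions", "roadmap.md")  # from meeting notes

# Cross-silo query: which documents relate to Alice's meetings?
docs = [d for m in g.neighbors("alice", "attends")
        for d in g.neighbors(m, "mentions")]
print(docs)  # → ['roadmap.md']
```

The point of the user-centric layout is exactly this kind of two-hop query: calendar and document silos become reachable from a single user node.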
Procedia PDF Downloads 11
2352 Graph Neural Networks and Rotary Position Embedding for Voice Activity Detection
Authors: YingWei Tan, XueFeng Ding
Abstract:
Attention-based voice activity detection models have gained significant attention in recent years due to their fast training speed and ability to capture a wide contextual range. The inclusion of multi-head attention and position embedding in the architecture is crucial. Having multiple attention heads allows for differential focus on different parts of the sequence, while position embedding provides guidance for modeling dependencies between elements at various positions in the input sequence. In this work, we propose an approach that treats each attention head as a node, enabling the application of graph neural networks (GNNs) to identify correlations among the different nodes. In addition, we adopt rotary position embedding (RoPE), which encodes absolute positional information into the input sequence via a rotation matrix and naturally incorporates explicit relative position information into the self-attention module. We evaluate the effectiveness of our method on a synthetic dataset, and the results demonstrate its superiority over the baseline CRNN in scenarios with low signal-to-noise ratio and noise, while also exhibiting robustness across different noise types. In summary, our proposed framework effectively combines the strengths of CNNs and RNNs (LSTM) and further enhances detection performance through the integration of graph neural networks and rotary position embedding.
Keywords: voice activity detection, CRNN, graph neural networks, rotary position embedding
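The rotary position embedding mentioned above can be sketched in NumPy: each consecutive pair of feature dimensions is rotated by an angle proportional to the token position, so dot products between rotated queries and keys depend only on relative position. The vector size and base frequency below follow the common convention and are assumptions, not the paper's exact configuration.

```python
import numpy as np

def rope(x, position, base=10000.0):
    """Rotate consecutive feature pairs of x by position-dependent angles."""
    d = x.shape[-1]
    assert d % 2 == 0
    half = np.arange(d // 2)
    theta = position / base ** (2 * half / d)  # one angle per feature pair
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.array([1.0, 0.0, 1.0, 0.0])
# Rotation preserves the norm, and position 0 leaves the vector unchanged.
print(np.allclose(rope(q, 0), q),
      np.isclose(np.linalg.norm(rope(q, 5)), np.linalg.norm(q)))  # → True True
```

The key property is relativity: the inner product of a query rotated to position m with a key rotated to position n depends only on m − n, which is what injects explicit relative position into self-attention.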
Procedia PDF Downloads 76
2351 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI
Authors: Zahra Alipour, Amirreza Moheb Afzali
Abstract:
In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
Keywords: YOLOv8, mediapipe, finger tracking, joint estimation, human-computer interaction (HCI)
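The IoU criterion behind the reported mAP at a 0.7 threshold is easy to make concrete: a predicted hand box counts as a true positive only if its intersection-over-union with a ground-truth box reaches the threshold. The pixel boxes below are invented for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) pixel boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

gt = (10, 10, 110, 110)    # ground-truth hand box
pred = (20, 20, 120, 120)  # detector output, shifted by 10 px
score = iou(gt, pred)
print(round(score, 3), score >= 0.7)  # → 0.681 False
```

Even a visually close detection (here a 10 px shift on a 100 px box) falls just under the 0.7 bar, which is why a 97.3% mAP at that threshold indicates tight localization.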
Procedia PDF Downloads 13
2350 Changes in When and Where People Are Spending Time in Response to COVID-19
Authors: Nicholas Reinicke, Brennan Borlaug, Matthew Moniot
Abstract:
The COVID-19 pandemic has resulted in a significant change in driving behavior as people respond to the new environment. However, existing methods for analyzing driver behavior, such as travel surveys and travel demand models, are not suited for incorporating abrupt environmental disruptions. To address this, we analyze a set of high-resolution trip data and introduce two new metrics for quantifying driving behavioral shifts as a function of time, allowing us to compare the time periods before and after the pandemic began. We apply these metrics to the Denver, Colorado metropolitan statistical area (MSA) to demonstrate the utility of the metrics. Then, we present a case study for comparing two distinct MSAs, Louisville, Kentucky, and Des Moines, Iowa, which exhibit significant differences in the makeup of their labor markets. The results indicate that although the regions of study exhibit certain unique driving behavioral shifts, emerging trends can be seen when comparing between seemingly distinct regions. For instance, drivers in all three MSAs are generally shown to have spent more time at residential locations and less time in workplaces in the time period after the pandemic started. In addition, workplaces that may be incompatible with remote working, such as hospitals and certain retail locations, generally retained much of their pre-pandemic travel activity.
Keywords: COVID-19, driver behavior, GPS data, signal analysis, telework
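A dwell-time metric of the kind described above can be sketched as follows: from stop records of (location type, hours), compute the share of observed time spent at each location type, then difference the pre- and post-pandemic shares. The toy numbers are assumptions for demonstration only, not values from the study.

```python
from collections import defaultdict

def time_shares(stops):
    """Fraction of total observed time spent at each location type."""
    totals = defaultdict(float)
    for place, hours in stops:
        totals[place] += hours
    grand = sum(totals.values())
    return {p: h / grand for p, h in totals.items()}

# Hypothetical daily stop records for one region, before and after the pandemic.
pre = time_shares([("home", 12), ("work", 9), ("retail", 3)])
post = time_shares([("home", 18), ("work", 4), ("retail", 2)])
shift = {p: round(post[p] - pre[p], 3) for p in pre}
print(shift["home"] > 0, shift["work"] < 0)  # → True True
```

Computing this shift per week and per MSA yields a time series of behavioral change that can be compared across regions, as the paper does for Denver, Louisville, and Des Moines.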
Procedia PDF Downloads 112
2349 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags
Authors: Zhang Shuqi, Liu Dan
Abstract:
To address problems in intelligent network anomaly traffic detection models, such as low detection accuracy caused by a lack of training samples and poor performance on small-sample attack detection, a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training, is proposed. Generating adversarial data with category labels can enhance the training effect and improve classification accuracy and model robustness. F-ACGAN consists of three steps: feature preprocessing, which includes data type conversion, dimensionality reduction, and normalization; design of a generative adversarial network model with feature learning ability, whose sample generation quality is improved through adversarial iterations between the generator and the discriminator; and addition of an adversarial disturbance factor along the gradient direction of the classification model, to improve the diversity and antagonism of the generated data and to encourage the model to learn adversarial classification features. An experiment constructing a classification model with the UNSW-NB15 dataset shows that, with F-ACGAN enhancing the basic model, classification accuracy improves by 8.09% and the F1 score by 6.94%.
Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation
Procedia PDF Downloads 109
2348 Leveraging Large Language Models to Build a Cutting-Edge French Word Sense Disambiguation Corpus
Authors: Mouheb Mehdoui, Amel Fraisse, Mounir Zrigui
Abstract:
With the increasing amount of data circulating over the Web, there is a growing need to develop and deploy tools aimed at unraveling semantic nuances within texts and sentences. The challenge of extracting precise meanings arises from the complexity of natural language, since words usually have multiple interpretations depending on the context. Precisely interpreting a word within a given context is exactly what the task of Word Sense Disambiguation (WSD) addresses. It is a long-standing task in Natural Language Processing, aimed at determining the meaning a word carries in a particular context, thereby increasing the correctness of applications that process language. Numerous linguistic resources are accessible online, including WordNet, thesauri, and dictionaries, enabling exploration of diverse contextual meanings. However, several limitations persist. These include the scarcity of resources for certain languages, the limited number of examples within corpora, and the difficulty of accurately detecting the topic or context covered by a text, which significantly impacts word sense disambiguation. This paper discusses the different approaches to WSD and reviews the corpora available for this task. We contrast these approaches and highlight their limitations, which will allow us to build a corpus in French targeted for WSD.
Keywords: semantic enrichment, disambiguation, context fusion, natural language processing, multilingual applications
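The gloss-overlap idea behind many knowledge-based WSD approaches can be sketched with a simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the target word's context. The tiny two-sense inventory for the French word "avocat" (lawyer vs. avocado) is an illustrative assumption, not part of any real lexicon.

```python
# Hypothetical two-sense inventory with English glosses for demonstration.
SENSES = {
    "avocat#lawyer": "person who practises law and defends clients in court",
    "avocat#fruit": "green tropical fruit with a large seed eaten in salads",
}

def lesk(context, senses):
    """Return the sense whose gloss overlaps the context the most (simplified Lesk)."""
    ctx = set(context.lower().split())
    def overlap(gloss):
        return len(ctx & set(gloss.split()))
    return max(senses, key=lambda s: overlap(senses[s]))

sentence = "the court heard the clients and their lawyer defends them"
print(lesk(sentence, SENSES))  # → avocat#lawyer
```

Annotated corpora of the kind the paper aims to build serve both as training data for supervised alternatives to this heuristic and as gold standards for evaluating it.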
Procedia PDF Downloads 17
2347 TerraEnhance: High-Resolution Digital Elevation Model Generation using GANs
Authors: Siddharth Sarma, Ayush Majumdar, Nidhi Sabu, Mufaddal Jiruwaala, Shilpa Paygude
Abstract:
Digital Elevation Models (DEMs) are digital representations of the Earth’s topography, including information about elevation, slope, aspect, and other terrain attributes. DEMs play a crucial role in various applications, including terrain analysis, urban planning, and environmental modeling. In this paper, TerraEnhance is proposed, a distinct approach to high-resolution DEM generation using Generative Adversarial Networks (GANs) combined with Real-ESRGAN. By learning from a dataset of low-resolution DEMs, the GANs are trained to upscale the data by a factor of 10, resulting in significantly enhanced DEMs with improved resolution and finer details. The integration of Real-ESRGAN further enhances visual quality, leading to more accurate representations of the terrain. A post-processing layer is introduced, employing high-pass filtering to refine the generated DEMs, preserving important details while reducing noise and artifacts. The results demonstrate that TerraEnhance outperforms existing methods, producing high-fidelity DEMs with intricate terrain features and exceptional accuracy. These advancements make TerraEnhance suitable for various applications, such as terrain analysis and precise environmental modeling.
Keywords: DEM, ESRGAN, image upscaling, super resolution, computer vision
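The high-pass post-processing step can be sketched as an unsharp mask: subtract a local mean (a box blur) from the DEM, scale the residual, and add it back to sharpen terrain detail. The 3×3 kernel, the gain, and the toy elevation grid are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def sharpen_dem(dem, gain=0.5):
    """Unsharp-mask a DEM using a 3x3 box blur (edges handled by padding)."""
    padded = np.pad(dem, 1, mode="edge")
    blurred = sum(padded[i:i + dem.shape[0], j:j + dem.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    high_pass = dem - blurred          # residual detail the blur removed
    return dem + gain * high_pass      # add it back, amplified

dem = np.array([[10.0, 10, 10, 10],
                [10, 10, 14, 10],
                [10, 10, 10, 10],
                [10, 10, 10, 10]])
out = sharpen_dem(dem)
# The isolated 4 m bump is accentuated; flat areas stay flat.
print(out[1, 2] > dem[1, 2], np.isclose(out[3, 0], 10.0))  # → True True
```

A real pipeline would apply this per tile after the GAN upscaling, often with a Gaussian rather than a box kernel, and clip the output to plausible elevation bounds.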
Procedia PDF Downloads 11
2346 A Comparative Study of Global Power Grids and Global Fossil Energy Pipelines Using GIS Technology
Authors: Wenhao Wang, Xinzhi Xu, Limin Feng, Wei Cong
Abstract:
This paper comprehensively investigates the current development status of global power grids and fossil energy pipelines (oil and natural gas) and proposes a standard visual platform for global power and fossil energy based on Geographic Information System (GIS) technology. On this platform, a series of systematic visual models is built from global spatial data and systematic energy and power parameters. The current Global Power Grids Map and Global Fossil Energy Pipelines Map are plotted, covering more than 140 countries and regions across the world. Using multi-scale fusion data processing and modeling methods, a basic database for the world’s fossil energy pipelines and power grids information system is established, providing important data support for global fossil energy and electricity research. Finally, through a systematic comparative study of global fossil energy pipelines and power grids, the general status of global fossil energy and electricity development is reviewed, and the energy transition in key areas is evaluated and analyzed. Through the comparative analysis of fossil and clean energy, directions for future research on clean development and energy transition are identified.
Keywords: energy transition, geographic information system, fossil energy, power systems
Procedia PDF Downloads 153
2345 Enhancing Seismic Performance of Ductile Moment Frames with Delayed Wire-Rope Bracing Using Middle Steel Plate
Authors: Babak Dizangian, Mohammad Reza Ghasemi, Akram Ghalandari
Abstract:
Moment frames have considerable ductility under cyclic lateral loads and displacements; however, if this ductility allows the relative displacement to exceed the permissible limit, it can impose unfavorable hysteretic behavior on the frame. Therefore, adding a bracing system capable of preserving high energy-absorption capacity and controlling displacements without a considerable increase in stiffness is quite important. This paper investigates the retrofitting of a single-storey steel moment frame with a delayed wire-rope bracing system using a middle steel plate. In this model, the steel plate lies where the wire ropes meet, and the geometry is such that the cables remain continuously in tension, allowing them to take full advantage of their inherent capacity to carry tensile stress. Using the steel plate also reduces the system stiffness considerably compared to cross-bracing systems and preserves the ductile frame’s energy-absorption capacity. In this research, software models of the delayed wire-rope bracing system have been studied, validated, and compared with other researchers’ laboratory test results.
Keywords: cyclic loading, delayed wire rope bracing, ductile moment frame, energy absorption, hysteresis curve
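The energy a member dissipates per loading cycle is the area enclosed by its force-displacement hysteresis loop, which for digitized test data can be computed with the shoelace formula. The parallelogram-shaped loop below is a toy stand-in for measured data, with invented displacement and force values.

```python
def loop_energy(points):
    """Area enclosed by a closed force-displacement cycle (shoelace formula)."""
    area = 0.0
    n = len(points)
    for k in range(n):
        x1, y1 = points[k]
        x2, y2 = points[(k + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Corners of an idealized hysteresis loop: (displacement in m, force in kN).
cycle = [(-0.02, -80.0), (0.02, 100.0), (0.02, 80.0), (-0.02, -100.0)]
print(round(loop_energy(cycle), 3), "kJ dissipated per cycle")  # → 0.8 kJ ...
```

Summing this area over all cycles of a test gives the cumulative dissipated energy used to compare bracing schemes; a wider, fuller loop at the same peak displacement means better energy absorption.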
Procedia PDF Downloads 292