Search results for: continuum models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6917

2357 Assessment and Prediction of Vehicular Emissions in Commonwealth Avenue, Quezon City at Various Policy and Technology Scenarios Using Simple Interactive Model (SIM-Air)

Authors: Ria M. Caramoan, Analiza P. Rollon, Karl N. Vergel

Abstract:

The Simple Interactive Models for Better Air Quality (SIM-air) is an integrated approach model that uses available information to support integrated urban air quality management. This study utilized the vehicular air pollution information system module of SIM-air to assess vehicular emissions along Commonwealth Avenue, Quezon City, Philippines. The main objective of the study is to assess and predict the contribution of different vehicle types to vehicular emissions of PM₁₀, SOₓ, and NOₓ under different policy and technology scenarios. For the base year 2017, the results show vehicular emissions of 735.46 tons of PM₁₀, 108.90 tons of SOₓ, and 2,101.11 tons of NOₓ. Motorcycles are the major source of particulates, contributing about 52% of PM₁₀ emissions, while Public Utility Jeepneys contribute 27% of SOₓ emissions and private gasoline cars contribute 39% of NOₓ emissions. Ambient air quality monitoring was also conducted in the study area for the standard parameters PM₁₀, SO₂, and NO₂. Results show averages of 88.11 µg/Ncm, 47.41 µg/Ncm, and 22.54 µg/Ncm for PM₁₀, NO₂, and SO₂, respectively, all within the DENR National Ambient Air Quality Guideline Values. Future emissions of PM₁₀, NOₓ, and SOₓ were estimated under different scenarios. The results show that by the year 2030, without the scenarios implemented, PM₁₀ emissions will increase by 186.2%, and NOₓ and SOₓ emissions by 38.9% and 5.5%, respectively.
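A SIM-air-style inventory is, at its core, a bottom-up sum of activity times emission factor per vehicle category. The sketch below illustrates that accounting; all vehicle counts, travel distances, and emission factors are illustrative placeholders, not the study's actual inputs.

```python
# Bottom-up vehicular emissions inventory sketch (SIM-air style).
# All numbers below are hypothetical illustrations.

def annual_emissions_tons(fleet):
    """Sum emissions over vehicle categories.

    Each entry: (vehicle_count, km_per_vehicle_per_year,
                 emission_factor_g_per_km). Returns metric tons/year.
    """
    grams = sum(n * vkt * ef for n, vkt, ef in fleet)
    return grams / 1e6  # grams -> metric tons

# Hypothetical PM10 fleet: motorcycles, jeepneys, gasoline cars
pm10_fleet = [
    (50_000, 8_000, 0.6),    # motorcycles
    (5_000, 40_000, 0.5),    # public utility jeepneys
    (80_000, 12_000, 0.05),  # private gasoline cars
]
total = annual_emissions_tons(pm10_fleet)
# Category share, as used to attribute e.g. 52% of PM10 to motorcycles
share_motorcycles = 50_000 * 8_000 * 0.6 / (total * 1e6)
```

Scenario analysis then amounts to re-running the same sum with projected fleet sizes and emission factors for each target year.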

Keywords: ambient air quality, emissions inventory, mobile air pollution, vehicular emissions

Procedia PDF Downloads 136
2356 Creative Peace Diplomacy Model by the Perspective of Dialogue Management for International Relations

Authors: Bilgehan Gültekin, Tuba Gültekin

Abstract:

Peace diplomacy is the most important international tool for keeping peace all over the world. The study, titled "Peace Diplomacy for International Relations," consists of three parts. In the first part, peace diplomacy is introduced as a tool of peace communication and peace management, and peace communication is explained from an international communication perspective. The second part develops public relations events and communication campaigns original to peace diplomacy; its aim is to produce original public communication and dialogue management tools for peace diplomacy. The aim of the final part is to produce an original public communication model for international relations. The model includes peace modules, peace management projects, original dialogue procedures and protocols, dialogue education, dialogue management strategies, peace actors, communication models, peace team management, and public diplomacy steps. The creative part of the study aims to develop a model usable in international relations by all countries. The Creative Peace Diplomacy Model is developed for the cases of Turkey-France and Turkey-Greece relations, so the communication and public relations events and campaigns are developed originally for this study.

Keywords: peace diplomacy, public communication model, dialogue management, international relations

Procedia PDF Downloads 541
2355 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. Performances of the model are assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
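The reconstruction idea above can be sketched in a few lines: a model reconstructs the signal, and features of the residual (input minus reconstruction) feed a classifier. In this minimal sketch a moving average stands in for the LSTM autoencoder, and a fixed threshold stands in for the random forest; signals and threshold are illustrative.

```python
# Reconstruction-based anomaly detection sketch. A moving average
# stands in for the per-sensor LSTM autoencoder; the threshold rule
# stands in for the random forest classifier.

def reconstruct(signal, window=3):
    """Stand-in reconstruction: centered moving average."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def residual_features(signal):
    """Features of the difference signal, as fed to a classifier."""
    rec = reconstruct(signal)
    res = [abs(a - b) for a, b in zip(signal, rec)]
    return {"mean": sum(res) / len(res), "max": max(res)}

def is_anomalous(signal, max_thresh=5.0):
    return residual_features(signal)["max"] > max_thresh

normal = [20.0, 20.5, 20.2, 20.4, 20.3, 20.6]   # e.g. temperature (C)
spiked = [20.0, 20.5, 20.2, 35.0, 20.3, 20.6]   # abrupt excursion
```

A model trained only on normal samples reconstructs normal data well (small residual) and anomalies poorly (large residual), which is what makes the residual informative.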

Keywords: anomaly detection, autoencoder, data centers, deep learning

Procedia PDF Downloads 192
2354 Strengthening Evaluation of Steel Girder Bridge under Load Rating Analysis: Case Study

Authors: Qudama Albu-Jasim, Majdi Kanaan

Abstract:

This paper presents a case study on the load rating and strengthening evaluation of a six-span steel girder bridge in the city of Colton, California. To simulate the load rating and strengthening assessment of the Colton Overhead bridge, a three-dimensional finite element model was built in the CSiBridge program. The three-dimensional finite-element models of the bridge account for the nonlinear behavior of critical bridge components in order to determine the feasibility and strengthening capacity under load rating analysis. The bridge was evaluated according to the Caltrans Bridge Load Rating Manual, 1st edition, rating the superstructure using the Load and Resistance Factor Rating (LRFR) method. The analysis was based on load rating to determine the largest loads that can be safely placed on the existing steel I-girder members and permitted to pass over the bridge. Through extensive numerical simulations, the bridge was found to be deficient in flexural and shear capacity, and strengthening is therefore needed to reduce the risk. An in-depth parametric study evaluates the sensitivity of the bridge's load rating response to variations in its structural parameters. The parametric analysis showed that uncertainties associated with the steel's yield strength, the superstructure's weight, and the diaphragm configurations should be considered during the fragility analysis of the bridge system.
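The LRFR check behind a load rating reduces to a rating factor: factored capacity left over after dead load, divided by factored live load plus impact; RF >= 1.0 means the member can carry the rating load. The sketch below uses typical AASHTO-style inventory load factors, and the member forces are illustrative, not the Colton bridge's values.

```python
# LRFR rating-factor sketch: RF = (C - gDC*DC - gDW*DW) / (gLL * LL_IM)
# where C is factored member capacity, DC/DW are dead-load effects,
# and LL_IM is the live-load effect including dynamic allowance.
# Load factors below are typical inventory-level values (assumption).

def rating_factor(capacity, dc, dw, ll_im,
                  g_dc=1.25, g_dw=1.50, g_ll=1.75):
    return (capacity - g_dc * dc - g_dw * dw) / (g_ll * ll_im)

# Hypothetical girder moments (kip-ft)
rf = rating_factor(capacity=5000.0, dc=1200.0, dw=300.0, ll_im=1400.0)
deficient = rf < 1.0  # RF below 1.0 flags a member for strengthening
```

In a study like this one, the live- and dead-load effects come from the finite element model, and the parametric study amounts to recomputing RF while varying inputs such as yield strength and superstructure weight.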

Keywords: load rating, CSiBridge, strengthening, uncertainties, case study

Procedia PDF Downloads 209
2353 Mass Polarization in Three-Body System with Two Identical Particles

Authors: Igor Filikhin, Vladimir M. Suslov, Roman Ya. Kezerashvili, Branislav Vlahivic

Abstract:

The mass-polarization term of the three-body kinetic energy operator is evaluated for different systems that include two identical particles: A+A+B. The term has to be taken into account in the analysis of AB and AA interactions based on experimental data for two- and three-body ground-state energies. In this study, we present three-body calculations within the framework of a potential model for the kaonic clusters K−K−p and ppK−, the nucleus ³H, and the hypernucleus ⁶ΛΛHe. The systems cluster well as A+(A+B), with a ground-state energy E2 for the pair A+B. The calculations are performed using the method of the Faddeev equations in configuration space, with phenomenological pair potentials. We show a correlation between the mass ratio mA/mB and the value δB of the mass-polarization term. For bosonic-like systems, this value is defined as δB = 2E2 − E3, where E3 is the three-body energy when VAA = 0. For systems of three particles with spin (isospin), models with averaged AB potentials are used; in this case, the Faddeev equations reduce to scalar ones, as for the bosonic-like system αΛΛ. We show that the additional energy connected with the mass-polarization term can be decomposed into a sum of two parts: one related to exchange and one related to the reduced mass. The state of the system can be described as follows: the particle A1 is bound within the A+B pair with the energy E2, and the second particle A2 is bound to the pair with the energy E3 − E2. Due to the identity of the A particles, A1 and A2 are interchangeable in the pair A+B. We show that the mass polarization δB correlates with the type of AB potential, using the system αΛΛ as an example.

Keywords: three-body systems, mass polarization, Faddeev equations, nuclear interactions

Procedia PDF Downloads 375
2352 Application of GA Optimization in Analysis of Variable Stiffness Composites

Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani

Abstract:

Variable angle tow (VAT) describes fibres that are curvilinearly steered within a composite lamina, which significantly enlarges the stiffness-tailoring freedom of VAT composite laminates. Composite structures with curvilinear fibres have been shown to improve the buckling load-carrying capability compared with straight-fibre laminates. However, the optimal design and analysis of VAT face high computational effort due to the increased number of design variables. In this article, an efficient optimization scheme is used in combination with the one-dimensional Carrera Unified Formulation (CUF) to find the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on LE-based CUF models, which employ Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load is considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, in comparison with results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to assess the accuracy of the optimized model. The numerical CUF approach combined with the optimization demonstrates the robustness and computational efficiency of the proposed methodology.
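The GA loop itself is simple even when each fitness evaluation is an expensive structural analysis. The toy sketch below searches over a pair of fibre angles against a smooth quadratic surrogate for the buckling load; the surrogate, bounds, and GA settings are purely illustrative, whereas the study evaluates candidates with CUF-based models.

```python
import random

# Toy GA over two fibre orientation angles, maximizing a surrogate
# "buckling load". The surrogate peaks at (45, 0) by construction;
# it stands in for the expensive CUF buckling analysis.

def surrogate_buckling_load(t0, t1):
    return 100.0 - 0.02 * (t0 - 45.0) ** 2 - 0.03 * t1 ** 2

def ga(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(-90, 90), rng.uniform(-90, 90))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the fitter half as parents
        pop.sort(key=lambda g: -surrogate_buckling_load(*g))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Averaging crossover plus Gaussian mutation
            child = tuple((x + y) / 2 + rng.gauss(0, 2.0)
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda g: surrogate_buckling_load(*g))

best = ga()  # should land near the surrogate's optimum (45, 0)
```

Replacing the surrogate with a trained ANN is exactly the accuracy check the abstract describes: the network learns the angle-to-load map, and the GA then queries it cheaply.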

Keywords: beam structures, layerwise, optimization, variable stiffness

Procedia PDF Downloads 141
2351 Understanding Climate Change with Chinese Elderly: Knowledge, Attitudes and Practices on Climate Change in East China

Authors: Pelin Kinay, Andy P. Morse, Elmer V. Villanueva, Karyn Morrissey, Philip L. Staddon, Shanzheng Zhang, Jingjing Liu

Abstract:

The present study evaluates the climate change and health-related knowledge, attitudes, and practices (KAP) of the elderly population (60 years and above) in the cities of Hefei and Suzhou, China (n=300). This cross-sectional study includes 150 participants in each city. Data on demographic characteristics, KAP, and climate change perceptions were collected using a semi-structured questionnaire. When asked about the potential impacts of climate change, over 79% of participants stated that climate change affected their lifestyle. Participants were most concerned about storms (51.7%), food shortage (33.3%), and drought (26%). The main health risks cited included water contamination (32%), air pollution-related diseases (38.3%), and lung disease (43%). Finally, a majority (68.3%) did not report receiving government assistance on climate change issues. Logistic regression models were used to analyse the links between socio-demographic factors and the participants' KAP. These findings provide insights for potential adaptation strategies targeting the elderly. It is recommended that the government take responsibility for awareness strategies that improve the coping capacity of the elderly in China with respect to climate change and its health impacts, and develop climate change adaptation strategies.

Keywords: China, climate change, elderly, KAP

Procedia PDF Downloads 266
2350 Quantitative Evaluation of Endogenous Reference Genes for ddPCR under Salt Stress Using a Moderate Halophile

Authors: Qinghua Xing, Noha M. Mesbah, Haisheng Wang, Jun Li, Baisuo Zhao

Abstract:

Droplet digital PCR (ddPCR) is being increasingly adopted for gene detection and quantification because of its higher sensitivity and specificity. According to previous observations and our lab data, it is essential to use endogenous reference genes (RGs) when investigating gene expression at the mRNA level under salt stress. This study aimed to select and validate suitable RGs for gene expression under salt stress using ddPCR. Six candidate RGs were selected based on the tandem mass tag (TMT)-labeled quantitative proteomics of Alkalicoccus halolimnae at four salinities. The expression stability of these candidate genes was evaluated using statistical algorithms (geNorm, NormFinder, BestKeeper, and RefFinder). The pdp gene showed only small fluctuations in cycle threshold (Ct) value and copy number, was ranked most stable by all algorithms, and was the most suitable RG for quantification of expression by both qPCR and ddPCR in A. halolimnae under salt stress. The single RG pdp and RG combinations were used to normalize the expression of ectA, ectB, ectC, and ectD under four salinities. The present study constitutes the first systematic analysis of endogenous RG selection for halophiles responding to salt stress. This work provides a theoretical basis and a methodological reference for internal control identification in ddPCR-based stress response models.
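The core of a BestKeeper-style stability screen is ranking candidate genes by the spread of their Ct values across conditions: the gene that moves least is the best internal control. The sketch below illustrates that ranking with made-up Ct values (gene names other than pdp are hypothetical), not the study's data.

```python
import statistics

# BestKeeper-style screen: rank candidate reference genes by the
# standard deviation (and coefficient of variation) of Ct values
# measured across the four salinities. Ct values are illustrative.

ct = {  # gene -> Ct readings at four salinities
    "pdp":   [21.1, 21.2, 21.0, 21.15],
    "geneX": [18.0, 19.5, 17.2, 20.1],
    "geneY": [24.0, 24.8, 23.1, 25.5],
}

def stability(values):
    sd = statistics.stdev(values)
    cv = sd / statistics.mean(values)
    return sd, cv

ranking = sorted(ct, key=lambda g: stability(ct[g])[0])
most_stable = ranking[0]  # the gene with the smallest Ct spread
```

geNorm and NormFinder use different statistics (pairwise variation, inter-/intra-group variance), which is why the study cross-checks several algorithms before settling on one RG.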

Keywords: endogenous reference gene, salt stress, ddPCR, RT-qPCR, Alkalicoccus halolimnae

Procedia PDF Downloads 103
2349 Preliminary Geophysical Assessment of Soil Contaminants around Wacot Rice Factory Argungu, North-Western Nigeria

Authors: A. I. Augie, Y. Alhassan, U. Z. Magawata

Abstract:

A geophysical investigation was carried out at the Wacot rice factory, Argungu, north-western Nigeria, using the 2D electrical resistivity method. The area lies between latitudes 12˚44′23ʺN and 12˚44′50ʺN and longitudes 4˚32′18ʺE and 4˚32′39ʺE, covering a total area of about 1.85 km². Two profiles were acquired in the Wenner configuration using a resistivity meter (Ohmega). The data obtained from the study area were modeled using RES2DINV software, which gives an automatic interpretation of the apparent resistivity data. The inverse resistivity models of the profiles show high resistivity values ranging from 208 Ωm to 651 Ωm. These high resistivity values in the overburden are due to the dryness and compactness of the strata, which lead to consolidation and indicate that those zones are free from leachate contamination. However, the inverse models also contain regions of low resistivity values (1 Ωm to 18 Ωm); these zones were identified as clayey and the most contaminated. Because clay and leachate have similar resistivity values, the low-resistivity regions indicate the leachate plume or zones of high leachate concentration. The leachate spreads mainly from the factory into the surrounding area and its groundwater. The maximum leachate infiltration was found at depths of 1 m to 15.9 m (P1) and 6 m to 15.9 m (P2) vertically, and laterally along the profiles from 67 m to 75 m (P1), 155 m to 180 m (P1), and 115 m to 192 m (P2).
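For a Wenner array, the measured resistance converts to apparent resistivity through the geometric factor 2πa, where a is the electrode spacing; the inversion software then fits a subsurface model to many such readings. The sketch below shows that conversion with illustrative readings, not the survey's field data.

```python
import math

# Wenner-array apparent resistivity: rho_a = 2 * pi * a * R,
# with a the electrode spacing (m) and R the measured resistance (ohm).

def wenner_apparent_resistivity(a_m, resistance_ohm):
    return 2 * math.pi * a_m * resistance_ohm

rho = wenner_apparent_resistivity(a_m=10.0, resistance_ohm=3.5)
# Interpretation rule of thumb from the survey: values of a few ohm-m
# flag clayey/leachate zones, while hundreds of ohm-m indicate dry,
# compact overburden.
leachate_suspect = rho < 18.0
```

Each (spacing, station) pair yields one apparent-resistivity datum; RES2DINV assembles these into the pseudosection that the inverse model is fitted to.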

Keywords: contaminant, leachate, soil, groundwater, electrical, resistivity

Procedia PDF Downloads 158
2348 Seismic Performance of Various Grades of Steel Columns Through Finite Element Analysis

Authors: Asal Pournaghshband, Roham Maher

Abstract:

This study presents a numerical analysis of the cyclic behavior of H-shaped steel columns, focusing on different steel grades: austenitic, ferritic, and duplex stainless steel, and carbon steel. Finite element (FE) models were developed and validated against experimental data, with predictions deviating from the measurements by no more than 6.5%. The study examined key parameters such as energy dissipation and failure modes. Results indicate that duplex stainless steel offers the highest strength, with superior energy dissipation but a tendency toward brittle failure at a maximum strain of 0.149. Austenitic stainless steel demonstrated balanced performance with excellent ductility and energy dissipation, reaching a maximum strain of 0.122, making it highly suitable for seismic applications. Ferritic stainless steel, while stronger than carbon steel, exhibited reduced ductility and energy absorption. Carbon steel displayed the lowest performance in terms of energy dissipation and ductility, with significant strain concentrations leading to earlier failure. These findings provide critical insights for optimizing material selection for earthquake-resistant structures, balancing strength, ductility, and energy dissipation under seismic conditions.

Keywords: energy dissipation, finite element analysis, H-shaped columns, seismic performance, stainless steel grades

Procedia PDF Downloads 22
2347 DesignChain: Automated Design of Products Featuring a Large Number of Variants

Authors: Lars Rödel, Jonas Krebs, Gregor Müller

Abstract:

The growing price pressure due to the increasing number of global suppliers, the growing individualization of products, and ever-shorter delivery times are upcoming challenges for industry. In this context, mass personalization stands for the individualized production of customer products in batch size 1 at the price of standardized products. The digitalization and automation of technical order processing give companies the opportunity to significantly reduce their cost of complexity and their lead times and thus enhance their competitiveness. Many companies already use a range of CAx tools and configuration solutions, but the expert knowledge of employees is often hidden in "knowledge silos" and rarely networked across processes. DesignChain describes the automated digital process from the recording of individual customer requirements, through design and technical preparation, to production. Configurators offer the possibility of mapping variant-rich products within the DesignChain. This transformation of customer requirements into product features makes it possible to generate even complex CAD models, such as those for large-scale plants, on a rule-based basis. With the aid of an automated CAx chain, production-relevant documents are then transferred digitally to production. This fully automatable process allows variants always to be generated from current version statuses.

Keywords: automation, design, CAD, CAx

Procedia PDF Downloads 74
2346 Traffic Analysis and Prediction Using Closed-Circuit Television Systems

Authors: Aragorn Joaquin Pineda Dela Cruz

Abstract:

Road traffic congestion is continually deteriorating in Hong Kong. The largest contributing factor is the increase in vehicle fleet size, resulting in higher competition for road space. This study proposes a system that processes closed-circuit television images and videos to provide real-time traffic detection and prediction. Specifically, a deep-learning model applies computer vision techniques to video- and image-based vehicle counting, and a separate model then detects and predicts traffic congestion levels from those data. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. The counts are then used to train long short-term memory networks to predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions than current solutions such as navigation apps.
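Downstream of the detector, the pipeline reduces to two small steps: map a per-frame vehicle count to a congestion level, and forecast the next count from the recent history. The sketch below illustrates both with a moving average standing in for the LSTM; the counts and thresholds are illustrative, not calibrated Hong Kong values.

```python
# Congestion labelling and naive short-term forecasting from
# per-frame vehicle counts (as produced by a detector such as YOLO).
# Thresholds and counts below are illustrative placeholders.

def congestion_level(count, thresholds=(10, 25)):
    low, high = thresholds
    if count < low:
        return "free-flow"
    return "moderate" if count < high else "congested"

def forecast_next(counts, window=3):
    """Moving-average stand-in for the LSTM forecaster."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

counts = [8, 12, 15, 20, 26, 30]      # vehicles per frame over time
next_count = forecast_next(counts)
level_now = congestion_level(counts[-1])
```

The LSTM earns its place over this baseline by learning daily and weekly periodicity in the count series rather than just local trend.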

Keywords: intelligent transportation system, vehicle detection, traffic analysis, deep learning, machine learning, computer vision, traffic prediction

Procedia PDF Downloads 102
2345 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
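The two metrics reported above are easy to state precisely: MAE is the average absolute deviation on the 1-100 efficiency-class scale, and R² is one minus the ratio of residual to total variance. The sketch below computes both from scratch on hypothetical scores (not the study's outputs).

```python
# The evaluation metrics used above, computed from first principles.
# y_true / y_pred are hypothetical efficiency-class scores (1-100).

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot  # 1.0 = perfect fit

y_true = [55.0, 70.0, 40.0, 85.0, 60.0]
y_pred = [52.0, 74.0, 38.0, 80.0, 63.0]
```

Reading the paper's numbers through these definitions: an MAE of 4.12 means predictions are off by about 4 class points on average, and the jump from R² 0.64 to 0.94 means the residual variance shrank to a small fraction of the total.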

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 55
2344 The Relationship between Knowledge Management Processes and Strategic Thinking at the Organization Level

Authors: Bahman Ghaderi, Hedayat Hosseini, Parviz Kafche

Abstract:

The role of knowledge management processes in achieving the strategic goals of organizations is crucial. To this end, the relationship between knowledge management processes and the different aspects of strategic thinking (followed by long-term organizational planning) should be understood. This research examines the relationship between each of the five knowledge management processes (creation, storage, transfer, audit, and deployment) and each dimension of strategic thinking (vision, creativity, thinking, communication, and analysis) in one of the major sectors of the food industry in Iran. In this research, knowledge management and its dimensions (knowledge acquisition, knowledge storage, knowledge transfer, knowledge auditing, and knowledge utilization) are the independent variables, and strategic thinking and its dimensions (creativity, systematic thinking, vision, strategic analysis, and strategic communication) are the dependent variable. The statistical population consisted of 245 managers and employees of the Minoo Food Industrial Group in Tehran. A simple random sampling method was used, and data were collected with a questionnaire designed by the research team. Data were analyzed using SPSS 21; LISREL was used for calculating and drawing the models and graphs. Among the factors investigated, knowledge storage, at 0.78, had the largest effect, and knowledge transfer, at 0.62, the smallest effect on knowledge management and thus on strategic thinking.

Keywords: knowledge management, strategic thinking, knowledge management processes, food industry

Procedia PDF Downloads 169
2343 A Lightweight Pretrained Encrypted Traffic Classification Method with Squeeze-and-Excitation Block and Sharpness-Aware Optimization

Authors: Zhiyan Meng, Dan Liu, Jintao Meng

Abstract:

Dependable encrypted traffic classification is crucial for improving cybersecurity and handling the growing amount of data. Large language models have shown that learning from large datasets can be effective, making pre-trained methods for encrypted traffic classification popular. However, attention-based pre-trained methods face two main issues: their large numbers of parameters are not suitable for low-computation environments such as mobile devices and real-time applications, and they often overfit by getting stuck in local minima. To address these issues, we developed a lightweight transformer model that reduces the parameter count through lightweight vocabulary construction and a Squeeze-and-Excitation block. We use sharpness-aware optimization to avoid local minima during pre-training and capture temporal features with relative positional embeddings. Our approach keeps the model's classification accuracy high for downstream tasks. We conducted experiments on four datasets: USTC-TFC2016, VPN 2016, Tor 2016, and CICIOT 2022. Even with fewer than 18 million parameters, our method achieves classification results similar to those of methods with ten times as many parameters.
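A Squeeze-and-Excitation block has a simple mechanism: squeeze each channel to one number by global average pooling, pass those through a small bottleneck (FC, ReLU, FC, sigmoid), and rescale each channel by its resulting gate. The plain-Python sketch below shows the mechanism on a tiny (channels x length) feature map; in the actual model the weights are learned, and the ones here are illustrative.

```python
import math

# Squeeze-and-Excitation block on a (channels x length) feature map.
# w1: (reduced x C) and w2: (C x reduced) are the excitation weights.

def se_block(features, w1, w2):
    # Squeeze: global average pooling per channel
    z = [sum(ch) / len(ch) for ch in features]
    # Excitation: FC -> ReLU -> FC -> sigmoid gives one gate per channel
    h = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    s = [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, h))))
         for row in w2]
    # Scale: reweight each channel by its gate
    return [[x * g for x in ch] for ch, g in zip(features, s)]

feats = [[1.0, 1.0], [2.0, 2.0]]  # 2 channels, length 2
w1 = [[1.0, 1.0]]                 # squeeze 2 channels -> 1 unit
w2 = [[1.0], [-1.0]]              # expand 1 unit -> 2 gates
gated = se_block(feats, w1, w2)
```

Because the bottleneck reduces C channels to C/r units and back, the block adds only about 2C²/r parameters, which is what makes it attractive for a lightweight model.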

Keywords: sharpness-aware optimization, encrypted traffic classification, squeeze-and-excitation block, pretrained model

Procedia PDF Downloads 28
2342 Histological Evaluation of the Neuroprotective Roles of Trans Cinnamaldehyde against High Fat Diet and Streptozotozin Induced Neurodegeneration in Wistar Rats

Authors: Samson Ehindero, Oluwole Akinola

Abstract:

Substantial evidence has shown an association between type 2 diabetes (T2D) and cognitive decline, and trans-cinnamaldehyde (TCA) has been shown to have many potent pharmacological properties. In the present study, we investigated the effects of TCA on type 2 diabetes-induced neurodegeneration. Neurodegeneration was induced in forty (40) adult Wistar rats using a high-fat diet (HFD) for 4 months followed by administration of a low dose of streptozotocin (STZ) (40 mg/kg, i.p.). TCA was administered orally for 30 days at doses of 40 mg/kg and 60 mg/kg body weight. Animals were randomized into the following groups: A, control; B, diabetic; C, TCA (high dose); D, diabetic + TCA (high dose); E, diabetic + TCA (high dose) with high-fat diet; F, TCA (low dose); G, diabetic + TCA (low dose); and H, diabetic + TCA (low dose) with high-fat diet. Animals were subjected to behavioral tests followed by histological studies of the hippocampus. Demented rats showed impaired behavior in the Y-maze test compared with the treated and control groups, and trans-cinnamaldehyde restored the histoarchitecture of the hippocampus of demented rats. The present study demonstrates that treatment with trans-cinnamaldehyde improves behavioral deficits and restores cellular histoarchitecture in rat models of neurodegeneration.

Keywords: neurodegeneration, trans cinnamaldehyde, high fat diet, streptozotocin

Procedia PDF Downloads 183
2341 Cytotoxicity of Nano β–Tricalcium Phosphate (β-TCP) on Human Osteoblast (hFOB1.19)

Authors: Jer Ping Ooi, Shah Rizal Bin Kasim, Nor Aini Saidin

Abstract:

The objective of this study was to synthesize nano-sized β-tricalcium phosphate (β-TCP) powder and assess its cytotoxic effects on human osteoblasts (hFOB1.19) using four cytotoxicity assays: lactate dehydrogenase (LDH), tetrazolium salt (XTT), neutral red (NR), and sulforhodamine B (SRB). β-TCP is a calcium phosphate compound commonly used as an implant material. To date, bulk-sized β-TCP has been reported to be readily tolerated by osteogenic cells and the body, based on in vitro and in vivo experiments and clinical studies. However, how nano-sized β-TCP behaves in such models compared with bulk β-TCP has yet to be investigated. In this project, the cells were treated with nano β-TCP powder at concentrations from 0 to 1000 μg/mL for 24, 48, and 72 h. The cytotoxicity tests showed a high loss of cell viability (>50%) for hFOB1.19 cells in all assays. Cell cycle and apoptosis analysis of hFOB1.19 cells revealed that 50 μg/mL of the compound left 30.5% of cells apoptotic after 72 h of incubation, and this percentage increased to 58.6% when the concentration was increased to 200 μg/mL. When the incubation time was increased from 24 to 72 h, the percentage of apoptotic cells increased from 17.3% to 58.6% in hFOB1.19 exposed to 200 μg/mL of nano β-TCP powder. Thus, both concentration and exposure duration affected the cytotoxic effects of the nano β-TCP powder on hFOB1.19. We hypothesize that these cytotoxic effects are related to the nano-scale size of the β-TCP.

Keywords: β-tricalcium phosphate, hFOB1.19, adipose-derived mesenchymal stem cells, cytotoxicity

Procedia PDF Downloads 313
2340 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework

Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim

Abstract:

Background modeling and subtraction in video analysis has been widely proven to be an effective method for moving object detection in many computer vision applications. Over the past years, a large number of approaches have been developed to tackle different types of challenges in this field; however, dynamic backgrounds and illumination variations are two of the most frequently occurring issues in practice. This paper presents a new two-layer model based on the codebook algorithm incorporating the local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is a block-based codebook combining an LBP histogram with the mean values of the RGB color channels. Because LBP features are invariant to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance to illumination variations. A pixel-based codebook is then employed to refine the outputs of the first layer, further eliminating false positives. As a result, the proposed approach greatly improves accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
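The LBP texture measure at the heart of the first layer compares each of a pixel's eight neighbours against the centre value and packs the comparison bits into one code; because only the order of gray values matters, the code is unchanged by monotonic illumination shifts. A minimal sketch (the neighbour ordering is one common convention, assumed here):

```python
# 8-neighbour local binary pattern on a 3x3 patch. Each neighbour is
# thresholded against the centre pixel; the resulting bits form a code
# that is invariant to monotonic gray-scale changes.

def lbp_code(patch):
    """patch: 3x3 list of gray values; returns the 8-bit LBP code,
    reading neighbours clockwise from the top-left corner."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)
# Brightening the whole patch uniformly leaves the code unchanged,
# which is the illumination tolerance the first layer relies on.
same = lbp_code([[v + 10 for v in row] for row in patch]) == code
```

A block's LBP histogram is then just the distribution of such codes over all pixels in the block, stored in the codebook alongside the RGB channel means.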

Keywords: background subtraction, codebook model, local binary pattern, dynamic background, illumination change

Procedia PDF Downloads 216
2339 Encouraging Teachers to be Reflective: Advantages, Obstacles and Limitations

Authors: Fazilet Alachaher

Abstract:

Within the constructivist perspective of teaching, which views skilled teaching as knowing what to do in uncertain and unpredictable situations, this research essay explores the topic of reflective teaching by investigating the following questions: (1) What is reflective teaching and why is it important? (2) Why should teachers be trained to be reflective and how can they be prepared to be reflective? (3) What is the role of the teaching context in teachers’ attempts to be reflective? This paper suggests that reflective teaching is important because of its various potential benefits to teaching. Through reflection, teachers can maintain their voice and creativity and thus have the authority to influence students, curriculum, and school policies. The discussion also highlights the need to prepare student teachers and their professional counterparts to be reflective, so they can develop the characteristics of reflective teaching and gain the potential benefits of reflection. This can be achieved by adopting models and techniques that are based on constructivist pedagogical approaches. The paper also suggests that sustaining teachers’ attempts to be reflective in a workplace context, and aligning practice with pre-service teacher education programs, requires administrators or policy makers to provide the following: sufficient time for teachers to reflect and work collaboratively to discuss challenges encountered in teaching, fewer non-classroom duties, regular in-service opportunities, and more facilities and freedom in choosing suitable ways of evaluating their students’ progress and needs.

Keywords: creative teaching, reflective teaching, constructivist pedagogical approaches, teaching context, teacher’s role, curriculum and school policies, teaching context effect

Procedia PDF Downloads 445
2338 Signal Integrity Performance Analysis in Capacitive and Inductively Coupled Very Large Scale Integration Interconnect Models

Authors: Mudavath Raju, Bhaskar Gugulothu, B. Rajendra Naik

Abstract:

Rapid advances in Very Large Scale Integration (VLSI) technology have reduced the minimum feature size to sub-quarter microns and switching times to tens of picoseconds or even less. As a result, high-speed digital circuits degrade due to signal integrity issues such as coupling effects, clock feedthrough, crosstalk noise, and delay uncertainty noise. Crosstalk noise in VLSI interconnects is a major concern, and its reduction has become more important for high-speed digital circuits; it is most significant in Deep Sub Micron (DSM) and Ultra Deep Sub Micron (UDSM) technologies. Increasing the spacing between the aggressor and victim lines is one technique to reduce crosstalk. Guard trace or shield insertion between the aggressor and victim is another prominent option for minimizing crosstalk. In this paper, far-end crosstalk noise is estimated with an RLC interconnect model including mutual inductance and capacitance. The extent of crosstalk in capacitively and inductively coupled interconnects is also investigated and minimized through the shield insertion technique.
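A first-order, textbook-style estimate illustrates why capacitive and inductive coupling act with opposite signs at the far end (this is a generic transmission-line approximation, not the RLC model analyzed in the paper; all parameter values below are placeholders):

```python
def fext_coefficient(cm, c, lm, l):
    """First-order far-end coupling coefficient for a pair of coupled lines:
    capacitive coupling (cm/c) and inductive coupling (lm/l) cancel
    partially at the far end."""
    return 0.5 * (cm / c - lm / l)

def fext_voltage(v_aggressor, coupled_length, prop_delay_per_m, rise_time,
                 cm, c, lm, l):
    """Estimate the far-end crosstalk amplitude for a step of the given rise
    time travelling along the coupled length (first-order approximation)."""
    scale = coupled_length * prop_delay_per_m / rise_time
    return fext_coefficient(cm, c, lm, l) * scale * v_aggressor
```

The sketch makes the two design levers in the abstract visible: increasing spacing lowers cm and lm, and a homogeneous medium (cm/c equal to lm/l) drives the far-end coefficient toward zero.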

Keywords: VLSI, interconnects, signal integrity, crosstalk, shield insertion, guard trace, deep sub micron

Procedia PDF Downloads 184
2337 The Imagined Scientific Drawing as a Representative of the Content Provided by Emotions to Scientific Rationality

Authors: Dení Stincer Gómez, Zuraya Monroy Nasr

Abstract:

From the epistemology of emotions, one of the topics of current reflection is the function that emotions fulfill in the rational processes involved in scientific activity. So far, three functions have been assigned to them: selective, heuristic, and carriers of content. In this last function, it is argued that emotions, like our perceptual organs, contribute relevant content to reasoning, which is then converted into linguistic statements or graphic representations. In this paper, of a qualitative and philosophical nature, arguments are provided for two hypotheses: (1) if emotions provide content to the mind, which then translates it into language or representations, then it is important to take up the idea of the Saussurean linguistic sign to understand this process. This sign has two elements: the signified and the signifier. Emotions would provide meanings, and reasoning creates the signifier; and (2) the meanings provided by emotions are properties and qualities of phenomena generally not accessible to the sense organs. These meanings must be imagined, and the imagination is nurtured by the feeling that "maybe this is the way." One way to access the content provided by emotions can be through imagined scientific drawings. The atomic models created since Thomson, the crystal structures of René Just Haüy, the representations of lunar eclipses by Johannes, fractal geometry, and the structure of DNA, among others, have resulted fundamentally from the imagination. These representations, not provided by the sense organs, seem to come from the emotional involvement of scientists in their desire to understand, explain, and discover.

Keywords: emotions, epistemic functions of emotions, scientific drawing, linguistic sign

Procedia PDF Downloads 69
2336 The Interplay of Dietary Fibers and Intestinal Microbiota Affects Type 2 Diabetes by Generating Short-Chain Fatty Acids

Authors: Muhammad Mazhar, Yong Zhu, Likang Qin

Abstract:

Foods contain endogenous components known as dietary fibers, which are classified into soluble and insoluble forms. Dietary fibers are resistant to gut digestive enzymes; they modulate the anaerobic intestinal microbiota (AIM), which ferments them into short-chain fatty acids (SCFAs). Acetate, butyrate, and propionate dominate in the gut, and different pathways, including the Wood-Ljungdahl and acrylate pathways, generate these SCFAs. In pancreatic dysfunction, the release of insulin/glucagon is impaired, which leads to hyperglycemia. SCFAs enhance insulin sensitivity and secretion, beta-cell function, leptin release, mitochondrial function, and intestinal gluconeogenesis, all of which benefit type 2 diabetes (T2D). Research models have shown that SCFAs either enhance the release of peptide YY (PYY) and glucagon-like peptide-1 (GLP-1) from entero-endocrine L-cells or promote the release of the satiety hormone leptin from adipose tissue through the G-protein-coupled receptors GPR-41/GPR-43. Dietary fibers are the food components that influence the AIM and drive SCFA production, which may offer beneficial effects against T2D. This review addresses the effectiveness of SCFAs generated by AIM fermentation of dietary fiber and their value against T2D.

Keywords: dietary fibers, intestinal microbiota, short-chain fatty acids, fermentation, type 2 diabetes

Procedia PDF Downloads 71
2335 Regression of Hand Kinematics from Surface Electromyography Data Using a Long Short-Term Memory-Transformer Model

Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh

Abstract:

Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints in the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data is converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
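The spike-encoding step described above relies on leaky integrate-and-fire dynamics. A minimal LIF neuron driven by a rectified sEMG envelope can be sketched as follows (the time constant, threshold, and step size are illustrative values, not those used in the study):

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-V + I) / tau.
    Emits a spike and resets the membrane potential whenever V crosses v_th.
    `current` is the (rectified, non-negative) input drive per time step."""
    v = 0.0
    spikes = np.zeros(len(current), dtype=int)
    for t, i_t in enumerate(current):
        v += dt * (-v + i_t) / tau  # forward-Euler leaky integration
        if v >= v_th:
            spikes[t] = 1
            v = v_reset
    return spikes
```

With these dynamics, sub-threshold drive produces no spikes, and stronger muscle activation maps to a higher firing rate, which is the property the neuromorphic-inspired encoding exploits.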

Keywords: surface electromyography, LSTM-transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding

Procedia PDF Downloads 2
2334 A Longitudinal Study of Psychological Capital, Parent-Child Relationships, and Subjective Well-Beings in Economically Disadvantaged Adolescents

Authors: Chang Li-Yu

Abstract:

Purposes: The present research explores the latent growth model of psychological capital in economically disadvantaged adolescents and assesses its relationship with subjective well-being. Methods: A longitudinal design was used with data from the Taiwan Database of Children and Youth in Poverty (TDCYP), using the student questionnaires from 2009, 2011, and 2013. Data analysis was conducted using both univariate and multivariate latent growth curve models. Results: This study finds that: (1) the initial states and growth rates of individual factors such as parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents have a predictive impact; (2) there are positive interactive effects in the development of parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents; and (3) the initial states and growth rates of parent-child relationships and psychological capital in economically disadvantaged adolescents positively affect the initial state and growth rate of their subjective well-being. Recommendations: Based on these findings, this study discusses the significance of psychological capital and family cohesion for the mental health of economically disadvantaged youth and offers suggestions for counseling, psychological therapy, and future research.
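A full latent growth curve model is normally estimated in a structural equation framework; as a minimal two-stage approximation of the same idea (illustrative only, not the study's analysis), each adolescent's "initial state" and "growth rate" across the three waves, coded 0, 1, 2 for 2009, 2011, and 2013, can be recovered by per-person least squares:

```python
import numpy as np

def growth_parameters(scores, times):
    """Per-person intercept (initial state) and slope (growth rate) via OLS,
    a two-stage stand-in for the individual factors of a latent growth
    curve model. `scores` are one adolescent's repeated measures."""
    X = np.column_stack([np.ones(len(times)), times])  # design: [1, t]
    params, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)
    return params  # [intercept, slope]
```

Correlating the per-person slopes of, say, psychological capital with the slopes of subjective well-being then mirrors the cross-factor growth relationships reported in the abstract.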

Keywords: economically disadvantaged adolescents, psychological capital, parent-child relationships, subjective well-beings

Procedia PDF Downloads 56
2333 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming

Authors: William Chung

Abstract:

This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem, until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends heavily on the number of decomposition steps; if that number can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals sent from the subproblem to the master problem. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved, providing multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
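The proposal-generation step can be sketched for a toy case in which the subproblem's extreme points are enumerable: given the master's dual prices, rank the extreme points by reduced cost and return several of them at once. This is an illustrative simplification of sending multiple proposals per iteration, not the paper's approximate-subproblem algorithm:

```python
import numpy as np

def best_proposals(extreme_points, c, duals, A, k=2):
    """Subproblem step of Dantzig-Wolfe for a toy polytope: score each
    extreme point x by its reduced cost (c - duals^T A) x and return the
    k cheapest as proposals (columns) for the master problem. Returning
    k > 1 proposals per iteration mimics sending multiple columns at once."""
    reduced = np.asarray(c) - np.asarray(duals) @ np.asarray(A)
    costs = [reduced @ np.asarray(x) for x in extreme_points]
    order = np.argsort(costs)
    return [extreme_points[i] for i in order[:k]]
```

In a real implementation the extreme points are of course not enumerated; each (approximate) subproblem is optimized to produce its proposal, and the tradeoff is between that extra per-step work and the reduced number of master iterations.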

Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems

Procedia PDF Downloads 164
2332 Does Citizens’ Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement

Authors: Svetlana Avdasheva, Polina Kryuchkova

Abstract:

The comparative social efficiency of private and public enforcement of law is debated. This question is not only of academic interest; it is also important for the development of the legal system and regulations. Generally, the involvement of ‘common citizens’ in public law enforcement is considered beneficial, while the involvement of interest-group representatives is not. Institutional economics, as well as law and economics, considers the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by incentives linked to social welfare (or another indicator of public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However, administrative law enforcement may be designed in such a way that it becomes driven mainly by the individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer reactive public enforcement even if private enforcement is available. However, replacement of public enforcement by its reactive version negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs. pure public and private vs. reactive public enforcement models with examples from three subsystems of legislation in Russia: labor law, consumer protection law, and competition law. While developing private enforcement instead of public enforcement (especially instead of the reactive public model) is desirable, replacing both public and private enforcement with the reactive model is definitely not.

Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, Russia

Procedia PDF Downloads 494
2331 Detecting Music Enjoyment Level Using Electroencephalogram Signals and Machine Learning Techniques

Authors: Raymond Feng, Shadi Ghiasi

Abstract:

An electroencephalogram (EEG) is a non-invasive technique that records electrical activity in the brain using scalp electrodes. Researchers have studied the use of EEG to detect emotions and moods by collecting signals from participants and analyzing how those signals correlate with their activities. In this study, researchers investigated the relationship between EEG signals and music enjoyment. Participants listened to music while data was collected. During the signal-processing phase, power spectral densities (PSDs) were computed from the signals, and dominant brainwave frequencies were extracted from the PSDs to form a comprehensive feature matrix. A machine learning approach was then taken to find correlations between the processed data and the music enjoyment level indicated by the participants. To improve on previous research, multiple machine learning models were employed, including a K-Nearest Neighbors classifier, a Support Vector classifier, and a Decision Tree classifier. Hyperparameter tuning was used to further increase each model's performance. The experiments showed that a strong correlation exists, with the tuned Decision Tree classifier yielding 85% accuracy. This study suggests that EEG is a reliable means to detect music enjoyment and has future applications, including personalized music recommendation, mood adjustment, and mental health therapy.
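The PSD feature-extraction step can be sketched with a simple periodogram and the canonical EEG bands (the band edges and normalization below are conventional illustrative choices; the study's exact pipeline may differ):

```python
import numpy as np

EEG_BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs, bands=EEG_BANDS):
    """Periodogram-based band powers, a simple stand-in for the PSD features.
    Returns total power in each named frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}
```

Stacking each channel's band powers (and the dominant frequency, the argmax of the PSD) across participants yields the kind of feature matrix the classifiers are trained on.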

Keywords: EEG, electroencephalogram, machine learning, mood, music enjoyment, physiological signals

Procedia PDF Downloads 60
2330 Design and Fabrication of a Parabolic Trough Collector and Experimental Investigation of Direct Steam Production in Tehran

Authors: M. Bidi, H. Akhbari, S. Eslami, A. Bakhtiari

Abstract:

Due to the high potential of solar energy utilization in Iran, the development of related technologies is of great necessity. Linear parabolic collectors are among the most common and most efficient means of harnessing solar energy. The main goal of this paper is the design and construction of a parabolic trough collector to produce hot water and steam in Tehran. To provide precise and practical plans, 3D models of the collector were developed using SolidWorks software. The collector was designed so that its tilt angle can be adjusted manually. To increase the concentration ratio, a small-diameter absorber tube was selected, and to enhance solar absorption, a U-tube shape was used. One of the outstanding properties of this collector is its simple design and the use of low-cost metal and plastic materials in its manufacture. The collector was installed at Shahid Beheshti University in Tehran, and the solar irradiation, ambient temperature, wind speed, and collector steam production rate were measured on different days and at different hours in July. Results revealed that a 1×2 m parabolic trough collector located in Tehran is able to produce steam at a rate of 300 ml/s at atmospheric pressure without using a vacuum cover over the absorber tube.
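The order of magnitude of the reported steam rate can be checked with a simple energy balance (the collector efficiency, irradiance, and water properties below are assumed values for illustration, not measurements from the paper):

```python
def steam_rate_ml_per_s(irradiance, aperture_area, efficiency,
                        t_in=25.0, t_boil=100.0,
                        cp=4186.0, h_fg=2.26e6, rho_steam=0.598):
    """Energy balance for direct steam generation: useful heat supplies the
    sensible heating of water to boiling plus the latent heat of
    vaporization. Returns the volumetric steam rate (ml/s) at 1 atm."""
    q_useful = efficiency * irradiance * aperture_area     # W
    e_per_kg = cp * (t_boil - t_in) + h_fg                 # J/kg of steam
    mass_rate = q_useful / e_per_kg                        # kg/s
    return mass_rate / rho_steam * 1e6                     # ml/s of steam
```

With, say, 900 W/m² over the 2 m² aperture and an assumed 30% efficiency, the balance gives a steam volume rate in the low hundreds of ml/s, consistent in magnitude with the reported 300 ml/s if that figure refers to steam volume at atmospheric pressure.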

Keywords: desalination, parabolic trough collector, direct steam production, solar water heater, design and construction

Procedia PDF Downloads 309
2329 Graph Neural Networks and Rotary Position Embedding for Voice Activity Detection

Authors: YingWei Tan, XueFeng Ding

Abstract:

Attention-based voice activity detection models have gained significant attention in recent years due to their fast training speed and ability to capture a wide contextual range. The inclusion of multiple heads and position embedding in the attention architecture is crucial: having multiple attention heads allows differential focus on different parts of the sequence, while position embedding provides guidance for modeling dependencies between elements at various positions in the input sequence. In this work, we propose an approach that treats each head as a node, enabling the application of graph neural networks (GNNs) to identify correlations among the different nodes. In addition, we adopt rotary position embedding (RoPE), which encodes absolute positional information into the input sequence via a rotation matrix and naturally incorporates explicit relative position information into the self-attention module. We evaluate the effectiveness of our method on a synthetic dataset, and the results demonstrate its superiority over the baseline CRNN in low signal-to-noise-ratio scenarios, while also exhibiting robustness across different noise types. In summary, the proposed framework effectively combines the strengths of CNNs and RNNs (LSTM) and further enhances detection performance through the integration of graph neural networks and rotary position embedding.
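RoPE's defining property — absolute rotations whose inner products depend only on the relative offset between positions — can be sketched in a few lines (a generic NumPy sketch of the standard formulation, not the paper's implementation):

```python
import numpy as np

def rope(x, base=10000.0):
    """Rotary position embedding: rotate each pair of feature dimensions
    (x[i], x[i + dim//2]) of position m by the angle m * theta_i.
    `x` has shape (seq_len, dim) with dim even."""
    seq_len, dim = x.shape
    half = dim // 2
    theta = base ** (-np.arange(half) / half)                # per-pair frequency
    angles = np.arange(seq_len)[:, None] * theta[None, :]    # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=1)
```

Because each pair is rotated by a position-proportional angle, the dot product between a rotated query at position m and a rotated key at position n depends only on m − n, which is how RoPE injects explicit relative position information into self-attention.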

Keywords: voice activity detection, CRNN, graph neural networks, rotary position embedding

Procedia PDF Downloads 70
2328 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI

Authors: Zahra Alipour, Amirreza Moheb Afzali

Abstract:

In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
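The mAP figure above is evaluated at an IoU threshold of 0.7. For reference, the intersection-over-union between a predicted and a ground-truth box is computed as follows (a standard definition, shown here for context rather than taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over union for axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as a true positive at this threshold only when its IoU with a ground-truth hand box is at least 0.7, which is what makes the reported 97.3% mAP a fairly strict localization result.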

Keywords: YOLOv8, mediapipe, finger tracking, joint estimation, human-computer interaction (HCI)

Procedia PDF Downloads 3