Search results for: forecast accuracy unemployment rate
10895 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of improving segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources and thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which dynamically captures multi-scale contextual information and effectively enhances the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks that is expected to further advance the development of this field. Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
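The abstract does not give the internals of the Guided Image Reconstruction Module; the minimal PyTorch sketch below only illustrates the resampling idea it describes — converting one high-resolution image into a set of low-resolution images via space-to-depth (pixel unshuffle) — with all tensor sizes chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class GuidedResample(nn.Module):
    """Toy sketch: resample a high-resolution image into a set of low-resolution
    images via space-to-depth (pixel unshuffle); not the paper's actual module."""
    def __init__(self, ratio: int = 2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(ratio)  # (C, H, W) -> (C*r*r, H/r, W/r)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.unshuffle(x)

x = torch.randn(1, 3, 512, 1024)           # dummy road-scene image
low_res_set = GuidedResample(ratio=2)(x)   # -> (1, 12, 256, 512): four 3-channel sub-images
print(low_res_set.shape)
```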
Procedia PDF Downloads 21
10894 The Development of GPS Buoy for Ocean Surface Monitoring: Initial Results
Authors: Anuar Mohd Salleh, Mohd Effendi Daud
Abstract:
This study presents a kinematic positioning approach that uses a GPS buoy for precise ocean surface monitoring. GPS buoy data from two experiments were processed using a precise, medium-range differential kinematic technique. In each case the data were collected for more than 24 hours at a nearby coastal site at a high rate (1 Hz), along with measurements from neighboring tidal stations, to verify the estimated sea surface heights. Kinematic coordinates of the GPS buoy were estimated using the epoch-wise pre-elimination and backward substitution algorithm. Test results show that centimeter-level accuracy in sea surface height determination can be achieved using the proposed technique. The centimeter-level agreement between the two methods also suggests the possibility of using this inexpensive and more flexible GPS buoy equipment to enhance (or even replace) the current use of tide gauge stations. Keywords: global positioning system, kinematic GPS, sea surface height, GPS buoy, tide gauge
Procedia PDF Downloads 542
10893 Achievable Average Secrecy Rates over Bank of Parallel Independent Fading Channels with Friendly Jamming
Authors: Munnujahan Ara
Abstract:
In this paper, we investigate the effect of friendly jamming power allocation strategies on the achievable average secrecy rate over a bank of parallel fading wiretap channels. We investigate the achievable average secrecy rate in parallel fading wiretap channels subject to Rayleigh and Rician fading. The achievable average secrecy rate, due to the presence of a line-of-sight component in the jammer channel is also evaluated. Moreover, we study the detrimental effect of correlation across the parallel sub-channels, and evaluate the corresponding decrease in the achievable average secrecy rate for the various fading configurations. We also investigate the tradeoff between the transmission power and the jamming power for a fixed total power budget. Our results, which are applicable to current orthogonal frequency division multiplexing (OFDM) communications systems, shed further light on the achievable average secrecy rates over a bank of parallel fading channels in the presence of friendly jammers.Keywords: fading parallel channels, wire-tap channel, OFDM, secrecy capacity, power allocation
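The exact channel model and power allocation scheme are not given in the abstract; the following Python sketch is a generic Monte Carlo estimate of the average secrecy rate over parallel Rayleigh-fading sub-channels, assuming the friendly jamming degrades only the eavesdropper and the total power budget is split between transmission and jamming. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_sub = 100_000, 8          # Monte Carlo samples, parallel sub-channels
P_total, noise = 1.0, 0.1      # total power budget and noise power

def avg_secrecy_rate(alpha):
    """Average secrecy rate when a fraction alpha of the budget is spent on data
    transmission and (1 - alpha) on friendly jamming (assumed to hit only the eavesdropper)."""
    P_tx = alpha * P_total / n_sub
    P_jam = (1 - alpha) * P_total / n_sub
    h_b = rng.exponential(1.0, (N, n_sub))   # |h|^2 on the main (Rayleigh) channel
    h_e = rng.exponential(1.0, (N, n_sub))   # |h|^2 on the eavesdropper channel
    h_j = rng.exponential(1.0, (N, n_sub))   # |h|^2 on the jammer-to-eavesdropper channel
    rate_b = np.log2(1 + P_tx * h_b / noise)
    rate_e = np.log2(1 + P_tx * h_e / (noise + P_jam * h_j))
    return np.maximum(rate_b - rate_e, 0.0).sum(axis=1).mean()

for alpha in (0.5, 0.7, 0.9, 1.0):
    print(f"alpha={alpha:.1f}: {avg_secrecy_rate(alpha):.3f} bits/s/Hz")
```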
Procedia PDF Downloads 510
10892 Efficient Bargaining versus Right to Manage in the Era of Liberalization
Authors: Panagiota Koliousi, Natasha Miaouli
Abstract:
We compare product and labour market liberalization under two trade union bargaining models: the Right-to-Manage (RTM) model and the Efficient Bargaining (EB) model. The vehicle is a dynamic general equilibrium (DGE) model that incorporates two types of agents (capitalists and workers) and imperfectly competitive product and labour markets. The model is solved numerically employing common parameter values and data from the euro area. A key message is that product market deregulation is favourable under any labour market structure, whereas when opting for labour market deregulation one should pay special attention to the structure of the labour market, such as the bargaining system of unions. If the prevailing way of bargaining is the RTM model, then restructuring both markets is beneficial for all agents. Keywords: market structure, structural reforms, trade unions, unemployment
Procedia PDF Downloads 195
10891 Effect of Time and Rate of Nitrogen Application on the Malting Quality of Barley Yield in Sandy Soil
Authors: A. S. Talaab, Safaa, A. Mahmoud, Hanan S. Siam
Abstract:
A field experiment was conducted during the winter season of 2013/2014 in the barley production area of Dakhala, New Valley Governorate, Egypt, to assess the effect of nitrogen rate and time of N fertilizer application on barley grain yield, yield components and N use efficiency, and their association with grain yield. The treatments consisted of three levels of nitrogen (0, 70 and 100 kg N/acre) and five application times. The experiment was laid out as a randomized complete block design with three replications. Results revealed that barley grain yield and yield components increased significantly in response to N rate. Splitting the N fertilizer amount over several times had a significant effect on grain yield, yield components, protein content and N uptake efficiency compared with applying the entire N amount at once. Application of N at a rate of 100 kg N/acre resulted in accumulation of nitrate in the subsurface soil (> 30 cm). When N application timing was considered, less NO3 was found in the soil profile with split N application compared with all-preplant application. Keywords: nitrogen use efficiency, splitting N fertilizer, barley, NO3
Procedia PDF Downloads 311
10890 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution could indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in predicting patients with foot disorders. Significance of the study: This study proposes a neural network learning-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was evaluated, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy. Keywords: foot disorder, machine learning, neural network, pes planus
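The paper's CNN architecture is not specified in the abstract; the sketch below is a minimal PyTorch binary classifier for single-channel plantar-pressure maps, with placeholder layer sizes and input resolution rather than the study's actual configuration.

```python
import torch
import torch.nn as nn

class PesPlanusCNN(nn.Module):
    """Minimal binary classifier for single-channel plantar-pressure maps;
    layer sizes are illustrative, not those used in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        return self.classifier(self.features(x))   # raw logit; apply sigmoid for a probability

model = PesPlanusCNN()
dummy = torch.randn(8, 1, 64, 64)                  # batch of 64x64 pressure maps
loss = nn.BCEWithLogitsLoss()(model(dummy), torch.randint(0, 2, (8, 1)).float())
print(loss.item())
```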
Procedia PDF Downloads 358
10889 Does Inflation Affect Private Investment in Nigeria?
Authors: Amassoma Ditimi, Nwosa Philip Ifeakachukwu
Abstract:
This study examined the impact of inflation on private investment in Nigeria for the period 1980 to 2012. Private investment was measured by foreign direct investment and private domestic investment. The study employed the Ordinary Least Squares (OLS) technique. The empirical regression estimate showed that inflation had a positive but insignificant effect on private investment in Nigeria, implying that although an increase in the inflation rate leads to a corresponding increase in private investment, the effect was found to be insignificant. Thus, the study recommended that the government should prevent the high inflation rates that can negatively affect private investment in Nigeria and should also put in place appropriate investment-enhancing facilities in order to increase the level of both domestic and foreign private investment in Nigeria. Keywords: inflation rate, private investment, OLS, Nigeria
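As a worked illustration of the estimation technique named above, the following Python sketch fits an OLS regression of private investment on the inflation rate with statsmodels; the series are synthetic stand-ins, not the study's 1980–2012 data.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative only: synthetic series standing in for the 1980-2012 data,
# which the abstract does not reproduce.
rng = np.random.default_rng(1)
inflation = rng.uniform(5, 40, 33)                            # annual inflation rate (%)
investment = 2.0 + 0.01 * inflation + rng.normal(0, 1, 33)    # private investment proxy

X = sm.add_constant(inflation)        # regressors: constant + inflation
ols = sm.OLS(investment, X).fit()
print(ols.summary())                  # sign and p-value of the slope mirror the kind of test reported
```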
Procedia PDF Downloads 370
10888 Design and Simulation of Unified Power Quality Conditioner based on Adaptive Fuzzy PI Controller
Authors: Brahim Ferdi, Samira Dib
Abstract:
The unified power quality conditioner (UPQC), a combination of shunt and series active power filter, is one of the best solutions towards the mitigation of voltage and current harmonics problems in distribution power system. PI controller is very common in the control of UPQC. However, one disadvantage of this conventional controller is the difficulty in tuning its gains (Kp and Ki). To overcome this problem, an adaptive fuzzy logic PI controller is proposed. The controller is composed of fuzzy controller and PI controller. According to the error and error rate of the control system and fuzzy control rules, the fuzzy controller can online adjust the two gains of the PI controller to get better performance of UPQC. Simulations using MATLAB/SIMULINK are carried out to verify the performance of the proposed controller. The results show that the proposed controller has fast dynamic response and high accuracy of tracking the current and voltage references.Keywords: adaptive fuzzy PI controller, current harmonics, PI controller, voltage harmonics, UPQC
Procedia PDF Downloads 555
10887 An Automated R-Peak Detection Method Using Common Vector Approach
Authors: Ali Kirkbas
Abstract:
R-peaks in an electrocardiogram (ECG) are signs of cardiac activity that reveal valuable information about cardiac abnormalities, which can lead to mortality in some cases. This paper examines the problem of detecting R-peaks in ECG signals, which is in fact a two-class pattern classification problem. To handle this problem with reliably high accuracy, we propose to use the common vector approach, a successful machine learning algorithm. The dataset used in the proposed method is obtained from the publicly available MIT-BIH database. The results are compared with other popular methods under standard performance metrics. The obtained results show that the proposed method performs better than the compared methods in terms of diagnostic accuracy and simplicity, and it can be operated on wearable devices. Keywords: ECG, R-peak classification, common vector approach, machine learning
Procedia PDF Downloads 62
10886 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction
Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto
Abstract:
Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography, and PCI. Introduction: The audit tool created is used by data abstractors to provide data audits and assess the accuracy and inter-rater reliability of abstraction performed by the abstractors for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS, and Get with the Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction for all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and PCI data elements for the ACC/NCDR PCI registries. It is being used internally across 20 hospital systems, providing abstraction and audit services for them. Results: The data audit tool achieved inter-rater reliability and data accuracy greater than 95% for the PCI registry in 50 PCI registry cases in 2021. Conclusion: The tool is being used internally for surgical societies and across hospital systems. The audit tool enables the abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry. Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data
Procedia PDF Downloads 104
10885 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration further extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence. Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
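As a minimal illustration of the direct-inversion idea behind the DDM (not the paper's full SFS-stress reconstruction), the sketch below filters a 1D field with an invertible Gaussian filter in spectral space and then recovers it by dividing by the filter transfer function; the field, filter width, and grid are arbitrary choices.

```python
import numpy as np

# 1D sketch of direct deconvolution with an invertible Gaussian filter.
n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8 * x) + 0.2 * np.sin(20 * x)   # "unfiltered" field

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi                    # angular wavenumbers
delta = 4 * L / n                                             # filter width (FGR-like ratio of 4)
G = np.exp(-(k * delta) ** 2 / 24.0)                          # Gaussian filter transfer function

u_filtered = np.fft.ifft(G * np.fft.fft(u)).real              # filtering step
u_recovered = np.fft.ifft(np.fft.fft(u_filtered) / np.maximum(G, 1e-8)).real  # direct inversion

print("max reconstruction error:", np.abs(u_recovered - u).max())
```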
Procedia PDF Downloads 75
10884 Classifying Affective States in Virtual Reality Environments Using Physiological Signals
Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley
Abstract:
Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging as individuals have varying perceptions of their environments. Therefore, it is crucial that there are publicly available databases and virtual reality (VR) based environments that have been scientifically validated for assessing emotional classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie’s Plank Experience™) to induce acute stress and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. Participants’ subjective measures included the Self-Assessment Manikin, emotional cards, and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. Participants’ objective measures included electrocardiogram and respiration data collected from 25 participants (15 M, 10 F, mean = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states; for classifying the affective states, a logistic regression (LR) and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM and achieved 93.8%, 96.2%, and 93.8% leave-one-subject-out cross-validation accuracy, precision, and recall, respectively. The VR assessment tool and data collected in this study are publicly available for other researchers. Keywords: affective computing, biosignals, machine learning, stress database
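The feature pipeline is summarized only at a high level above; the following scikit-learn sketch shows the general shape of a leave-one-subject-out evaluation of a logistic regression on per-window physiological features, using synthetic data in place of the ECG/respiration recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: 25 participants, 40 feature windows each, 4 features per
# window (e.g. HRV components and respiration rate); each participant contributes
# windows from both the calm (0) and acute-stress (1) conditions.
rng = np.random.default_rng(2)
n_subjects, n_windows = 25, 40
y = np.tile(np.repeat([0, 1], n_windows // 2), n_subjects)
X = rng.normal(size=(n_subjects * n_windows, 4)) + 0.8 * y[:, None]
groups = np.repeat(np.arange(n_subjects), n_windows)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut(), scoring="accuracy")
print(f"leave-one-subject-out accuracy: {scores.mean():.3f}")
```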
Procedia PDF Downloads 140
10883 The Impact of Globalization on the Development of Israel Advanced Changes
Authors: Erez Cohen
Abstract:
The study examines the socioeconomic impact of the development of advanced industry in Israel. The research method is based on data collected from the Israel Central Bureau of Statistics and from the National Insurance Institute (NII) databases, which provided information that allows the economic and social changes during the 1990s to be examined. The research findings indicate that as a result of globalization processes, the weight of traditional industry began to diminish owing to factory closures and the laying off of workers. These circumstances led to growing unemployment among the weaker groups in Israeli society, detracting from their income and thus increasing inequality among different socioeconomic groups in Israel and widening social disparities. Keywords: globalization, Israeli advanced industry, public policy, socio-economic indicators
Procedia PDF Downloads 163
10882 Musical Instrument Recognition in Polyphonic Audio Through Convolutional Neural Networks and Spectrograms
Authors: Rujia Chen, Akbar Ghobakhlou, Ajit Narayanan
Abstract:
This study investigates the task of identifying musical instruments in polyphonic compositions using Convolutional Neural Networks (CNNs) from spectrogram inputs, focusing on binary classification. The model showed promising results, with an accuracy of 97% on solo instrument recognition. When applied to polyphonic combinations of 1 to 10 instruments, the overall accuracy was 64%, reflecting the increasing challenge with larger ensembles. These findings contribute to the field of Music Information Retrieval (MIR) by highlighting the potential and limitations of current approaches in handling complex musical arrangements. Future work aims to include a broader range of musical sounds, including electronic and synthetic sounds, to improve the model's robustness and applicability in real-time MIR systems.Keywords: binary classifier, CNN, spectrogram, instrument
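A typical front end for this kind of model converts each audio clip into a log-mel spectrogram that the CNN treats as an image; the librosa-based sketch below shows that step on a synthetic tone. The paper's exact spectrogram settings are not given, so the parameter values here are assumptions.

```python
import numpy as np
import librosa

# Turn an audio clip into the log-mel spectrogram "image" that a per-instrument
# binary CNN classifier would consume (a synthetic 440 Hz tone stands in for a clip).
sr = 22050
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)

mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128, hop_length=512)
log_mel = librosa.power_to_db(mel, ref=np.max)    # (128, frames) input "image"
print(log_mel.shape)
```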
Procedia PDF Downloads 76
10881 Risk Analysis of Flood Physical Vulnerability in Residential Areas of Mathare Nairobi, Kenya
Authors: James Kinyua Gitonga, Toshio Fujimi
Abstract:
Vulnerability assessment and analysis are essential for determining the degree of damage and loss resulting from natural disasters. Urban flooding causes major economic losses and casualties in the Mathare residential area of Nairobi, Kenya. A high population caused by rural-urban migration, unemployment, and unplanned urban development are among the factors that increase flood vulnerability in the Mathare area. This study aims to analyse physical flood risk vulnerabilities in Mathare based on scientific data; rainfall data, Mathare River discharge rate data, water runoff data, field survey data, and a questionnaire survey administered through sampling of the study area were used to develop the risk curves. Three structural types of building were identified in the study area, and vulnerability and risk curves were constructed for these three structural types by plotting the relationship between flood depth and damage for each type. The results indicate that the structural type with mud walls and mud floors is the most vulnerable to flooding, while the structural type with stone walls and concrete floors is the least vulnerable. The vulnerability of building contents is mainly determined by the number of floors, where households with two floors are least vulnerable and households with one floor are most vulnerable. Therefore, more than 80% of the residential buildings, including the property inside them, are highly vulnerable to floods and consequently exposed to high risk. When estimating the potential casualties/injuries, we found that the structural type of house was a major determinant: the mud/adobe structural type had casualties of 83.7%, while the masonry structural type had casualties of 10.71% of the people living in these houses. This research concludes that flood awareness, warnings, and observance of building codes will help reduce damage to the building structures, deaths, and damage to building contents. Keywords: flood loss, Mathare Nairobi, risk curve analysis, vulnerability
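A vulnerability (risk) curve of the kind described can be obtained by fitting relative damage against flood depth for each structural type; the sketch below fits a simple quadratic curve to made-up depth–damage pairs, not the survey data collected in Mathare.

```python
import numpy as np

# Illustrative vulnerability-curve fit: relative damage versus flood depth for one
# structural type. The depth/damage pairs below are invented, not the field data.
depth = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5])       # flood depth (m)
damage = np.array([0.0, 0.20, 0.45, 0.65, 0.80, 0.95])   # damage ratio, e.g. mud wall / mud floor

coeff = np.polyfit(depth, damage, deg=2)                  # quadratic risk curve
curve = np.poly1d(coeff)
print(f"predicted damage ratio at 0.6 m: {np.clip(curve(0.6), 0, 1):.2f}")
```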
Procedia PDF Downloads 237
10880 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. It is plausible that a variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, could instigate or exacerbate these conditions. Traditional approaches to diagnosing depression entail a considerable amount of time and necessitate the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs sophisticated natural language processing and machine learning methodologies, including the use of the NLTK toolkit for dataset preprocessing and the utilization of a Long Short-Term Memory (LSTM) model. The PsyVBot exhibits a remarkable ability to diagnose depression with a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. The PsyVBot employs a Long Short-Term Memory (LSTM) model that comprises a total of three layers, namely an embedding layer, an LSTM layer, and a dense layer. The stratification of these layers facilitates a precise examination of linguistic patterns that are associated with the condition of depression. The PsyVBot has the capability to accurately assess an individual's level of depression through the identification of linguistic and contextual cues. The task is achieved via a rigorous training regimen, which is executed by utilizing a dataset comprising information sourced from the subreddit r/SuicideWatch. The diverse data present in the dataset ensures precise and delicate identification of symptoms linked with depression, thereby guaranteeing accuracy. PsyVBot not only possesses diagnostic capabilities but also enhances the user experience through the utilization of audio outputs. This feature enables users to engage in more captivating and interactive interactions. The PsyVBot platform offers individuals the opportunity to conveniently diagnose mental health challenges through a confidential and user-friendly interface. Regarding the advancement of PsyVBot, maintaining user confidentiality and upholding ethical principles are of paramount significance. It is imperative to note that diligent efforts are undertaken to adhere to ethical standards, thereby safeguarding the confidentiality of user information and ensuring its security. Moreover, the chatbot fosters a conducive atmosphere that is supportive and compassionate, thereby promoting psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression in accordance with the input provided by the user. The demonstrated accuracy rate of 94% serves as a promising indication of the potential efficacy of employing natural language processing and machine learning techniques in tackling challenges associated with mental health. The reliability of PsyVBot is further improved by the fact that it makes use of the Reddit dataset and incorporates Natural Language Toolkit (NLTK) for preprocessing. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance. 
The present platform is offered as a modality to tackle the pervasive issue of depression and the contemplation of suicide. Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
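The abstract describes a three-layer architecture (embedding, LSTM, dense); the Keras sketch below reproduces that shape with placeholder vocabulary size, dimensions, and sequence length rather than the values actually used for PsyVBot.

```python
import tensorflow as tf

# Minimal sketch of the three-layer architecture described above
# (embedding -> LSTM -> dense); all sizes are placeholders, not PsyVBot's.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20_000, output_dim=64),  # token embeddings
    tf.keras.layers.LSTM(64),                                     # sequence encoder
    tf.keras.layers.Dense(1, activation="sigmoid"),               # depression vs. non-depression
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

dummy = tf.zeros((2, 100), dtype=tf.int32)   # batch of two padded token sequences
print(model(dummy).shape)                    # -> (2, 1) probability of the depression class
```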
Procedia PDF Downloads 68
10879 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks, which aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying the adversarial attacks and defenses on DNNs for image classification. There are two types of adversarial attacks studied which are fast gradient sign method (FGSM) attack and projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move over the decision boundary, causing the DNN to misclassify the image. FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradients to cross the decision boundary. PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the target attack. This adversarial attack is designed to make the machine classify an image to a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from the adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use a stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy of clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defence are overall significantly more effective than FGSM methods.Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
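The FGSM step described above can be written in a few lines; the PyTorch sketch below applies a single signed-gradient perturbation of size eps to a toy classifier (PGD would repeat a smaller step several times and project back into the eps-ball). The model and data are stand-ins, not the MNIST network used in the experiments.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """One-step FGSM: perturb x by eps in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Tiny demo model standing in for an MNIST classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)                  # batch of images in [0, 1]
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
print((x_adv - x).abs().max())                # perturbation bounded by eps
```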
Procedia PDF Downloads 193
10878 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study
Authors: Yongrong Liu, Xin Tang
Abstract:
Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 11.11%; Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching for the different groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that compared to Group I, the risk of readmission for Group II was 1.372 (95% CI: 1.125-1.682, P<0.001), and for Group III was 2.053 (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including variables such as digoxin, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (p<0.001). Conclusion: The correlation between the heart rate control level of patients with persistent atrial fibrillation and the risk of heart failure rehospitalization is positive. Reasonable heart rate control may significantly reduce the risk of heart failure rehospitalization.Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study
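The hazard-ratio estimates quoted above come from a Cox proportional-hazards fit; the following sketch shows how such a model is fit with the lifelines library on synthetic follow-up data, with Group I as the reference category. The data, censoring scheme, and covariate set are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative Cox proportional-hazards fit on synthetic follow-up data;
# the real study used matched patient records, not the random values below.
rng = np.random.default_rng(3)
n = 600
group = rng.integers(1, 4, n)                    # heart-rate control group I, II or III
time_to_event = rng.exponential(12 / group, n)   # months to readmission (higher group -> earlier)
observed = rng.random(n) < 0.8                   # some patients are lost to follow-up

df = pd.DataFrame({
    "group_II": (group == 2).astype(int),        # Group I is the reference category
    "group_III": (group == 3).astype(int),
    "duration": np.minimum(time_to_event, 12.0), # administrative censoring at 12 months
    "event": ((time_to_event <= 12.0) & observed).astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()   # hazard ratios play the role of the 1.372 and 2.053 reported above
```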
Procedia PDF Downloads 72
10877 Migration in Times of Uncertainty
Authors: Harman Jaggi, David Steinsaltz, Shripad Tuljapurkar
Abstract:
Understanding the effect of fluctuations on populations is crucial in the context of increasing habitat fragmentation, climate change, and biological invasions, among others. Migration in response to environmental disturbances enables populations to escape unfavorable conditions, benefit from new environments and thereby ride out fluctuations in variable environments. Would populations disperse if there is no uncertainty? Karlin showed in 1982 that when sub-populations experience distinct but fixed growth rates at different sites, greater mixing of populations will lower the overall growth rate relative to the most favorable site. Here we ask if and when environmental variability favors migration over no-migration. Specifically, in random environments, would a small amount of migration increase the overall long-run growth rate relative to the zero migration case? We use analysis and simulations to show how long-run growth rate changes with migration rate. Our results show that when fitness (dis)advantages fluctuate over time across sites, migration may allow populations to benefit from variability. When there is one best site with highest growth rate, the effect of migration on long-run growth rate depends on the difference in expected growth between sites, scaled by the variance of the difference. When variance is large, there is a substantial probability of an inferior site experiencing higher growth rate than its average. Thus, a high variance can compensate for a difference in average growth rates between sites. Positive correlations in growth rates across sites favor less migration. With multiple sites and large fluctuations, the length of shortest cycle (excursion) from the best site (on average) matters, and we explore the interplay between excursion length, average differences between sites and the size of fluctuations. Our findings have implications for conservation biology: even when there are superior sites in a sea of poor habitats, variability and habitat quality across space may be key to determining the importance of migration.Keywords: migration, variable-environments, random, dispersal, fluctuations, habitat-quality
Procedia PDF Downloads 137
10876 FE Analysis of Blade-Disc Dovetail Joints Using Mortar Base Frictional Contact Formulation
Authors: Abbas Moradi, Mohsen Safajoy, Reza Yazdanparast
Abstract:
Analysis of blade-disc dovetail joints is one of the biggest challenges facing designers of aero-engines. To avoid comparatively expensive experimental full-scale tests, numerical methods can be used to simulate the loaded disc-blade assembly. The mortar method provides a powerful and flexible tool for solving frictional contact problems. In this study, 2D frictional contact in the dovetail has been analysed based on the mortar algorithm. In order to model the friction, the classical Coulomb law and the moving friction cone algorithm are applied. The solution is then obtained by solving the resulting set of non-linear equations using an efficient numerical algorithm based on the Newton–Raphson method. The numerical results show that this approach has a better convergence rate and accuracy than other proposed numerical methods. Keywords: computational contact mechanics, dovetail joints, nonlinear FEM, mortar approach
Procedia PDF Downloads 351
10875 Improvement of Heat Dissipation Ability of Polyimide Composite Film
Authors: Jinyoung Kim, Jinuk Kwon, Haksoo Han
Abstract:
Polyimide is widely used in the electronics industry, and the heat dissipation of polyimide film is important for its application in electric devices as a high-temperature-resistant heat dissipation film. In this study, we demonstrated a new way to increase the heat dissipation rate by adding carbon black as a filler. This type of polyimide composite film was produced from pyromellitic dianhydride (PMDA) and 4,4’-oxydianiline (ODA). Carbon black (CB) was added at different loadings, and the heat dissipation rate increased with increasing carbon black content. The polyimide-carbon black composite film was synthesized with a high dissipation rate of up to ~8 W·m⁻¹·K⁻¹. Its high thermal decomposition temperature and glass transition temperature were maintained with the carbon filler, as verified by thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC), and the polyimidization reaction of the poly(amide-imide) was confirmed by Fourier transform infrared spectroscopy (FT-IR). The polyimide composite film with carbon black and a high heat dissipation rate could be used in various applications such as computers, the mobile phone industry, integrated circuits, coating materials, semiconductors, etc. Keywords: polyimide, heat dissipation, electric device, filler
Procedia PDF Downloads 678
10874 CMOS Solid-State Nanopore DNA System-Level Sequencing Techniques Enhancement
Authors: Syed Islam, Yiyun Huang, Sebastian Magierowski, Ebrahim Ghafar-Zadeh
Abstract:
This paper presents system-level CMOS solid-state nanopore technique enhancements to speed up next-generation molecular recording and high-throughput channels. The discussion also considers the optimum number of base-pair (bp) measurements through the channel, which plays an important role in enhancing potential read accuracy. Effective power consumption estimation offered a suitable range of multi-channel configurations. A statistical nanopore bp extraction model could contribute higher read accuracy with longer read lengths (read length > 200). Nanopore ionic current switching with a Time Multiplexing (TM) based multichannel readout system contributed hardware savings. Keywords: DNA, nanopore, amplifier, ADC, multichannel
Procedia PDF Downloads 452
10873 Drag Reduction of Base Bleed at Various Flight Conditions
Authors: Man Chul Jeong, Hyoung Jin Lee, Sang Yoon Lee, Ji Hyun Park, Min Wook Chang, In-Seuck Jeung
Abstract:
This study focuses on the drag reduction effect of base bleed in supersonic flow. Base bleed is a method that bleeds gas at the base of the flight vehicle and reduces the base drag, which accounts for over 50% of the total drag at any flight speed. Thus base bleed can reduce the total drag significantly and enhance the total flight range. The drag reduction ratio of base bleed is strongly related to the mass flow rate of the bleeding gas, so selecting an appropriate mass flow rate is important. However, since the flight vehicle operates at various flight speeds, the same mass flow rate of base bleed can have a different drag reduction effect during the flight. Thus, this study investigates the drag reduction effect depending on the flight speed by numerical analysis using STAR-CCM+. The analysis model is a 155 mm diameter projectile with a boat-tailed base. The angle of the boat-tail was chosen previously for the minimum drag coefficient. Numerical analysis is conducted for Mach 2 and Mach 3 with various mass flow rates of the bleeding gas, expressed through the injection parameter I, while the temperature of the bleeding gas is fixed at 300 K. The results showed that I = 0.025 gives the minimum drag at Mach 2, and I = 0.014 gives the minimum drag at Mach 3. Thus, as the Mach number increases, a lower mass flow rate of base bleed has more effect on drag reduction. Keywords: base bleed, supersonic, drag reduction, recirculation
Procedia PDF Downloads 414
10872 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
The development of methods to annotate unknown gene functions is an important task in bioinformatics. One of the approaches to annotation is the identification of the metabolic pathway that genes are involved in. Gene expression data have been utilized for this identification, since gene expression data reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. It is considered that the low accuracy of the estimation is caused by the difficulty of accurately measuring gene expression: even when measured under the same condition, gene expressions will usually vary. In this study, we proposed a feature extraction method focusing on the variability of gene expressions to estimate the genes' metabolic pathway accurately. First, we estimated the distribution of each gene expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, which is a method for calculating the similarity between distributions. Finally, we utilized the similarity vectors as feature vectors and trained a multiclass SVM for identifying the genes' metabolic pathway. To evaluate our developed method, we applied it to budding yeast and trained the multiclass SVM for identifying the seven metabolic pathways. As a result, the accuracy obtained by our developed method was higher than the one calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying the genes' metabolic pathway. Keywords: metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning
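The core computation described above — fitting a distribution to each gene's replicate expressions, scoring gene pairs by KL divergence, and feeding the resulting similarity vectors to a multiclass SVM — can be sketched as follows. The Gaussian assumption, the symmetrisation, the toy data, and the placeholder pathway labels are illustrative choices, not necessarily the authors'.

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence between two univariate Gaussians fitted to replicate expressions."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Toy data: replicate expression measurements for 30 genes (rows) across 6 replicates,
# standing in for the yeast microarray replicates used in the study.
rng = np.random.default_rng(4)
expr = rng.normal(loc=rng.uniform(2, 10, (30, 1)), scale=0.5, size=(30, 6))
mu, var = expr.mean(axis=1), expr.var(axis=1) + 1e-6

# Feature vector of one gene = its symmetrised KL divergence to every other gene.
features = np.array([[gaussian_kl(mu[i], var[i], mu[j], var[j]) +
                      gaussian_kl(mu[j], var[j], mu[i], var[i])
                      for j in range(len(mu))] for i in range(len(mu))])

pathway_labels = rng.integers(0, 7, 30)        # placeholder labels for 7 pathways
clf = SVC(kernel="rbf").fit(features, pathway_labels)
print(clf.score(features, pathway_labels))
```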
Procedia PDF Downloads 401
10871 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey
Authors: Mahdiyeh Zafaranchi
Abstract:
With the rapid development of urbanization and the improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of that, energy-saving issues have become more important among engineers. Moreover, the building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared as a response to the need for reducing energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen at all stages of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is saving energy during the life expectancy of a building by using passive and active systems, without sacrificing comfort and quality to reach these goals. The main aim of this study is to investigate passive strategies (which do not need energy consumption or use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with the eQuest software using a case study as a base model. The study investigates predictive accuracy for major factors such as the thermal transmittance (U-value) of the materials, windows, shading devices, thermal insulation, the rate of the exposed envelope, the window-to-wall ratio, and the lighting system in the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was indicated. After analyzing the base model in eQuest, a final scenario with good energy performance was suggested. The results showed that decreases in the U-values of materials, the rate of exposure of the building, and the windows had a significant effect on energy consumption. Finally, savings in electricity consumption of about 10.5% and in gas consumption of about 8.37% were achieved annually in the suggested model. Keywords: efficient building, electric and gas consumption, eQuest, passive parameters
Procedia PDF Downloads 110
10870 Optimal Decisions for Personalized Products with Demand Information Updating and Limited Capacity
Authors: Meimei Zheng
Abstract:
Product personalization can not only bring new profits to companies but also provide a direction for their long-term development. However, the characteristics of personalized products cause some new problems. This paper investigates how companies make decisions on the supply of personalized products when facing different customer attitudes toward personalized products and services, constraints due to limited capacity, and updates of personalized demand information. This study provides optimal decisions for companies developing personalized markets, thereby promoting business transformation and improving business competitiveness. Keywords: demand forecast updating, limited capacity, personalized products, optimization
Procedia PDF Downloads 259
10869 The Modelling of Real Time Series Data
Authors: Valeria Bondarenko
Abstract:
We proposed algorithms for the estimation of the parameters of fBm (volatility and Hurst exponent) and for the approximation of random time series by functionals of fBm. We proved the consistency of the estimators that constitute these algorithms and proved the optimality of the forecast of the approximated time series. The adequacy of the estimation, approximation, and forecasting algorithms is demonstrated by numerical experiment. During the software development process, a system with a hierarchical structure was created. A comparative analysis of the proposed algorithms with other methods gives evidence of the advantage of the approximation method. The results can be used to develop methods for the analysis and modeling of time series describing economic, physical, biological, and other processes. Keywords: mathematical model, random process, Wiener process, fractional Brownian motion
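The abstract does not state which Hurst-exponent estimator is used; a common choice consistent with the description is to regress the log variance of increments on the log lag, as in the sketch below (checked on ordinary Brownian motion, where H should come out close to 0.5).

```python
import numpy as np

def estimate_hurst(path, lags=range(2, 20)):
    """Estimate the Hurst exponent of an fBm-like path from the scaling of the
    variance of its increments: Var(X(t + tau) - X(t)) ~ tau**(2H)."""
    lags = np.asarray(list(lags))
    variances = [np.var(path[lag:] - path[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0

# Sanity check on ordinary Brownian motion (H = 0.5), built from i.i.d. increments.
rng = np.random.default_rng(5)
bm = np.cumsum(rng.normal(size=100_000))
print(f"estimated H: {estimate_hurst(bm):.3f}")   # should be close to 0.5
```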
Procedia PDF Downloads 355
10868 Sequential Data Assimilation with High-Frequency (HF) Radar Surface Current
Authors: Lei Ren, Michael Hartnett, Stephen Nash
Abstract:
The abundant surface current measurements from the HF radar system in the coastal area are assimilated into the model to improve its forecasting ability. A simple sequential data assimilation scheme, Direct Insertion (DI), is applied to update the model forecast states. The influence of Direct Insertion data assimilation over time is analyzed at one reference point. Vector maps of the surface current from the models are compared with the HF radar measurements. The Root-Mean-Squared Error (RMSE) between the modeling results and the HF radar measurements is calculated over the last four days with no data assimilation. Keywords: data assimilation, CODAR, HF radar, surface current, direct insertion
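Direct Insertion itself is a one-line update: wherever an HF-radar observation exists, the model value is overwritten with it. The sketch below applies it to a toy surface-current field and reports the RMSE before and after assimilation; the field sizes, error levels, and coverage mask are invented for illustration.

```python
import numpy as np

def direct_insertion(model_field, observed, mask):
    """Direct Insertion: overwrite model surface currents with HF-radar values
    wherever a radar observation exists (mask == True); elsewhere keep the model."""
    return np.where(mask, observed, model_field)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Toy 2-D surface-current field (m/s); the radar footprint covers only part of the domain.
rng = np.random.default_rng(6)
truth = rng.normal(0.3, 0.1, (50, 50))
model = truth + rng.normal(0.0, 0.05, (50, 50))    # model forecast with errors
radar = truth + rng.normal(0.0, 0.01, (50, 50))    # more accurate radar measurement
coverage = np.zeros((50, 50), dtype=bool)
coverage[:, :30] = True                            # radar coverage area

analysis = direct_insertion(model, radar, coverage)
print(f"RMSE before: {rmse(model, truth):.4f}, after DI: {rmse(analysis, truth):.4f}")
```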
Procedia PDF Downloads 571
10867 Detection of COVID-19 Cases From X-Ray Images Using Capsule-Based Network
Authors: Donya Ashtiani Haghighi, Amirali Baniasadi
Abstract:
Coronavirus (COVID-19) disease has spread abruptly all over the world since the end of 2019. Computed tomography (CT) scans and X-ray images are used to detect this disease. Different Deep Neural Network (DNN)-based diagnosis solutions have been developed, mainly based on Convolutional Neural Networks (CNNs), to accelerate the identification of COVID-19 cases. However, CNNs lose important information in intermediate layers and require large datasets. In this paper, Capsule Network (CapsNet) is used. Capsule Network performs better than CNNs for small datasets. Accuracy of 0.9885, f1-score of 0.9883, precision of 0.9859, recall of 0.9908, and Area Under the Curve (AUC) of 0.9948 are achieved on the Capsule-based framework with hyperparameter tuning. Moreover, different dropout rates are investigated to decrease overfitting. Accordingly, a dropout rate of 0.1 shows the best results. Finally, we remove one convolution layer and decrease the number of trainable parameters to 146,752, which is a promising result.Keywords: capsule network, dropout, hyperparameter tuning, classification
Procedia PDF Downloads 76
10866 Forecasting of Innovative Development of Kondratiev-Schumpeter’s Economic Cycles
Authors: Alexander Gretchenko, Liudmila Goncharenko, Sergey Sybachin
Abstract:
This article summarizes the history of N. D. Kondratiev's discovery of large cycles of economic conjuncture, as well as the creation and justification of the Kondratiev-Schumpeter theory of innovation-cyclical economic development. An analysis of the theory under modern conditions is provided. The main conclusion of this article is that, in general terms, the Kondratiev-Schumpeter theory can today be regarded as sufficiently substantiated. Further, the article considers the possibility of forecasting the development of the economic situation by applying this theory in practice, which demonstrates its effectiveness. Keywords: Kondratiev's big cycles of economic conjuncture, Schumpeter's theory of innovative economic development, long-term cyclical forecasting, dating of Kondratiev cycles
Procedia PDF Downloads 162