Search results for: crow search algorithm

3200 Pharmacologically Active Compounds of Sponges and a Gorgonian Coral from the Andaman Sea, Thailand

Authors: Patchara Pedpradab, Kietisak Yoksang, Kosin Pattanamanee

Abstract:

In our ongoing search for pharmacologically significant compounds from marine organisms, we investigated the active constituents of two sponges (Xestospongia sp., Halichondria sp.) and a gorgonian coral (Juncella sp.) from the Andaman Sea, Thailand. Several compounds were isolated from these marine organisms. The sponge Xestospongia sp. contained an isoquinoline compound, aureol, and a cytotoxic thiophene sesterterpene, while Halichondria sp. produced C-28 sterols. The white gorgonian coral Juncella sp. contained the anti-tuberculosis diterpenes junceellin and praelolide. All of the isolated compounds were characterized extensively by spectroscopic methods.

Keywords: Xestospongia sp., Halichondria sp., gorgonian, Juncella sp., biological activity

Procedia PDF Downloads 366
3199 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. The aim of HEMS is to minimize electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in HEMS may be affected by external conditions such as outdoor temperature and electricity price. Therefore, the usage pattern of each appliance should be modeled according to these external conditions, and the resulting usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology for modeling the usage pattern from historical data with a copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the user's comfort under the external conditions expected for the next day. Within this usage range, an optimal schedule for the appliances is computed to minimize electricity cost while considering the user's comfort. Among home appliances, the electric heater (EH) is a representative appliance affected by the external temperature. In this paper, an optimal scheduling algorithm for the EH is developed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to comfort levels, from which the residential consumer selects the best scenario. The case study shows the effects of the proposed algorithm compared with traditional operation of the EH, and it also illustrates the impact of the comfort level on the scheduling result.
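
To make the branch-and-bound scheduling idea concrete, here is a minimal sketch: it picks heater-on hours that minimize cost subject to a comfort constraint. The prices, comfort range, and heater rating are illustrative assumptions, not the paper's data.

```python
# Minimal branch-and-bound sketch for electric heater scheduling (assumed data).
PRICE = [0.12, 0.10, 0.09, 0.11, 0.18, 0.22, 0.25, 0.20]  # $/kWh, assumed prices
USAGE_RANGE = {1, 2, 3, 6, 7}  # hours the copula model marks as comfortable (assumed)
MIN_ON = 4                     # minimum heater-on hours for comfort (assumed)
POWER = 2.0                    # heater rating in kW (assumed)

best = {"cost": float("inf"), "schedule": None}

def branch(hour, schedule, cost, on_in_range):
    # Bound: prune if the remaining comfortable hours cannot reach MIN_ON,
    # or if the accumulated cost already exceeds the incumbent.
    remaining = [h for h in range(hour, len(PRICE)) if h in USAGE_RANGE]
    if on_in_range + len(remaining) < MIN_ON or cost >= best["cost"]:
        return
    if hour == len(PRICE):
        if on_in_range >= MIN_ON:
            best["cost"], best["schedule"] = cost, schedule[:]
        return
    for on in (0, 1):  # branch on heater off/on for this hour
        schedule.append(on)
        branch(hour + 1, schedule,
               cost + on * POWER * PRICE[hour],
               on_in_range + (on and hour in USAGE_RANGE))
        schedule.pop()

branch(0, [], 0.0, 0)
print(best)  # cheapest schedule meeting the comfort constraint
```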

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 586
3198 The Relationship of Depression Risk and Gestational Diabetes Mellitus: A Systematic Review and Meta-Analysis

Authors: Yu Chen Su

Abstract:

Introduction: Gestational diabetes mellitus (GDM) refers to impaired glucose tolerance in pregnant women, and it affects both mother and newborn with short- and long-term effects. It increases the risks of preeclampsia, hypertension, type 2 diabetes, cesarean section, and preterm birth, and is associated with fetal macrosomia, shoulder dystocia, neonatal hypoglycemia, and future type 2 diabetes risk. A study of 6,421 pregnant women found that 12% experienced high stress, linked to maladaptive coping and depressive emotions. Women with high-risk pregnancies may experience greater stress and depression, and research suggests that GDM increases the prevalence of depression. A study of 632 Hispanic women with GDM showed severe stress and depressive tendencies, and in a study involving 95 women with GDM, 33.4% exhibited depressive symptoms. Another study compared 180 women with GDM to 186 women with normal glucose levels and revealed higher depression levels in the GDM group; it also found that women with GDM were 1.85 times more likely to receive antidepressants during pregnancy and 1.69 times more likely to experience postpartum depression. Maternal stress and depressive symptoms during pregnancy are therefore significant factors, and early identification by healthcare professionals can greatly benefit women with GDM, their infants, and their families. Objectives: The purpose of this study was to investigate the association between GDM and the risk of depression. Methods: This study reviewed and analyzed relevant literature on GDM and depression covering 6,876 patients. The literature search followed PRISMA guidelines and included the Embase, PubMed, MEDLINE, CINAHL, and Cochrane Library databases, with a search period extending to October 2022. Prospective or retrospective studies with relevant risk ratios and estimates were included, and a random-effects model was used for the analysis of depression risk; studies without depression data or relevant risks were excluded. Results: The systematic review of 7 studies (6,876 participants) found a significant association (OR = 8.77, CI: 7.98-9.64, p < 0.05) between GDM and higher depression risk compared with healthy pregnant women. Conclusions: Pregnancy is a significant life transition involving physiological, psychological, and social changes, and gestational diabetes poses additional challenges to women's physical and mental well-being. Early identification of these issues by sensitive healthcare professionals can greatly benefit women, babies, and families.
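
For readers unfamiliar with the pooling step, the following sketch shows a DerSimonian-Laird random-effects computation of a pooled OR; the seven study ORs and CIs are hypothetical placeholders, not the studies analyzed in this review.

```python
# DerSimonian-Laird random-effects pooling sketch on made-up per-study ORs.
import math

# (OR, lower 95% CI, upper 95% CI) per study -- hypothetical values.
studies = [(6.5, 4.0, 10.6), (9.1, 6.2, 13.4), (7.8, 4.9, 12.4),
           (10.2, 6.6, 15.8), (8.3, 5.1, 13.5), (9.9, 6.0, 16.3),
           (7.1, 4.3, 11.7)]

y = [math.log(o) for o, _, _ in studies]               # log odds ratios
# Standard errors recovered from the 95% CI width on the log scale.
se = [(math.log(u) - math.log(l)) / (2 * 1.96) for _, l, u in studies]
w = [1 / s**2 for s in se]                             # fixed-effect weights

ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y)) # heterogeneity statistic
k = len(studies)
tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

w_re = [1 / (s**2 + tau2) for s in se]                 # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96 * se_pooled):.2f}), tau^2 = {tau2:.3f}")
```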

Keywords: gestational diabetes, depression, systematic review, meta-analysis

Procedia PDF Downloads 74
3197 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has therefore become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks, which is a costly exercise. Least cost design of storm water networks assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least cost design of storm water drainage systems is proposed, consisting of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation that minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open conduit based storm water networks.
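
A compact sketch of the simulation-optimization coupling is given below; a toy penalty function stands in for the SWMM engine (a real coupling would invoke EPA SWMM, e.g. via pyswmm), and all costs, flows, and GA settings are illustrative assumptions.

```python
# GA searching conduit cross-sectional areas; a toy "simulator" scores candidates.
import random

random.seed(1)
N_CONDUITS, POP, GENS = 6, 30, 40
REQUIRED_CAPACITY = [2.0, 1.5, 3.0, 2.5, 1.0, 2.2]  # peak runoff per conduit, m^3/s (assumed)

def simulate_flooding(areas):
    # Toy hydraulic stand-in: flooding occurs where capacity ~ k*A falls
    # below the required peak flow. In practice SWMM would provide this.
    return sum(max(0.0, req - 1.8 * a) for a, req in zip(areas, REQUIRED_CAPACITY))

def fitness(areas):
    cost = sum(120.0 * a for a in areas)        # construction cost grows with area
    penalty = 1e4 * simulate_flooding(areas)    # heavily penalize flooded volume
    return cost + penalty

def make_individual():
    return [random.uniform(0.2, 3.0) for _ in range(N_CONDUITS)]

pop = [make_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    survivors = pop[: POP // 2]                 # elitist selection
    children = []
    while len(survivors) + len(children) < POP:
        p1, p2 = random.sample(survivors, 2)
        cut = random.randrange(1, N_CONDUITS)   # one-point crossover
        child = p1[:cut] + p2[cut:]
        i = random.randrange(N_CONDUITS)        # Gaussian mutation on one gene
        child[i] = max(0.2, child[i] + random.gauss(0, 0.2))
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
print("areas (m^2):", [round(a, 2) for a in best], "cost:", round(fitness(best), 1))
```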

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 248
3196 Impact of Natural and Artificial Disasters, the Lackadaisical and Semantic Approach to Risk Management, and Mitigation Implications for Sustainable Development Goals in Nigeria, 2009 to 2022

Authors: Wisdom Robert Duruji, Moses Kanayochukwu Ifoh, Efeoghene Edward Esiemunobo

Abstract:

This study examines the impact of natural and artificial disasters, the lackadaisical and semantic approach to risk management, and the implications of mitigation for the sustainable development goals in Nigeria from 2009 to 2022. The study utilizes a range of research methods, including literature review, web sources, news media information, academic journals, fieldwork, and on-site observations; these diverse methods allow for a comprehensive analysis of the impacts and implications being studied. The study finds that the paradigm shift of the Nigeria Emergency Management Agency (NEMA) away from remediating seismic, flooding, and environmental pollution and degradation disasters toward acting as a political and charity organization has turned risk reduction strategies into opportunities for embezzlement. This lackadaisical and semantic approach to natural disaster mitigation invariably replicates artificial disasters in Nigeria through the Boko Haram terrorist organization, conflicts between Fulani herdsmen and farmers, political violence, kidnapping for ransom, ethnic conflicts, religious dichotomy, insurgency, secession protagonists, unknown gunmen, and banditry. The study also finds that some Africans still engage in self-imposed slavery through human trafficking, nefariously stowing away to Europe through Libya, the Sahara desert, and the Mediterranean Sea in search of job opportunities, owing to the ineptitude in governance of their leaders, a perilous journey that has compounded artificial disasters in Nigeria. Artificial disaster fatalities in Nigeria increased from about 5,655 in 2009 to 114,318 in 2018 and 157,643 in 2022, while financial and material losses of about $9.29 billion were incurred due to natural disasters and about $70.59 billion accrued due to artificial disasters from 2009 to 2018. Although disaster risk mitigation and politics can synergistically support the sustainable development goals, they are different entities and need to be distinctly separated in Nigeria, in both reality and perception. The study concludes that a referendum should be conducted in Nigeria to ascertain its current status as a nation, and recommends that Nigerian governments refine the country's naturally endowed crude oil locally to end the fuel subsidy scam, corruption, and poverty in Nigeria.

Keywords: corruption, crude oil, environmental risk analysis, Nigeria, referendum, terrorism

Procedia PDF Downloads 44
3195 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We used fpgrowth as the basic FPM algorithm and built a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
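
The following sketch illustrates the core idea on random example data: enumerate genotype pairs, score each pattern by its confidence P(Y = 2|X), and assess the best pattern by permutation. The paper's implementation is built around fpgrowth; this brute-force enumeration is only an illustrative stand-in.

```python
# Brute-force digenic genotype-pattern search with a permutation test (toy data).
import itertools, random

random.seed(0)
n_cases, n_controls, n_variants = 143, 153, 10
# Genotypes coded 0/1/2 (copies of the minor allele), randomly generated here.
geno = [[random.choice([0, 1, 2]) for _ in range(n_variants)]
        for _ in range(n_cases + n_controls)]
labels = [1] * n_cases + [0] * n_controls  # 1 = case, 0 = control

def best_pattern(geno, labels, min_count=10):
    best = None
    for v1, v2 in itertools.combinations(range(n_variants), 2):
        for g1, g2 in itertools.product(range(3), repeat=2):
            hits = [y for row, y in zip(geno, labels)
                    if row[v1] == g1 and row[v2] == g2]
            if len(hits) >= min_count:
                conf = sum(hits) / len(hits)   # confidence P(case | pattern)
                if best is None or conf > best[0]:
                    best = (conf, v1, g1, v2, g2)
    return best

observed = best_pattern(geno, labels)
# Permutation test: how often does label shuffling match or beat the observed confidence?
perms, count = 50, 0
for _ in range(perms):
    shuffled = labels[:]
    random.shuffle(shuffled)
    b = best_pattern(geno, shuffled)
    count += b is not None and b[0] >= observed[0]
print("best pattern:", observed, "permutation p ~", (count + 1) / (perms + 1))
```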

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 123
3194 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that can easily be implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good agreement with the results of the mentioned analytical expressions.
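
As an illustration of the computation, here is a hedged numeric sketch of the one-step VCCT formula for mode I; the nodal force, opening displacement, element size, and material constants are made-up FE outputs, not the paper's values.

```python
# One-step VCCT for mode I: G_I = F_y * dv / (2 * da * B), then K_I = sqrt(E' * G_I).
import math

F_y = 950.0   # nodal force at the crack tip, N (assumed FE output)
dv = 2.1e-5   # relative opening displacement behind the tip, m (assumed)
da = 0.5e-3   # crack-tip element length = virtual crack extension, m (assumed)
B = 5.0e-3    # specimen thickness, m (assumed)

E = 210e9     # Young's modulus, Pa (steel, assumed)
nu = 0.3
E_eff = E / (1 - nu**2)   # plane strain; use E_eff = E for plane stress

G_I = F_y * dv / (2 * da * B)   # energy release rate, J/m^2
K_I = math.sqrt(E_eff * G_I)    # stress intensity factor, Pa*sqrt(m)
print(f"G_I = {G_I:.1f} J/m^2, K_I = {K_I / 1e6:.2f} MPa*sqrt(m)")
```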

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 305
3193 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) engine within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 317
3192 Introduction to Multi-Agent Deep Deterministic Policy Gradient

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. This complexity and dynamism also make it difficult for traditional static security policies to cope with ever-changing cyber threats and environments, and traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.

Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents

Procedia PDF Downloads 24
3191 Q-Learning of Bee-Like Robots Through Obstacle Avoidance

Authors: Jawairia Rasheed

Abstract:

Modern robots are often used for search and rescue purposes, and one of the key areas of interest in such cases is learning complex environments. A key methodology for robots here is reinforcement learning, in which robots learn a path to the goal while avoiding obstacles. Q-learning, one of the most important advancements in reinforcement learning, is used to make the robots learn the path: the robots learn by interacting with the environment to reach the goal. In this paper, a simulation model of bee-like robots is implemented in NetLogo. At the start, the rate of learning was low, and it increased with the passage of time. The bees successfully learned to reach the goal while avoiding obstacles through the Q-learning technique.
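
A compact Q-learning sketch of the scenario follows: an agent on a small grid learns to reach a goal while avoiding obstacles. The paper's model is built in NetLogo; this grid world, its rewards, and its hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning on a small grid with obstacles (toy stand-in for the bee model).
import random

random.seed(3)
W, H = 6, 6
GOAL, OBSTACLES = (5, 5), {(2, 2), (3, 2), (1, 4), (4, 3)}
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {((x, y), a): 0.0 for x in range(W) for y in range(H)
     for a in range(len(ACTIONS))}

def step(s, a):
    dx, dy = ACTIONS[a]
    nxt = (min(W - 1, max(0, s[0] + dx)), min(H - 1, max(0, s[1] + dy)))
    if nxt in OBSTACLES:
        return s, -10.0, False   # bumping into an obstacle is penalized
    if nxt == GOAL:
        return nxt, 100.0, True
    return nxt, -1.0, False      # small step cost encourages short paths

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        a = (random.randrange(4) if random.random() < EPS
             else max(range(4), key=lambda a: Q[s, a]))  # epsilon-greedy choice
        nxt, r, done = step(s, a)
        best_next = max(Q[nxt, b] for b in range(4))
        Q[s, a] += ALPHA * (r + GAMMA * best_next - Q[s, a])  # Q-update
        s = nxt

# Greedy rollout of the learned policy.
s, path = (0, 0), [(0, 0)]
for _ in range(40):
    s, _, done = step(s, max(range(4), key=lambda a: Q[s, a]))
    path.append(s)
    if done:
        break
print("learned path:", path)
```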

Keywords: reinforcement learning for randomly placed obstacles, learning of bee-like robots for reaching the goal, obstacle avoidance through Q-learning, Q-learning for obstacle avoidance

Procedia PDF Downloads 104
3190 Utilizing Artificial Intelligence to Predict Postoperative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2,261 non-cardiac transplant patients without a preexisting diagnosis of AF had a 5.8% (133/2,261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplant patients was 7.2% (117/1,628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1,002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1,104 patients without an elevated pre-transplant prediction was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated pre-transplant screen (n=1,084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08, the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation in transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
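
To make the quoted screening metrics concrete, the sketch below computes sensitivity and negative predictive value from a 2x2 confusion matrix; the counts are hypothetical and merely chosen to land near the reported 66% and 98%.

```python
# Sensitivity and NPV from a hypothetical 2x2 screening table (not study data).
def screening_metrics(tp, fn, tn):
    sensitivity = tp / (tp + fn)   # P(screen positive | POAF)
    npv = tn / (tn + fn)           # P(no POAF | screen negative)
    return sensitivity, npv

# Made-up counts: 40 POAF cases, of which 26 were flagged by the AI-ECG screen;
# 794 screen-negative patients without POAF (flagged non-POAF patients do not
# enter either metric).
tp, fn, tn = 26, 14, 794

sens, npv = screening_metrics(tp, fn, tn)
print(f"sensitivity = {sens:.0%}, NPV = {npv:.0%}")
```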

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 137
3189 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, EEG-based biometrics also have some disadvantages to address. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distances from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare performance across the EEG datasets. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); and (3) there is no significant difference in authentication performance among the feature sets (except feature PE). Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
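
A brief sketch of the evaluation step follows: k-means clustering of per-epoch feature vectors scored with the silhouette coefficient. Synthetic Gaussian features stand in for the four entropy measures (SE, FE, AE, PE).

```python
# k-means + silhouette score on stand-in "entropy" features for two subjects.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# Two synthetic "subjects": 50 epochs each, 4 entropy features per epoch.
subj_a = rng.normal(loc=[0.6, 0.5, 0.7, 0.4], scale=0.05, size=(50, 4))
subj_b = rng.normal(loc=[0.8, 0.7, 0.9, 0.6], scale=0.05, size=(50, 4))
X = np.vstack([subj_a, subj_b])

# Cluster the pooled epochs and ask how cleanly the two subjects separate.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("silhouette score:", round(silhouette_score(X, labels), 3))
```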

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 285
3188 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography (MCG) signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the MCG signal, which is buried in noise, is difficult and is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal's peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
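
The sketch below shows a majorization-minimization iteration for second-order (high-order) TV denoising of a 1-D signal, the building block the improved method starts from; the regularization weight and test signal are illustrative, and the paper's adaptive constraint parameters are not included.

```python
# MM iteration for high-order TV denoising: min 0.5||y-x||^2 + lam*||D2 x||_1,
# where D2 is the second-order difference matrix. Each MM step majorizes |t|
# by a quadratic, giving the weighted linear system (I + lam*D^T W D) x = y.
import numpy as np

def highorder_tv_mm(y, lam=1.0, iters=50, eps=1e-6):
    n = len(y)
    D = np.zeros((n - 2, n))               # second-order difference matrix
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)    # MM majorizer weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)          # minimize the quadratic majorizer
    return x

# Demo: denoise a noisy piecewise-linear signal (ideal for second-order TV).
t = np.linspace(0, 1, 200)
clean = np.where(t < 0.5, 2 * t, 2 * (1 - t))    # triangular signal
noisy = clean + 0.05 * np.random.default_rng(5).standard_normal(t.size)
denoised = highorder_tv_mm(noisy, lam=1.0)
print("error std before:", np.std(noisy - clean).round(4),
      "after:", np.std(denoised - clean).round(4))
```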

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 153
3187 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

An estimating equation technique is an alternative to the widely used maximum likelihood methods, one that eases some of the complexity arising from the characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from researchers, under a semiparametric transformation model. The purpose of this article was to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models in order to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators under the proposed model was illustrated via simulation studies and the Stanford heart transplant data. To sum up, the bias for covariates was adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates was then evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 152
3186 Hydrogen: Contention-Aware Hybrid Memory Management for Heterogeneous CPU-GPU Architectures

Authors: Yiwei Li, Mingyu Gao

Abstract:

Integrating hybrid memories with heterogeneous processors could leverage heterogeneity in both compute and memory domains for better system efficiency. To ensure performance isolation, we introduce Hydrogen, a hardware architecture to optimize the allocation of hybrid memory resources to heterogeneous CPU-GPU systems. Hydrogen supports efficient capacity and bandwidth partitioning between CPUs and GPUs in both memory tiers. We propose decoupled memory channel mapping and token-based data migration throttling to enable flexible partitioning. We also support epoch-based online search for optimized configurations and lightweight reconfiguration with reduced data movements. Hydrogen significantly outperforms existing designs by 1.21x on average and up to 1.31x.

Keywords: hybrid memory, heterogeneous systems, DRAM cache, graphics processing units

Procedia PDF Downloads 97
3185 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in advanced and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for choosing an IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads, gauge or vacuum pressure in the hermetically sealed gas space, requires additional attention in the design of the facades. Internal loads appear when the altitude, meteorological pressure, or gas temperature deviates from the corresponding values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and orientation, its fixing on the facade, and the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer, and free convection in the sealed gas space, treating the gas as compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparing the numerical results with experimental data obtained by hot-box testing. Numerical calculation and estimation of the 3D temperature and fluid flow fields, thermal performance, and internal loads of an IGU in a window system are implemented.
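
As a simple illustration of how the internal load arises, the sketch below combines the isochoric ideal gas law for the sealed cavity with an isothermal barometric formula for ambient pressure; rigid panes and all numeric values are assumptions (in reality, pane deflection relieves part of the load).

```python
# Internal load on a sealed IGU cavity from temperature and altitude changes.
import math

p_seal = 101325.0   # ambient pressure at sealing, Pa (assumed)
T_seal = 293.15     # gas temperature at sealing, K (assumed, 20 C)

def cavity_pressure(T):
    # Isochoric ideal gas: p/T is constant in the sealed space (rigid panes assumed).
    return p_seal * T / T_seal

def ambient_pressure(altitude_m, p0=101325.0, H=8434.0):
    # Simple isothermal barometric formula with scale height H.
    return p0 * math.exp(-altitude_m / H)

# Example: IGU installed at 800 m altitude, gas heated to 35 C by solar gains.
T_gas = 308.15
load = cavity_pressure(T_gas) - ambient_pressure(800.0)
print(f"internal load = {load / 1000:.2f} kPa "
      f"({'gauge pressure' if load > 0 else 'vacuum'})")
```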

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 273
3184 Thorium Resources of Georgia: Its Future Energy?

Authors: Avtandil Okrostsvaridze, Salome Gogoladze

Abstract:

In light of the exhaustion of hydrocarbon reserves, the search for new energy resources is a problem of vital importance for modern civilization. In this energy resource crisis, the radioactive element thorium (232Th) is considered a main energy resource for the future of our civilization. Modern industry uses thorium in high-temperature and high-tech tools, but the most important property of thorium is that, like uranium, it can be used as fuel in nuclear reactors. Thorium has a number of advantages compared to uranium: its concentration in the earth's crust is 4-5 times higher; extraction and enrichment of thorium are much cheaper; it is less radioactive; complete destruction of its waste products is possible; and thorium yields much more energy than uranium. Nowadays, developed countries, among them India and China, have started intensive work on the creation of thorium nuclear reactors and an intensive search for thorium reserves. It is not excluded that in the next 10 years these reactors will completely replace uranium reactors. Thorium ore mineralization is genetically related to alkaline-acidic magmatism, and thorium accumulations occur under both endogenous and exogenous conditions. Unfortunately, little is known about the reserves of this element in Georgia, as planned prospecting-exploration work for thorium has never been carried out here, although three ore occurrences of the element have been detected: 1) in the Kakheti segment of the Greater Caucasus, in hydrothermally altered rocks of the Lower Jurassic clay shales, where thorium concentrations vary between 51 and 3,882 g/t; 2) on the eastern periphery of the Dzirula massif, in hydrothermally altered rocks of the Cambrian quartz-diorite gneisses, where thorium concentrations vary between 117 and 266 g/t; and 3) in the active contact zone of the Eocene volcanites and a syenitic intrusive in the Vakijvari ore field of the Guria region, where thorium concentrations vary between 185 and 428 g/t. In addition, the geological settings of the areas where thorium occurrences were found give a theoretical basis for the possible accumulation of thorium ores of practical importance. Furthermore, the magnetite sand of the Black Sea Guria region, which is transported from the Vakijvari ore field, should contain significant reserves of thorium; as the research shows, monazite (a thorium-bearing mineral) occurs within the magnetite in the form of very thin inclusions. In world-class thorium deposits, concentrations of this element vary within the limits of 50-200 g/t. Accordingly, on the basis of these data, the thorium resources found in Georgia should be considered prospective ore deposits. Generally, we consider that a complex investigation of thorium should be included in the sphere of the state's strategic interests, because the future energy of Georgia will probably be thorium.

Keywords: future energy, Georgia, ore field, thorium

Procedia PDF Downloads 493
3183 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and distinguishing several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), and the methods of DoA estimation therefore rely heavily on generalization, which requires establishing a large number of training data sets. Hence, we present two different optimization models of DoA estimation: (1) the implementation of a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and the method may therefore fail to deliver high precision for DoA estimation. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, overcoming the limitations of non-parametric and data-driven methods with respect to array imperfections and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
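
The following sketch conveys the flavor of the RBF approach: kernel ridge regression from sample-covariance features of a synthetic uniform linear array to the source angle, evaluated by MSE. Array size, SNR, kernel width, and the training grid are assumptions, not the authors' configuration.

```python
# RBF-kernel regression for single-source DoA on a synthetic ULA (illustrative).
import numpy as np

rng = np.random.default_rng(0)
M, SNAPSHOTS = 8, 200   # 8-element half-wavelength ULA, 200 snapshots (assumed)

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

def features(theta_deg, snr_db=10):
    # Simulate snapshots for one source; summarize by the sample covariance.
    a = steering(theta_deg)
    s = rng.standard_normal(SNAPSHOTS) + 1j * rng.standard_normal(SNAPSHOTS)
    noise_scale = 10 ** (-snr_db / 20)
    x = np.outer(a, s) + noise_scale * (
        rng.standard_normal((M, SNAPSHOTS)) + 1j * rng.standard_normal((M, SNAPSHOTS)))
    R = x @ x.conj().T / SNAPSHOTS
    return np.concatenate([R.real.ravel(), R.imag.ravel()])

train_angles = np.arange(-60, 61, 2.0)
X = np.stack([features(t) for t in train_angles])

def rbf_kernel(A, B, gamma=1e-2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + reg*I) w = y for the RBF weights.
K = rbf_kernel(X, X)
w = np.linalg.solve(K + 1e-6 * np.eye(len(X)), train_angles)

test_angles = rng.uniform(-60, 60, 50)
Xt = np.stack([features(t) for t in test_angles])
pred = rbf_kernel(Xt, X) @ w
print("MSE (deg^2):", np.mean((pred - test_angles) ** 2))
```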

Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE

Procedia PDF Downloads 100
3182 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered, where the process consists of a single stage of manufacturing that can handle different types of products, and changeover from one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups, and hence, to minimize the costs associated with product changeover, planning should ensure that similar types of products are processed successively so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to minimizing the number of set-ups, or equivalently, maximizing the capacity utilization between set-ups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities: "normal" or "priority". The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
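
A small max-flow illustration of the MAN idea is given below using networkx; the products, set-ups, and capacities are hypothetical, and the priority-arc variant of the algorithm is not shown.

```python
# Max-flow stand-in for the Multiple Arc Network: flow from products through
# set-ups measures capacity utilization; maximizing it maximizes throughput.
import networkx as nx

G = nx.DiGraph()
# Source -> product arcs: total demand per product (hypothetical figures;
# priority orders could be modeled as separate, higher-priority arcs).
G.add_edge("s", "P1", capacity=60)
G.add_edge("s", "P2", capacity=40)
# Product -> set-up arcs: how much of each product a set-up may process.
G.add_edge("P1", "setup1", capacity=50)
G.add_edge("P1", "setup2", capacity=50)
G.add_edge("P2", "setup2", capacity=40)
# Set-up -> sink arcs: machine capacity per set-up.
G.add_edge("setup1", "t", capacity=50)
G.add_edge("setup2", "t", capacity=50)

flow_value, flow = nx.maximum_flow(G, "s", "t")
print("total quantity processed:", flow_value)
for u, targets in flow.items():
    for v, f in targets.items():
        if f > 0:
            print(f"{u} -> {v}: {f}")
```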

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 402
3181 Deployment of Attack Helicopters in Conventional Warfare: The Gulf War

Authors: Mehmet Karabekir

Abstract:

Attack helicopters (AHs) are usually deployed in conventional warfare to destroy the enemy's armored and mechanized forces. In addition, AHs are able to perform various tasks in deep and close operations: intelligence, surveillance, reconnaissance, air assault operations, and search and rescue operations. Apache helicopters were effectively employed in the Gulf Wars and contributed to the success of the campaigns by destroying a large number of armored and mechanized vehicles of the Iraqi Army. The purpose of this article is to discuss the deployment of AHs in conventional warfare in the light of the Gulf Wars. First, the employment of AHs in deep and close operations is addressed with regard to doctrine. Second, the doctrinal and tactical usage of the US armed forces' AH-64 in the First and Second Gulf Wars is discussed.

Keywords: attack helicopter, conventional warfare, gulf wars

Procedia PDF Downloads 473
3180 Terraria AI: YOLO Interface for Decision-Making Algorithms

Authors: Emmanuel Barrantes Chaves, Ernesto Rivera Alvarado

Abstract:

This paper presents a method to enable agents for the Terraria game to evaluate algorithms commonly used in general video game artificial intelligence competitions. A ‘You Only Look Once’ (YOLO) model in the first layer of the process obtains information from the screen and translates it into the Video Game Description Language (VGDL), which the agents take as input to make decisions. For this, state-of-the-art algorithms were tested and compared: Monte Carlo Tree Search and Rolling Horizon Evolutionary Algorithm; in this case, Rolling Horizon Evolutionary shows better performance. The main advantage of this approach is that a VGDL description is unnecessary beforehand: it is built on the fly, which opens the road to using more games as a framework for AI.
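
To show what the RHEA loop looks like, here is a minimal sketch on a toy 1-D chase game standing in for the YOLO/VGDL forward model; the game, horizon, and evolutionary settings are illustrative assumptions (a full RHEA would also shift the elite plan between frames).

```python
# Minimal Rolling Horizon Evolutionary Algorithm on a toy 1-D chase game.
import random

ACTIONS = [-1, 0, 1]           # move left, stay, move right
HORIZON, POP, GENS = 10, 20, 15

def rollout(state, plan):
    # Forward model: score a plan by how close the agent ends to the goal.
    pos, goal = state
    for a in plan:
        pos += a
    return -abs(goal - pos)

def rhea_action(state):
    pop = [[random.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda p: rollout(state, p), reverse=True)
        elite = pop[: POP // 4]
        children = []
        while len(elite) + len(children) < POP:
            child = list(random.choice(elite))
            i = random.randrange(HORIZON)       # point mutation of one action
            child[i] = random.choice(ACTIONS)
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda p: rollout(state, p))[0]  # execute first action only

state = (0, 7)   # agent at position 0, goal at 7
for step in range(12):
    a = rhea_action(state)
    state = (state[0] + a, state[1])
print("final position:", state[0], "goal:", state[1])
```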

Keywords: AI, MCTS, RHEA, Terraria, VGDL, YOLOv5

Procedia PDF Downloads 96
3179 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging operations research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on deadlines and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint that tasks should be assigned based on individual skill levels, which may vary for different tasks. Another constraint is that the scheduled tasks should be evenly distributed in terms of the number of working hours, which adds further complexity to the problem. The proposed greedy approach to the assignment and scheduling problem first assigns tasks by management priority and then by the closest deadline. This is followed by an iterative selection, for each task, of an available resource with the least allocated total working hours, i.e., making the locally optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. A comparative study against manual task allocation is also done, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and tasks with a horizon of more than 2 months, which initially were waiting in a queue without a delivery date, are now analyzed effectively by the business, with expected timelines for completion.
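
The greedy rule described above can be sketched as follows, on made-up tasks and resources: sort by management priority and then by deadline, and give each task to a skill-qualified resource with the fewest allocated hours.

```python
# Greedy assignment: priority, then deadline, then least-loaded qualified resource.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int     # 0 = management priority, 1 = normal (assumed coding)
    deadline: int     # day number
    skill: int        # minimum skill level required
    hours: float

@dataclass
class Resource:
    name: str
    skill: int
    allocated: float = 0.0
    tasks: list = field(default_factory=list)

tasks = [Task("T1", 1, 30, 2, 8), Task("T2", 0, 45, 1, 6),
         Task("T3", 1, 10, 3, 12), Task("T4", 0, 20, 2, 4)]
resources = [Resource("R1", 3), Resource("R2", 2), Resource("R3", 1)]

for task in sorted(tasks, key=lambda t: (t.priority, t.deadline)):
    qualified = [r for r in resources if r.skill >= task.skill]
    if not qualified:
        print(f"{task.name}: no qualified resource")
        continue
    best = min(qualified, key=lambda r: r.allocated)  # least-loaded resource
    best.allocated += task.hours
    best.tasks.append(task.name)

for r in resources:
    print(r.name, r.tasks, f"{r.allocated}h")
```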

Keywords: assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling

Procedia PDF Downloads 147
3178 An Analytical Approach to Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the VOF method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
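
As a pointer to how the named building blocks fit together, the sketch below solves a sparse system with SciPy's Bi-CGSTAB preconditioned by an incomplete LU factorization (ILUT-style); the 1-D Poisson matrix is stand-in example data, not the paper's solver.

```python
# Bi-CGSTAB with an incomplete-LU preconditioner on a stand-in sparse system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# 1-D Poisson matrix as a stand-in for a pressure-correction system.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)       # ILUT-style factorization
M = spla.LinearOperator(A.shape, matvec=ilu.solve)       # preconditioner operator

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "| residual:", np.linalg.norm(b - A @ x))
```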

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT function, Krylov subspace, multifluid flows, preconditioner, SIMPLE algorithm

Procedia PDF Downloads 528
3177 Using the SMT Solver to Minimize Latency and Optimize the Number of Cores in NoC-DSP Architectures

Authors: Imen Amari, Kaouther Gasmi, Asma Rebaya, Salem Hasnaoui

Abstract:

The problem of scheduling and mapping data flow applications onto multi-core architectures is notoriously difficult. This difficulty is related to the rapid evolution of telecommunication and multimedia systems, accompanied by a rapid increase in user requirements in terms of latency, execution time, consumption, energy, etc. Achieving an optimal schedule on multi-core DSP (Digital Signal Processor) platforms is a challenging task. In this context, we present a novel technique and algorithm for finding a valid schedule that optimizes the key performance metrics, particularly latency. Our contribution is based on Satisfiability Modulo Theories (SMT) solving technologies, which are strongly driven by industrial applications and needs. This paper describes a scheduling module integrated into our proposed workflow, which we advocate as a successful approach for programming applications based on NoC-DSP platforms. The workflow automatically transforms a Simulink model into a synchronous dataflow (SDF) model. The automatic transformation, followed by SMT-solver scheduling, aims to minimize the final latency and other software/hardware metrics through an optimal schedule, and also to find the optimal number of cores to be used. Our proposed workflow takes as its entry point a Simulink file (.mdl or .slx) derived from embedded MATLAB functions. We use an approach based on the synchronous and hierarchical behavior of both Simulink and SDF, whence running the scheduler within the workflow with our proposed SMT-solver algorithm refinements produces the best possible scheduling in terms of latency and number of cores.
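
A minimal SMT scheduling sketch with the Z3 solver (pip install z3-solver) is shown below: three dependent tasks are mapped onto two cores while minimizing the makespan (latency). The tasks, durations, and dependency are illustrative assumptions, not the paper's workflow.

```python
# SMT scheduling with Z3: map tasks to cores, respect dependencies, minimize latency.
from z3 import Int, Optimize, Or, Implies, sat

durations = {"A": 3, "B": 2, "C": 4}
deps = [("A", "C")]   # C consumes A's output, so C must start after A ends
cores = 2

opt = Optimize()
start = {t: Int(f"start_{t}") for t in durations}
core = {t: Int(f"core_{t}") for t in durations}
end = {t: start[t] + durations[t] for t in durations}

for t in durations:
    opt.add(start[t] >= 0, core[t] >= 0, core[t] < cores)
for a, b in deps:
    opt.add(start[b] >= end[a])                # dataflow dependency
tasks = list(durations)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        a, b = tasks[i], tasks[j]
        # Tasks mapped to the same core must not overlap in time.
        opt.add(Implies(core[a] == core[b],
                        Or(end[a] <= start[b], end[b] <= start[a])))

makespan = Int("makespan")
for t in durations:
    opt.add(makespan >= end[t])
opt.minimize(makespan)                         # latency objective
if opt.check() == sat:
    m = opt.model()
    print("latency:", m[makespan])
    for t in durations:
        print(t, "start:", m[start[t]], "core:", m[core[t]])
```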

Keywords: multi-cores DSP, scheduling, SMT solver, workflow

Procedia PDF Downloads 286
3176 Prediction of Covid-19 Cases and the Current Situation of Italy and Its Different Regions Using Machine Learning Algorithms

Authors: Shafait Hussain Ali

Abstract:

Since its outbreak in China, the Covid-19 disease has been caused by the coronavirus SARS-CoV-2, and Italy was the first Western country to be severely affected and the first country to take drastic measures to control the disease. At the start of December 2019, a sudden outbreak of coronavirus disease caused by the new severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) occurred in the Chinese city of Wuhan. The World Health Organization declared the epidemic a public health emergency of international concern on January 30, 2020. By February 14, 2020, 49,053 laboratory-confirmed cases and 1,481 deaths had been reported worldwide. The threat of the disease has forced most governments to implement various control measures. It therefore becomes necessary to analyze the Italian data very carefully, in particular to investigate and present the current condition and the number of infected persons, in the form of positive cases, deaths, hospitalizations, and other features of infected persons, in a simple form. The model used should clearly show the real facts and figures and be understandable to every reader, who should get some real benefit from reading it. The model must include all the features (total positive cases, current positive cases, hospitalized patients, deaths, recovered people, and their frequency rates) that explain and clarify a wide range of facts in a very simple form, helpful to the administration of the country.
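
The keywords name a Naive Bayes predictor built in RapidMiner; an equivalent hedged sketch in Python with scikit-learn, on made-up regional figures, would look like this:

```python
# Gaussian Naive Bayes on synthetic region-week case data (illustrative only).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300
# Hypothetical features per region-week: total positives, current positives,
# hospitalized, deaths, recovered.
X = rng.poisson(lam=[500, 120, 40, 15, 300], size=(n, 5)).astype(float)
# Synthetic label: whether current positive cases are "rising".
y = (X[:, 1] + rng.normal(0, 20, n) > 120).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```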

Keywords: machine learning tools and techniques, rapid miner tool, Naive-Bayes algorithm, predictions

Procedia PDF Downloads 107
3175 Design of Bacterial Pathogens Identification System Based on Scattering of Laser Beam Light and Classification of Binned Plots

Authors: Mubashir Hussain, Mu Lv, Xiaohan Dong, Zhiyang Li, Bin Liu, Nongyue He

Abstract:

Detection and classification of microbes have a vast range of applications in biomedical engineering, especially in the detection, characterization, and quantification of bacterial contaminants. Different techniques are emerging in the field for the identification of pathogens. The latest technology uses light scattering and is capable of identifying different pathogens without any need for biochemical processing. In the Bacterial Pathogens Identification System (BPIS), a laser beam passes through the sample and light scatters off it; an assembly of photodetectors surrounds the sample at different angles to detect the scattered light. The algorithm of the system consists of two parts: (a) library files and (b) a comparator. Library files contain data on known species of bacterial microbes in the form of binned plots, while the comparator compares the data of an unknown sample with the library files. From the collected data of an unknown bacterial species, the highest voltage values are stored in the form of peaks and arranged in 3D histograms to find their frequency of occurrence; the resulting data are then compared with the library files of known bacterial species. If the sample data match a library file of a known bacterial species, the sample is identified as that microbe. An experiment was performed to identify three different bacterial species: Enterococcus faecalis, Pseudomonas aeruginosa, and Escherichia coli. Applying the algorithm with the library files of the given samples produced promising results. This system is potentially applicable to several biomedical areas, especially those related to cell morphology.
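
A hedged sketch of the binned-plot comparison step follows: peak data are binned into normalized 2-D histograms, and the unknown sample is matched to the library entry with the highest correlation. All signals here are synthetic stand-ins.

```python
# Binned-plot library matching on synthetic detector data (illustrative).
import numpy as np

rng = np.random.default_rng(2)

def binned_plot(angles, voltages, bins=16):
    h, _, _ = np.histogram2d(angles, voltages, bins=bins,
                             range=[[0, 180], [0, 5]])
    return h / h.sum()   # normalize to frequencies of occurrence

def sample(center, spread, n=2000):
    # Synthetic scattering record: detector angles and peak voltages.
    angles = rng.uniform(0, 180, n)
    volts = np.clip(rng.normal(center, spread, n), 0, 5)
    return binned_plot(angles, volts)

library = {"E. faecalis": sample(1.2, 0.3),
           "P. aeruginosa": sample(2.5, 0.5),
           "E. coli": sample(3.8, 0.4)}

unknown = sample(2.5, 0.5)   # fresh measurement of one of the species
scores = {name: np.corrcoef(unknown.ravel(), ref.ravel())[0, 1]
          for name, ref in library.items()}
print("identified as:", max(scores, key=scores.get), scores)
```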

Keywords: microbial identification, laser scattering, peak identification, binned plots classification

Procedia PDF Downloads 150
3174 Compass Bar: A Visualization Technique for Out-of-View-Objects in Head-Mounted Displays

Authors: Alessandro Evangelista, Vito M. Manghisi, Michele Gattullo, Enricoandrea Laviola

Abstract:

In this work, we propose a custom visualization technique for out-of-view objects in Virtual and Augmented Reality applications using Head-Mounted Displays. In the last two decades, Augmented Reality (AR) and Virtual Reality (VR) technologies have experienced remarkable growth in applications for navigation, interaction, and collaboration in different types of environments, real or virtual. Both environments can be potentially very complex, as they can include many virtual objects located in different places. Given the natural limitation of the human Field of View (about 210° horizontal and 150° vertical), humans cannot perceive objects outside this angular range. Moreover, despite recent technological advances in AR and VR Head-Mounted Displays (HMDs), these devices still suffer from a limited Field of View, especially Optical See-Through displays, thus greatly amplifying the challenge of visualizing out-of-view objects. This problem is not negligible when the user needs to be aware of the number and position of out-of-view objects in the environment, for instance, during a maintenance operation on a construction site where virtual objects serve to improve awareness of dangers. Providing such information can enhance comprehension of the scene, enable fast navigation and focused search, and improve user safety. In our research, we investigated how to represent out-of-view objects in HMD User Interfaces (UI). Inspired by commercial video games such as Call of Duty: Modern Warfare, we designed a customized compass. By exploiting the Unity 3D graphics engine, we implemented our custom solution, which can be used both in AR and VR environments. The Compass Bar consists of a graduated bar (in degrees) at the top center of the UI. The values of the bar range from -180 (far left) to +180 (far right), with zero placed in front of the user. Two vertical lines on the bar show the amplitude of the user's field of view. Every virtual object within the scene is represented on the compass bar as a specific color-coded proxy icon (a circular ring with a colored dot at its center). To provide the user with distance information, we implemented a specific algorithm that increases the size of the inner dot as the user approaches the virtual object (i.e., when the user reaches the object, the dot fills the ring). This visualization technique for out-of-view objects has several advantages. It allows users to be quickly aware of the number and position of the virtual objects in the environment. For instance, if the compass bar displays the proxy icon at about +90, users immediately know that the virtual object is to their right, and so on. Furthermore, by having qualitative information about the distance, users can optimize their speed, thus gaining effectiveness in their work. Given the small size and position of the Compass Bar, our solution also helps lessen the occlusion problem, thus increasing user acceptance and engagement. As soon as the lockdown measures allow, we will carry out user tests comparing this solution with other state-of-the-art ones such as 3D Radar, SidebARs, and EyeSee360.
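
The core mapping can be sketched in a few lines, assuming the conventions described (0 ahead, +180/-180 behind, and a dot that fills the ring as distance shrinks); the paper's implementation is in Unity, so this Python version is only a language-neutral illustration.

```python
# Compass Bar mapping: signed horizontal bearing and distance-driven dot size.
import math

def bearing_deg(cam_pos, cam_forward, obj_pos):
    # Project onto the horizontal (x, z) plane.
    dx, dz = obj_pos[0] - cam_pos[0], obj_pos[2] - cam_pos[2]
    fx, fz = cam_forward[0], cam_forward[2]
    # Signed angle from the forward vector to the object (positive = right).
    cross = fz * dx - fx * dz
    dot = fx * dx + fz * dz
    return math.degrees(math.atan2(cross, dot))   # in [-180, +180]

def dot_size(cam_pos, obj_pos, max_dist=20.0, min_r=0.1, max_r=1.0):
    d = math.dist(cam_pos, obj_pos)
    t = max(0.0, 1.0 - d / max_dist)   # 1 when reached, 0 when far
    return min_r + t * (max_r - min_r)  # the dot fills the ring as d -> 0

cam = (0.0, 1.7, 0.0)
forward = (0.0, 0.0, 1.0)              # looking along +z
obj = (5.0, 1.0, 5.0)                  # ahead and to the right
print(f"bearing: {bearing_deg(cam, forward, obj):+.1f} deg,",
      f"dot radius: {dot_size(cam, obj):.2f}")
```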

Keywords: augmented reality, situation awareness, virtual reality, visualization design

Procedia PDF Downloads 127
3173 Manipulator Development for Telediagnostics

Authors: Adam Kurnicki, Bartłomiej Stanczyk, Bartosz Kania

Abstract:

This paper presents the development of a lightweight manipulator with series elastic actuation for medical telediagnostics (ultrasound examination). The general structure of the implemented impedance control algorithm is shown, and it is described how to perform force measurements based mainly on the elasticity of the manipulator links.
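
A hedged sketch of the two ingredients named above follows: a Cartesian impedance law mapped to joint torques, and a force estimate from link elasticity; the gains, Jacobian, and stiffness values are illustrative assumptions, not the manipulator's parameters.

```python
# Cartesian impedance control and elasticity-based force estimation (sketch).
import numpy as np

K = np.diag([300.0, 300.0])   # desired stiffness, N/m (assumed)
D = np.diag([30.0, 30.0])     # desired damping, N s/m (assumed)

def impedance_torque(J, x, x_d, xdot, xdot_d):
    # F = K (x_d - x) + D (xdot_d - xdot); joint torques via tau = J^T F.
    F = K @ (x_d - x) + D @ (xdot_d - xdot)
    return J.T @ F

def force_from_link_elasticity(k_link, deflection):
    # Series-elastic idea: measured elastic deflection -> interaction force.
    return k_link * deflection

# Example: 2-DoF planar arm Jacobian at some configuration (assumed numbers).
J = np.array([[-0.4, -0.2],
              [ 0.5,  0.3]])
tau = impedance_torque(J, x=np.array([0.30, 0.42]), x_d=np.array([0.32, 0.40]),
                       xdot=np.zeros(2), xdot_d=np.zeros(2))
print("joint torques [N m]:", tau)
print("estimated force [N]:", force_from_link_elasticity(2000.0, 0.004))
```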

Keywords: telediagnostics, elastic manipulator, impedance control, force measurement

Procedia PDF Downloads 477
3172 Boundary Motion by Curvature: Accessible Modeling of Oil Spill Evaporation/Dissipation

Authors: Gary Miller, Andriy Didenko, David Allison

Abstract:

The boundary of a region in the plane shrinks according to its curvature. A simple algorithm based upon this motion by curvature, performed in a spreadsheet, simulates the evaporation/dissipation behavior of oil spill boundaries.
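
A minimal discrete version of the algorithm (in Python rather than a spreadsheet) moves each boundary point toward the midpoint of its neighbors, a standard polygonal approximation of motion by curvature, so the simulated spill boundary shrinks and rounds out:

```python
# Discrete curvature flow of a closed polygon modeling a shrinking spill boundary.
import numpy as np

n, steps, dt = 100, 400, 0.2
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
# An irregular "oil spill" boundary: a wobbly closed curve.
r = 1.0 + 0.3 * np.sin(5 * t)
pts = np.column_stack([r * np.cos(t), r * np.sin(t)])

def area(p):
    # Shoelace formula for the enclosed area.
    x, y = p[:, 0], p[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

for step in range(steps):
    midpoints = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
    pts += dt * (midpoints - pts)   # step toward neighbors ~ curvature motion
    if step % 100 == 0:
        print(f"step {step:3d}: enclosed area {area(pts):.4f}")
```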

Keywords: mathematical modeling, oil, evaporation, dissipation, boundary

Procedia PDF Downloads 510
3171 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design generalized transpose and parallel Finite Impulse Response (FIR) filters. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented for up to n-bit operands. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation, and by adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and implemented using the Cadence EDA tool. The synthesis results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated with and without retiming for different circuit combinations; the proposed structure achieves more than half of the power reduction by adopting the with- and without-retiming techniques compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power, compared to the previously proposed design.
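
For reference, the recursive Karatsuba multiplication that the architecture builds on replaces four half-size multiplies with three; a short software illustration:

```python
# Recursive Karatsuba multiplication: three half-size multiplies instead of four.
def karatsuba(x, y):
    if x < 16 or y < 16:            # base case: small operands multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    a = karatsuba(xh, yh)                     # high parts
    b = karatsuba(xl, yl)                     # low parts
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross terms via one multiply
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
print(karatsuba(12345678, 87654321))
```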

Keywords: carry save adder Karatsuba multiplication, mid range Karatsuba multiplication, modified FFA and transposed filter, retiming

Procedia PDF Downloads 235