Search results for: tabu search algorithm

3155 On Multiobjective Optimization to Improve the Scalability of Fog Application Deployments Using FogTorch

Authors: Suleiman Aliyu

Abstract:

Integrating IoT applications with Fog systems presents challenges in optimization due to diverse environments and conflicting objectives. This study explores achieving Pareto optimal deployments for Fog-based IoT systems to address growing QoS demands. We introduce Pareto optimality to balance competing performance metrics. Using the FogTorch optimization framework, we propose a hybrid approach (backtracking search with branch and bound) for scalable IoT deployments. Our research highlights the advantages of Pareto optimality over single-objective methods and emphasizes the role of FogTorch in this context. Initial results show improvements in IoT deployment cost in Fog systems, promoting resource-efficient strategies.
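
A Pareto optimal deployment is one that no other candidate beats on every objective at once. As a minimal illustration of the dominance filtering underlying such multiobjective methods (a generic Python sketch, not FogTorch itself; the two objectives are assumed for illustration):

```python
import numpy as np

def pareto_front(points):
    """Return the Pareto-optimal rows of `points`, where each row holds
    objectives to be minimized (e.g., deployment cost, latency)."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # q is dominated by p if p <= q in every objective and p < q in at least one
        dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        keep &= ~dominated
    return points[keep]

# candidate deployments: (monthly cost, latency in ms) -- illustrative numbers
candidates = [(10.0, 80.0), (12.0, 60.0), (15.0, 60.0), (9.0, 120.0)]
print(pareto_front(candidates))   # (15, 60) is dominated by (12, 60)
```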

Keywords: pareto optimality, fog application deployment, resource allocation, internet of things

Procedia PDF Downloads 63
3154 A Systematic Review on Communication and Relations between Health Care Professionals and Patients with Cancer in Outpatient Settings

Authors: Anne Prip, Kirsten Alling Møller, Dorte Lisbet Nielsen, Mary Jarden, Marie-Helene Olsen, Anne Kjaergaard Danielsen

Abstract:

Background: The development in cancer care has shifted towards shorter hospital stays and more outpatient treatment. Today, cancer care and treatment predominantly take place in outpatient settings, where encounters between patients and health care professionals are often brief. This development will probably continue internationally, as the global cancer burden seems to be growing significantly. Furthermore, the number of patients who require ambulatory treatments such as chemotherapy is increasing. Focusing on the encounters between health care professionals and patients during oncology treatment has thus become increasingly important due to the growing trend towards outpatient cancer management. Objective: The aim of the systematic review was to summarize the literature, from the perspective of the patient, on experiences of and the need for communication and relationships with the health care professional during chemotherapy treatment in an outpatient setting. Method: The review was designed and carried out according to the PRISMA guidelines and the PICO framework. The systematic search was conducted in Medline, CINAHL, The Cochrane Library and the Joanna Briggs Institute Evidence Based Practice Database. Results: In all, 1174 studies were identified by the literature search. After duplicates were removed, the remaining studies (n = 1053) were screened for inclusion. Nine studies that met the inclusion criteria were included: qualitative (n = 5) and quantitative (n = 4). The review identified that communication and relationships between health care professionals and patients were important for the patients’ ability to cope with cancer and also had an impact on patients’ satisfaction with care in the outpatient clinic. Furthermore, the review showed that hope and positivity were a need and a strategy for patients with cancer and were facilitated by health care professionals. Finally, it revealed that outpatient clinic visits framed and influenced communication and relationships. Conclusions: This review identified the significance of communication and relationships between patients and health care professionals in the outpatient setting, as they support patients’ ability to cope with cancer. The review showed the need for health care professionals to pay attention to the relational aspects of communication in an outpatient clinic, as encounters are often brief. Furthermore, the review helps to specify which elements of communication are central in the patient-health care professional interaction from the patients' perspective. Finally, it shows a need for more research to investigate which types of interaction and intervention would be the most effective in supporting patients’ coping during chemotherapy in an outpatient clinic.

Keywords: ambulatory chemotherapy, communication, health care professional-patient relation, nurse-patient relation, outpatient care, systematic review

Procedia PDF Downloads 414
3153 Experimental Analysis of Tools Used for Doxing and Proposed New Transforms to Help Organizations Protect against Doxing Attacks

Authors: Parul Khanna, Pavol Zavarsky, Dale Lindskog

Abstract:

Doxing is a term derived from 'documents'; it consists of collecting information on an organization or individual through social media websites, search engines, password cracking methods, social engineering tools and other sources of publicly displayed information. The main purpose of doxing attacks is to threaten, embarrass, harass and humiliate the organization or individual. Various tools are used to perform doxing. Tools such as Maltego visualize an organization’s architecture, which helps in determining weak links within the organization. This paper discusses limitations of Maltego Chlorine CE 3.6.0 and suggests measures as to how organizations can use these tools to protect themselves from doxing attacks.

Keywords: advanced persistent threat, FOCA, OSINT, PII

Procedia PDF Downloads 240
3152 Modelling Structural Breaks in Stock Price Time Series Using Stochastic Differential Equations

Authors: Daniil Karzanov

Abstract:

This paper studies the effect of quarterly earnings reports on stock prices. The stock price dynamics are modeled by geometric Brownian motion and the Constant Elasticity of Variance (CEV) model. We fit several variations of stochastic differential equations to the pre- and post-report periods using Maximum Likelihood Estimation and a grid search over the parameters. By examining the change in the model parameters after a report’s publication, the study reveals that the reports provide enough evidence of a structural break, meaning that all the forecast models exploited cease to apply to the new regime and should be refitted shortly after publication.
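
As a sketch of the estimation step, the GBM parameters have closed-form maximum likelihood estimates from log-returns, which can be fitted separately on the pre- and post-report windows (illustrative Python assuming daily prices; the CEV variant and the grid search are not reproduced here):

```python
import numpy as np

def fit_gbm(prices, dt=1/252):
    """Closed-form MLE for geometric Brownian motion from a price path:
    log-returns are i.i.d. N((mu - sigma^2/2) dt, sigma^2 dt)."""
    r = np.diff(np.log(prices))
    sigma2 = r.var(ddof=1) / dt
    mu = r.mean() / dt + sigma2 / 2.0
    return mu, np.sqrt(sigma2)

def structural_break(prices, report_idx, dt=1/252):
    """Fit GBM separately before and after the report date and return the
    parameter shift that is examined for a structural break."""
    mu_pre, sig_pre = fit_gbm(prices[:report_idx], dt)
    mu_post, sig_post = fit_gbm(prices[report_idx:], dt)
    return {"mu": (mu_pre, mu_post), "sigma": (sig_pre, sig_post)}
```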

Keywords: stock market, earnings reports, financial time series, structural breaks, stochastic differential equations

Procedia PDF Downloads 188
3151 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidics and the Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity remains a demanding question for thermophysical researchers, since very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed with special emphasis on its application to transport problems of complex liquids. This work is, to our knowledge, the first to modify the heat-conduction problem so that it leads to a polynomial velocity- and temperature-profile algorithm for investigating transport properties, and their nonlinear behaviors, in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally changing the plasma λ0 by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
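
For orientation, Yukawa liquids interact through the screened Coulomb pair potential u(r) = Γ exp(−κr)/r in reduced units (lengths scaled by the Wigner-Seitz radius, energies by k_B T), and any MD algorithm of this kind integrates the corresponding pair force. A minimal sketch of that force (reduced units assumed; this is not the authors' HNEMD code):

```python
import numpy as np

def yukawa_force(r_vec, gamma, kappa):
    """Reduced Yukawa pair force for dusty-plasma MD.

    Pair potential: u(r) = gamma * exp(-kappa * r) / r, so
    F(r) = -du/dr * r_hat = gamma * exp(-kappa*r) * (1/r**2 + kappa/r) * r_hat
    (repulsive, directed along the separation vector).
    """
    r = np.linalg.norm(r_vec)
    magnitude = gamma * np.exp(-kappa * r) * (1.0 / r**2 + kappa / r)
    return magnitude * r_vec / r

print(yukawa_force(np.array([1.2, 0.0, 0.5]), gamma=100.0, kappa=2.0))
```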

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 261
3150 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing, consisting of mixing, component preparation, building and curing, the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for a tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required for different operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one, and this kind of feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling problems. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using some small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research work. As the performance measure, we define an error rate which can evaluate the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend our current work by considering other performance measures such as weighted makespan or processing times affected by aging or learning effects.
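
To make the PSO mechanics concrete, below is a minimal random-key PSO for makespan minimization. It is a generic sketch, not the authors' implementation: it omits the SDST handling and uses a simple earliest-available-machine decoder in place of the paper's particle encoding scheme; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def makespan(priority, jobs, n_machines):
    """Decode a random-key particle: jobs ordered by priority are greedily
    assigned to the machine that frees up first."""
    finish = np.zeros(n_machines)
    for j in np.argsort(priority):
        m = np.argmin(finish)          # earliest-available machine
        finish[m] += jobs[j]           # processing time of job j
    return finish.max()

def pso(jobs, n_machines, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(jobs)
    x = rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([makespan(p, jobs, n_machines) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([makespan(p, jobs, n_machines) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

schedule, best = pso(jobs=np.array([4., 2., 7., 3., 5., 6.]), n_machines=2)
print(best)
```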

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 250
3149 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, reduce cost and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported on the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent algorithms and the Support Vector Machine (SVM), to predict incipient fault diagnoses of transformers. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach consists of two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data. These two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on the Support Vector Machine (SVM) and gradient descent algorithms is presented, with a satisfactory diagnostic capability and a higher success rate in predicting incipient transformer faults than existing diagnostic methods.
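
As an illustration of the pairing described, a linear SVM can be trained by (sub)gradient descent on the regularized hinge loss. This is a minimal sketch: the feature set, labels, and hyperparameters are assumptions, since the abstract does not specify the exact formulation.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Linear SVM fitted by (sub)gradient descent on the regularized hinge
    loss. X: (n, d) features (e.g., dissolved-gas concentrations),
    y: labels in {-1, +1} (fault / no fault)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                 # samples violating the margin
        if active.any():
            grad_w = lam * w - (y[active][:, None] * X[active]).mean(axis=0)
            grad_b = -y[active].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```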

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 308
3148 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits to using web-based methods to provide healthcare help. Nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors. In addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. It exhibits and explains a dynamic and efficient heart rate algorithm that continuously calculates the desired sensors' values. It presents diagrams that illustrate the website deployment process and the web server's means of handling the sensors' data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform. A microcontroller board uses an algorithm to calculate the sensor heart rate values and outputs them to a web server. The platform visualizes the sensors' data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software. The web server displays the conveyed heart rate sensor's data on the online platform, presenting observations and evaluations.
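
On the platform side, the alerting logic can be as simple as smoothing incoming BPM readings over a sliding window and flagging out-of-range averages. A minimal sketch (the window size and thresholds are illustrative assumptions, not the paper's values):

```python
from collections import deque

class HeartRateMonitor:
    """Server-side sketch: smooth incoming BPM readings and raise alerts."""

    def __init__(self, window=10, low=50, high=120):
        self.readings = deque(maxlen=window)   # rolling window of samples
        self.low, self.high = low, high

    def add_reading(self, bpm):
        self.readings.append(bpm)
        avg = sum(self.readings) / len(self.readings)
        if avg < self.low or avg > self.high:
            return f"ALERT: average heart rate {avg:.0f} BPM out of range"
        return f"OK: average heart rate {avg:.0f} BPM"

monitor = HeartRateMonitor()
for sample in [72, 75, 71, 130, 135, 140, 138, 142, 145, 150]:
    print(monitor.add_reading(sample))
```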

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 113
3147 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which makes residential consumers contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize the consumers' electricity cost by controlling the use of their appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user’s usage pattern of appliances should be modeled according to the external conditions, and the resulting usage pattern is related to the user’s comfort in the use of each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the appropriate user’s comfort according to the external conditions for the next day. Within the usage range, an optimal scheduling for appliances is conducted so as to minimize the electricity cost while considering the user’s comfort. Among home appliances, the electric heater (EH) is a representative appliance that is affected by the external temperature. In this paper, an optimal scheduling algorithm for an electric heater (EH) is addressed based on the method of branch and bound. As a result, scenarios for EH usage are obtained according to the user’s comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it also represents the impact of the comfort level on the scheduling result.
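
To illustrate the branch-and-bound step, the sketch below searches hourly ON/OFF decisions for the heater, pruning branches that cannot beat the incumbent cost or can no longer satisfy the comfort constraint. The comfort constraint is reduced here to a minimum number of ON hours, a stand-in for the paper's copula-derived usage range; prices and parameters are illustrative.

```python
def schedule_heater(prices, min_on_hours):
    """Depth-first branch and bound over hourly ON/OFF decisions,
    minimizing energy cost subject to a minimum-ON-hours comfort bound."""
    n = len(prices)
    best = {"cost": float("inf"), "plan": None}

    def dfs(t, on_hours, cost, plan):
        if cost >= best["cost"]:                      # bound: prune dominated branches
            return
        if t == n:
            if on_hours >= min_on_hours:
                best["cost"], best["plan"] = cost, plan
            return
        if on_hours + (n - t) < min_on_hours:         # bound: comfort unreachable
            return
        dfs(t + 1, on_hours + 1, cost + prices[t], plan + [1])   # branch: ON
        dfs(t + 1, on_hours, cost, plan + [0])                   # branch: OFF

    dfs(0, 0, 0.0, [])
    return best

print(schedule_heater(prices=[3.1, 2.4, 1.8, 2.0, 4.5, 3.7], min_on_hours=3))
```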

Keywords: load scheduling, usage pattern, user’s comfort, copula function, branch and bound, electric heater

Procedia PDF Downloads 570
3146 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise. Least cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task as it involves the design of various components, like open or closed conduits, storage units, pumps etc. In this paper, a methodology for least cost design of storm water drainage systems is proposed. The methodology proposed in this study consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA’s storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which takes care of minimizing the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost effective design of open conduit based storm water networks.
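
The simulation-optimization coupling can be sketched as a GA whose fitness call would, in the real system, invoke SWMM. In the sketch below the SWMM evaluation is replaced by a crude placeholder capacity check, so only the GA scaffolding is meaningful; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(areas, design_flow):
    """Cost = total conduit area, plus a penalty if capacity falls short of
    the design runoff. In the paper this evaluation is performed by SWMM;
    the capacity formula below is only a placeholder."""
    capacity = 1.2 * np.sqrt(areas).sum()
    penalty = 1e3 * max(0.0, design_flow - capacity)
    return areas.sum() + penalty

def ga(n_conduits=5, design_flow=6.0, pop=40, gens=100, pm=0.2):
    P = rng.uniform(0.1, 5.0, size=(pop, n_conduits))      # candidate areas
    for _ in range(gens):
        f = np.array([fitness(ind, design_flow) for ind in P])
        parents = P[np.argsort(f)][: pop // 2]             # truncation selection
        kids = parents.copy()
        cut = rng.integers(1, n_conduits, size=len(kids))
        for i in range(0, len(kids) - 1, 2):               # one-point crossover
            c = cut[i]
            kids[i, c:], kids[i + 1, c:] = kids[i + 1, c:].copy(), kids[i, c:].copy()
        mutate = rng.random(kids.shape) < pm               # Gaussian mutation
        kids[mutate] += rng.normal(0, 0.2, mutate.sum())
        P = np.vstack([parents, np.clip(kids, 0.05, None)])
    f = np.array([fitness(ind, design_flow) for ind in P])
    return P[f.argmin()], f.min()

best_areas, best_cost = ga()
print(best_areas.round(2), round(best_cost, 2))
```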

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 231
3145 SOM Map vs Hopfield Neural Network: A Comparative Study in Microscopic Evacuation Application

Authors: Zouhour Neji Ben Salem

Abstract:

Microscopic evacuation focuses on evacuee behavior and the way evacuees search for a safe place in an egress situation. In recent years, several models have handled the microscopic evacuation problem. Among them, we have proposed Artificial Neural Networks (ANNs) as an alternative to mathematical models that can deal with such a problem. In this paper, we present two ANN models: a SOM map and a Hopfield network used to predict evacuee behavior in a disaster situation. These models are tested on a real case, the evacuation of the second floor of a Tunisian children's hospital in case of fire. The two models are studied and compared in order to evaluate their performance.
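
For readers unfamiliar with the first of the two models, a self-organizing map learns a topology-preserving grid of prototypes by pulling the best-matching unit and its neighbors toward each input. A minimal sketch (grid size, inputs, and annealing schedules are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0):
    """Minimal self-organizing map: each input vector (e.g., an evacuee
    position) is mapped to its best-matching unit (BMU), and nearby units
    are pulled toward it with a shrinking neighborhood."""
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (h, w))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        nbh = np.exp(-dist2 / (2 * sigma**2))[..., None]   # neighborhood kernel
        W += lr * nbh * (x - W)
    return W

W = train_som(rng.random((500, 2)))   # toy 2-D inputs
```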

Keywords: artificial neural networks, self-organization map, hopfield network, microscopic evacuation, fire building evacuation

Procedia PDF Downloads 383
3144 A Descriptive Approach towards the Understanding of the Central American Coffee Business Demography Phenomena

Authors: Jesus David Argueta Moreno, Justa Rufina Martel, Edith Gabriela Carrasco

Abstract:

The search by small, medium, and large Central American coffee corporations for excellence, sustainability, and continuous improvement triggers, on a still unknown scale, local expansion, crusading, and franchising strategies towards more suitable commercial opportunities, where the dynamics of Central American business displacement can be explained through market permeability traits. Considering the above, the present study aims to evaluate the franchising potential offered by the Central American coffee business scenario, in order to explain the dynamics of the business demography phenomenon and its relevance to the Central American competitiveness landscape.

Keywords: competitiveness, franchising, business demography, Central American Coffee

Procedia PDF Downloads 599
3143 Keys of Success in Regional Entrepreneurial Media Collaboration Linked With a New Concept of Citizenship

Authors: Rianne Voet

Abstract:

This paper uses a literature review to search for keys of success for entrepreneurial regional media collaborations in the Netherlands and elsewhere. It specifies keys concerning general aspects: a digital-first strategy, innovation, a particular journalistic mission and a new role for the public. It outlines keys concerning practicalities: competencies, revenue model, legal structure, communication structure and organization structure. The paper elaborates on a new public function and a new concept of citizenship which, according to several authors in the literature, are required in order to be successful. Finally, it offers a model of keys for success in regional entrepreneurial media collaboration.

Keywords: media collaboration, factors of success, keys of success, regional media cooperation

Procedia PDF Downloads 254
3142 Frequent Pattern Mining for Digenic Human Traits

Authors: Atsuko Okazaki, Jurg Ott

Abstract:

Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants, that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology. There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. MDR and our algorithm share some properties, but they are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
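
The pattern-confidence test described above can be illustrated with a brute-force scan over genotype pairs. This is a simplified stand-in for the fpgrowth-based framework (fpgrowth avoids the exhaustive enumeration); the support threshold and permutation count are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def digenic_scan(genotypes, y, min_support=10, n_perm=1000):
    """Enumerate genotype pairs (one genotype from each of two variants) and
    test whether P(case | pattern) exceeds the baseline case rate, with a
    permutation p-value.

    genotypes: (n_subjects, n_variants) array with entries 0/1/2,
    y: 1 for cases, 0 for controls.
    """
    n, m = genotypes.shape
    results = []
    for v1, v2 in itertools.combinations(range(m), 2):
        for g1, g2 in itertools.product((0, 1, 2), repeat=2):
            match = (genotypes[:, v1] == g1) & (genotypes[:, v2] == g2)
            if match.sum() < min_support:
                continue
            conf = y[match].mean()                    # P(case | pattern)
            # permutation test: shuffle labels, recompute the confidence
            perm = np.array([rng.permutation(y)[match].mean() for _ in range(n_perm)])
            p = (perm >= conf).mean()
            results.append(((v1, g1, v2, g2), conf, p))
    return sorted(results, key=lambda r: r[2])        # most significant first
```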

Keywords: digenic traits, DNA variants, epistasis, statistical genetics

Procedia PDF Downloads 108
3141 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom programmed algorithm based on the virtual crack closure technique. The influence of the finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. The results calculated by this VCCT-based algorithm were found to be in good agreement with the results obtained with the mentioned analytical expressions.
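
The core 2-D VCCT computation reduces to the standard relations G_I = F_y Δv / (2 Δa t) and K_I = sqrt(E' G_I), with E' = E for plane stress and E/(1 − ν²) for plane strain. A minimal sketch (variable names and the example numbers are ours, not the paper's):

```python
import math

def vcct_mode_i_sif(F_y, delta_v, delta_a, thickness, E, nu, plane_strain=True):
    """Mode-I stress intensity factor from 2-D VCCT quantities.

    F_y: nodal force at the crack tip normal to the crack plane,
    delta_v: relative crack-face opening one element behind the tip,
    delta_a: crack-tip element length (the virtual crack extension).
    """
    G_I = F_y * delta_v / (2.0 * delta_a * thickness)   # energy release rate
    E_eff = E / (1.0 - nu**2) if plane_strain else E    # effective modulus
    return math.sqrt(G_I * E_eff)                       # K_I = sqrt(E' * G_I)

# illustrative SI-unit numbers for a steel-like specimen
print(vcct_mode_i_sif(F_y=1.2e3, delta_v=4.0e-6, delta_a=1.0e-3,
                      thickness=5.0e-3, E=210e9, nu=0.3))
```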

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 289
3140 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facilities location, consumers’ allocation, and facilities configuration to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. In the first place, a mixed integer non-linear programming (MINP) mathematical model is developed. Then, the system’s behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, the integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG. It is a genetic algorithm on the basis of granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. Also, the effect of genetic operators on the optimal/near optimal solution obtained by the simulation model will be discussed in Part III.

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 303
3139 Detecting Potential Biomarkers for Ulcerative Colitis Using Hybrid Feature Selection

Authors: Mustafa Alshawaqfeh, Bilal Wajidy, Echin Serpedin, Jan Suchodolski

Abstract:

Inflammatory Bowel Disease (IBD) is a disease of the colon with characteristic inflammation. Clinically, IBD is detected using laboratory tests (blood and stool), radiology tests (imaging using CT, MRI), capsule endoscopy and endoscopy. There are two variants of IBD, referred to as Ulcerative Colitis (UC) and Crohn’s disease. This study employs a hybrid feature selection method that combines a correlation-based variable ranking approach with exhaustive search wrapper methods in order to find potential biomarkers for UC. The proposed biomarkers presented accurate discriminatory power, thereby identifying themselves as possible ingredients for UC therapeutics.
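
The two-stage filter-plus-wrapper scheme can be sketched as follows. This is a generic illustration of the technique named in the abstract: the classifier, fold count, and subset sizes are assumptions, not the study's settings.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def hybrid_select(X, y, n_top=8, subset_size=3):
    """Hybrid feature selection: (1) filter stage ranks features by absolute
    Pearson correlation with the class label; (2) wrapper stage exhaustively
    searches all small subsets of the top-ranked features using a
    cross-validated classifier."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    top = np.argsort(corr)[::-1][:n_top]                      # filter stage
    best_score, best_subset = -np.inf, None
    for subset in itertools.combinations(top, subset_size):   # wrapper stage
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
    return best_subset, best_score
```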

Keywords: ulcerative colitis, biomarker detection, feature selection, inflammatory bowel disease (IBD)

Procedia PDF Downloads 385
3138 Pharmacological Active Compounds of Sponges and a Gorgonian Coral from the Andaman Sea, Thailand

Authors: Patchara Pedpradab, Kietisak Yoksang, Kosin Pattanamanee

Abstract:

In our ongoing search for pharmacologically significant compounds from marine organisms, we investigated the active constituents of two sponges (Xestospongia sp., Halichondria sp.) and a gorgonian coral (Juncella sp.) from the Andaman Sea, Thailand. Several compounds were isolated from these marine organisms. The marine sponge Xestospongia sp. contained an isoquinoline compound, namely aureol, and a cytotoxic thiophene sesterterpene, while Halichondria sp. produced C-28 sterols. The white gorgonian coral, Juncella sp., contained the anti-tuberculosis diterpenes junceellin and praelolide. All of the isolated compounds were analyzed extensively by spectroscopic methods.

Keywords: Xestospongia sp., Halichondria sp., gorgonian, Juncella sp., biological activity

Procedia PDF Downloads 352
3137 The Relationship of Depression Risk and Gestational Diabetes Mellitus: A Systematic Review and Meta-Analysis

Authors: Yu Chen Su

Abstract:

Introduction: Gestational diabetes mellitus (GDM) refers to impaired glucose tolerance in pregnant women, impacting both the mother and the newborn with short- and long-term effects. It increases the risks of preeclampsia, hypertension, type 2 diabetes, cesarean section, and preterm birth. GDM is associated with fetal macrosomia, shoulder dystocia, neonatal hypoglycemia, and future type 2 diabetes risk. A study of 6,421 pregnant women found that 12% experienced high stress, linked to maladaptive coping and depressive emotions. Women with high-risk pregnancies may experience greater stress and depression. Research suggests GDM increases the prevalence of depression. A study of 632 Hispanic women with GDM showed severe stress and depression tendencies. In another study involving 95 women with GDM, 33.4% exhibited depression symptoms. A further study compared 180 women with GDM to 186 with normal glucose levels, revealing higher depression levels in the women with GDM, who were 1.85 times more likely to receive antidepressants during pregnancy and 1.69 times more likely to experience postpartum depression. Maternal stress and depressive symptoms during pregnancy are significant factors. Early identification by healthcare professionals can greatly benefit women with GDM, their infants, and their families. Objectives: The purpose of this study was to investigate the association between gestational diabetes mellitus (GDM) and the risk of depression. Methods: This study reviewed and analyzed relevant literature on gestational diabetes mellitus (GDM) and depression in 6,876 patients. The literature search followed the PRISMA guidelines and included databases such as Embase, PubMed, MEDLINE, CINAHL, and the Cochrane Library. Prospective or retrospective studies with relevant risk ratios and estimates were included, using a random-effects model for the analysis of the depression risk correlation. Studies without depression data or relevant risks were excluded. The search period extended until October 2022. Results: A systematic review of 7 studies (6,876 participants) found a significant association (OR = 8.77, CI: 7.98-9.64, p < 0.05) between gestational diabetes mellitus (GDM) and a higher depression risk compared to healthy pregnant women. Conclusions: Pregnancy is a significant life transition involving physiological, psychological, and social changes. Gestational diabetes poses challenges to women's physical and mental well-being. Sensitive healthcare professionals identifying issues early can greatly benefit women, babies, and families.
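
The random-effects pooling named in the Methods is commonly the DerSimonian-Laird estimator, which can be computed from per-study odds ratios and 95% confidence intervals as below. This is a generic sketch using formulas standard in meta-analysis; the inputs are placeholders, not the study's actual data.

```python
import numpy as np

def dersimonian_laird(or_estimates, ci_lower, ci_upper):
    """Random-effects pooling of odds ratios from per-study 95% CIs.
    Returns the pooled OR and its 95% CI."""
    y = np.log(or_estimates)                         # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se**2                                  # fixed-effect weights
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()               # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w**2).sum() / w.sum()))
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    y_re = (w_star * y).sum() / w_star.sum()
    se_re = np.sqrt(1.0 / w_star.sum())
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# placeholder per-study odds ratios with 95% CI bounds
print(dersimonian_laird([2.1, 1.6, 3.0], [1.2, 0.9, 1.8], [3.7, 2.8, 5.0]))
```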

Keywords: gestational diabetes, depression, systematic review, meta-analysis

Procedia PDF Downloads 62
3136 Utilizing Artificial Intelligence to Predict Postoperative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2,261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2,261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1,628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1,002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1,104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1,084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is rare, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.
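
Given a screening threshold, the reported operating characteristics follow directly from the confusion counts; a small helper makes the definitions explicit (the counts below are toy values for illustration, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity and negative predictive value from confusion counts,
    the quantities reported for the AI-ECG screen."""
    sensitivity = tp / (tp + fn)    # fraction of POAF cases flagged by the screen
    npv = tn / (tn + fn)            # fraction of negative screens truly POAF-free
    return sensitivity, npv

# toy counts chosen only to illustrate the calculation
print(screening_metrics(tp=29, fp=310, tn=730, fn=15))
```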

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 113
3135 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields for biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, there are some disadvantages to biometrics based on EEG signals. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to all other points within the same cluster and to all data points in the closest cluster is determined. Thus silhouettes provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) It is possible to use fewer electrodes (3-4) for personal authentication. (2) There were differences between electrodes for personal authentication (p<0.01). (3) There is no significant difference in authentication performance among feature sets (except feature PE). Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
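
The k-means-plus-silhouette evaluation can be sketched directly with scikit-learn; the entropy feature extraction itself is not reproduced, and the toy data below only stands in for SE/FE/AE/PE feature vectors:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_clustering(features, n_subjects):
    """Cluster entropy feature vectors with k-means and score the result
    with the mean silhouette (in [-1, 1]; higher means tighter, better
    separated clusters)."""
    km = KMeans(n_clusters=n_subjects, n_init=10, random_state=0)
    labels = km.fit_predict(features)
    return silhouette_score(features, labels)

rng = np.random.default_rng(4)
# toy stand-in: 3 "subjects", 50 windows each, 4 entropy features per window
toy = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4)) for c in range(3)])
print(round(evaluate_clustering(toy, n_subjects=3), 3))
```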

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 268
3134 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the magnetocardiography (MCG) signal, which is buried in noise; this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve the minimization problem. However, the traditional TV regularization method tends to cause a staircase effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is mainly divided into three parts. First, a higher-order TV is applied to reduce the staircase effect, and the corresponding second-order derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
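
The baseline the paper improves on, first-order TV denoising solved by majorization-minimization, can be sketched as follows (a dense-matrix toy version for clarity; the paper's higher-order, peak-adaptive variant is not reproduced):

```python
import numpy as np

def tv_denoise_mm(y, lam, iters=50):
    """First-order total-variation denoising via majorization-minimization:
    min_x 0.5*||y - x||^2 + lam*||Dx||_1, with D the first-difference matrix.
    Each MM step solves (Lambda/lam + D D^T) z = D y and sets x = y - D^T z,
    where Lambda = diag(|Dx|) majorizes the l1 term at the current iterate."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)               # (n-1, n) first differences
    DDt = D @ D.T
    x = y.copy()
    for _ in range(iters):
        Lam = np.diag(np.abs(D @ x) + 1e-10)     # majorizer weights
        z = np.linalg.solve(Lam / lam + DDt, D @ y)
        x = y - D.T @ z
    return x

t = np.linspace(0, 1, 200)
clean = (t > 0.3).astype(float) - (t > 0.7)      # piecewise-constant test signal
noisy = clean + 0.15 * np.random.default_rng(5).normal(size=t.size)
denoised = tv_denoise_mm(noisy, lam=1.0)
```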

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 140
3133 Impact of Natural and Artificial Disasters, Lackadaisical and Semantic Approach in Risk Management, and Mitigation Implication for Sustainable Goals in Nigeria, from 2009 to 2022

Authors: Wisdom Robert Duruji, Moses Kanayochukwu Ifoh, Efeoghene Edward Esiemunobo

Abstract:

This study examines the impact of natural and artificial disasters, the lackadaisical and semantic approach to risk management, and the implications for mitigation of sustainable development goals in Nigeria from 2009 to 2022. The study utilizes a range of research methods to achieve its objectives, including literature review, website knowledge, Google searches, news media information, academic journals, fieldwork and on-site observations. These diverse methods allow for a comprehensive analysis of the impact and the implications being studied. The study finds that the paradigm shift of the Nigeria Emergency Management Agency (NEMA) from remediating natural disasters such as seismic events, flooding, and environmental pollution and degradation toward acting as a political and charity organization has turned risk-reduction strategies into embezzlement opportunities. This lackadaisical and semantic approach to natural disaster mitigation invariably replicates artificial disasters in Nigeria through the Boko Haram terrorist organization, conflicts between Fulani herdsmen and farmers, political violence, kidnapping for ransom, ethnic conflicts, religious dichotomy, insurgency, secession protagonists, unknown gunmen, and banditry. This study also finds that some Africans still engage in self-imposed slavery through human trafficking, nefariously stowing away to Europe through Libya, the Sahara desert and the Mediterranean Sea in search of job opportunities, due to the ineptitude in governance of their leaders; a perilous journey that has compounded artificial disasters in Nigeria. Artificial disaster fatalities in Nigeria increased from about 5,655 in 2009 to 114,318 in 2018, and to 157,643 in 2022. Financial and material losses of about $9.29 billion were incurred in Nigeria due to natural disasters, while about $70.59 billion accrued due to artificial disasters, from 2009 to 2018. Although disaster risk mitigation and politics can synergistically support sustainable development goals, they are different entities and need distinct separation in Nigeria, in both reality and perception. This study concludes that a referendum should be conducted in Nigeria to ascertain its current status as a nation. It is therefore recommended that Nigerian governments refine the nation's crude oil locally, to end the fuel subsidy scam, corruption and poverty in Nigeria.

Keywords: corruption, crude oil, environmental risk analysis, Nigeria, referendum, terrorism

Procedia PDF Downloads 27
3132 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

The estimating equation technique is an alternative to the widely used maximum likelihood methods, and it enables us to ease some of the complexity due to the complex characteristics of time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, the maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, in this study, modified estimating equations, which have received considerable attention from many researchers, are proposed under a semiparametric transformation model. The purpose of this article is to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models in order to estimate the effect of treatment, given possibly time-varying covariates, on the survival time. The consistency and asymptotic properties of the estimators are derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators for the proposed model is illustrated via simulation studies and the Stanford heart transplant real data example. To sum up the study, the bias for covariates is adjusted by estimating the density function of the truncation time variable. Then the effect of possibly time-varying covariates is evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time varying covariate

Procedia PDF Downloads 144
3131 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGUs) are widely used in advanced and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGUs to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads (gauge or vacuum pressure) in the hermetically sealed gas space requires additional attention in the design of the facades. The internal loads appear with variations of the altitude, meteorological pressure and gas temperature relative to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and space orientation, and its fixing on the facades, and varies with the climate conditions. An algorithm for the modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer, and free convection in the hermetically sealed gas space, treating the gas as compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of the 3D temperature and fluid flow fields, thermal performance and internal loads of IGUs in window systems are implemented.

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 260
3130 Q-Learning of Bee-Like Robots Through Obstacle Avoidance

Authors: Jawairia Rasheed

Abstract:

Modern robots are often used for search and rescue purposes. One of the key areas of interest in such cases is learning complex environments. A key methodology for robots in such cases is reinforcement learning. In reinforcement learning, robots learn the path to reach the goal while avoiding obstacles. Q-learning, one of the most notable advances in reinforcement learning, is used to make the robots learn the path. Robots learn by interacting with the environment to reach the goal. In this paper, a simulation model of bee-like robots is implemented in NetLogo. At the start the learning rate was low, and it increased with the passage of time. The bees successfully learned to reach the goal while avoiding obstacles through the Q-learning technique.
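
A generic sketch of the technique named in the abstract is tabular Q-learning on a grid with obstacles; the NetLogo bee model itself is not reproduced, and the reward scheme and hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def q_learning(grid, goal, episodes=2000, alpha=0.1, gamma=0.95, eps=0.2):
    """Tabular Q-learning on a grid whose cells marked 1 are obstacles.
    Rewards: +10 at the goal, -1 per step, -5 for bumping into an obstacle
    or wall (the agent then stays put)."""
    h, w = grid.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # up, down, left, right
    Q = np.zeros((h, w, 4))
    for _ in range(episodes):
        r, c = 0, 0                                    # start corner
        while (r, c) != goal:
            a = rng.integers(4) if rng.random() < eps else Q[r, c].argmax()
            nr, nc = r + moves[a][0], c + moves[a][1]
            if not (0 <= nr < h and 0 <= nc < w) or grid[nr, nc] == 1:
                reward, (nr, nc) = -5.0, (r, c)        # blocked: stay put
            elif (nr, nc) == goal:
                reward = 10.0
            else:
                reward = -1.0
            Q[r, c, a] += alpha * (reward + gamma * Q[nr, nc].max() - Q[r, c, a])
            r, c = nr, nc
    return Q

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1                                       # a wall of obstacles
Q = q_learning(grid, goal=(4, 4))
print(Q[0, 0].argmax())                                # best first move learned
```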

Keywords: reinforcement learning of bee-like robots for reaching the goal, reinforcement learning for randomly placed obstacles, obstacle avoidance through Q-learning, Q-learning for obstacle avoidance

Procedia PDF Downloads 85
3129 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and distinguishing several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic for establishing a large number of training data sets. Hence, we have comparatively implemented two different optimization models of DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm the better performance of DoA estimation compared with the LS-SVM algorithm. Consequently, we have comparatively evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).

Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, mean squared error (MSE)

Procedia PDF Downloads 82
3128 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered, for a process that consists of a single stage of manufacturing and that can handle different types of products, where changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups, and hence, to minimize the costs associated with product changeover, the planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups, or equivalently maximizing the capacity utilization between every set-up, or maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and generally different customers’ orders are assigned one of two priorities – “normal” or “priority” order. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
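
A flow formulation in the spirit of the MAN model can be sketched with a standard max-flow solver. The node and edge layout and all capacities below are ours, not the paper's: source → product-type nodes → set-up (machine-capacity) nodes → sink, with edge capacities bounding per-set-up processing quantities.

```python
import networkx as nx

G = nx.DiGraph()
demands = {"product_A": 60, "product_B": 40}           # customer orders
for product, qty in demands.items():
    G.add_edge("source", product, capacity=qty)
    for setup in ("setup_1", "setup_2"):
        G.add_edge(product, setup, capacity=35)        # per-set-up limit
G.add_edge("setup_1", "sink", capacity=50)             # machine capacity
G.add_edge("setup_2", "sink", capacity=50)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print(flow_value)          # total quantity processed = capacity utilization
print(flow["product_A"])   # allocation of product A across set-ups
```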

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 391
3127 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging operations research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on a deadline and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: the assignment of tasks should be done based on individual skill levels, which may vary for different tasks. Another constraint is that the scheduled tasks should be evenly distributed in terms of the number of working hours, which adds further complexity to this problem. The proposed greedy approach to solving the assignment and scheduling problem first assigns tasks based on management priority and then by the closest deadline. This is followed by an iterative selection of the available resource with the fewest allocated total working hours for a task, i.e., finding the locally optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. A comparative study of the proposed approach is also done against manual task allocation, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and tasks with a horizon of more than 2 months that were initially waiting in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.
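
The greedy rule described above can be sketched directly: order tasks by priority, then deadline, and give each to the skilled resource with the least allocated hours. Field names and the example data are assumptions for illustration, not the team's schema.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    skills: set
    allocated_hours: float = 0.0

def greedy_assign(tasks, resources):
    """Greedy assignment: tasks sorted by (priority, deadline), each given
    to the eligible resource with the fewest allocated hours (load balance).
    tasks: dicts with 'name', 'priority' (lower = more urgent), 'deadline',
    'hours', 'skill'."""
    plan = []
    for task in sorted(tasks, key=lambda t: (t["priority"], t["deadline"])):
        eligible = [r for r in resources if task["skill"] in r.skills]
        if not eligible:
            plan.append((task["name"], None))          # unassignable task
            continue
        chosen = min(eligible, key=lambda r: r.allocated_hours)
        chosen.allocated_hours += task["hours"]
        plan.append((task["name"], chosen.name))
    return plan

resources = [Resource("ana", {"costing"}), Resource("raj", {"costing", "audit"})]
tasks = [{"name": "t1", "priority": 1, "deadline": 5, "hours": 8, "skill": "costing"},
         {"name": "t2", "priority": 2, "deadline": 3, "hours": 4, "skill": "audit"}]
print(greedy_assign(tasks, resources))
```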

Keywords: assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling

Procedia PDF Downloads 134
3126 An Analytical Approach of Computational Complexity for the Method of Multifluid Modelling

Authors: A. K. Borah, A. K. Singh

Abstract:

In this paper, we deal with the building blocks of the computer simulation of multiphase flows. The whole simulation procedure can be viewed as two super-procedures: the implementation of the VOF method and the solution of the Navier-Stokes equations. Moreover, a sequential code for a Navier-Stokes solver has been studied.
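
The keywords point at the Krylov machinery such a solver relies on: Bi-CGSTAB accelerated by an incomplete-LU preconditioner. A minimal SciPy sketch on a toy system standing in for a pressure-correction step of a SIMPLE-type solver (the matrix and sizes are illustrative, not the paper's):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

n = 200
# toy 1-D Poisson-like system as a stand-in for the pressure equation
A = diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spilu(A, drop_tol=1e-4)                      # incomplete LU factors
M = LinearOperator((n, n), matvec=ilu.solve)       # preconditioner M ~ A^-1
x, info = bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))             # info == 0 => converged
```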

Keywords: bi-conjugate gradient stabilized (Bi-CGSTAB), ILUT preconditioner, Krylov subspace, multifluid flows, SIMPLE algorithm

Procedia PDF Downloads 513