Search results for: clinical trial optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7525

5245 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented that integrates energy-efficient relay-augmented techniques for correlated data routing with the goal of maximizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization and sensor network energy consumption. A unique routing metric is developed that enables throughput maximization while minimizing energy consumption by exploiting data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing, correlation-aware routing under energy limitations. By designing an algorithm that blends energy-aware route selection strategies with best response dynamics, this framework provides a locally optimal solution. The proposed technique considerably raises the bottleneck throughput of each source in the network while reducing energy consumption, by choosing routes that strike a balance between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The results confirm the anticipated throughput gains and demonstrate the significant reduction in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique.
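The best-response dynamics at the heart of such a framework can be illustrated with a toy route-selection game; the routes, energy costs, and congestion model below are invented for illustration and are not the paper's formulation:

```python
# Toy best-response dynamics for route selection (illustration only;
# routes, energy costs, and the congestion model are invented).
ENERGY = {"A": 1.0, "B": 2.0}  # fixed per-route energy cost

def cost(route, others):
    """Cost of `route` for one player: congestion (number of players
    sharing the route, including this one) plus the route's energy cost."""
    congestion = 1 + sum(1 for r in others if r == route)
    return congestion + ENERGY[route]

def best_response_dynamics(choices, max_rounds=20):
    """Let each player switch to its cheapest route until no one moves."""
    players = list(choices)
    for _ in range(max_rounds):
        changed = False
        for p in players:
            others = [choices[q] for q in players if q != p]
            best = min(ENERGY, key=lambda r: cost(r, others))
            if best != choices[p]:
                choices[p] = best
                changed = True
        if not changed:
            break  # fixed point: a pure Nash equilibrium of the toy game
    return choices

eq = best_response_dynamics({"s1": "B", "s2": "B"})
```

Starting from both sources on the costlier route B, repeated best responses settle on a profile in which neither source can lower its cost (congestion plus energy) by switching routes.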

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 82
5244 Reversible Cerebral Vasoconstriction Syndrome at Emergency Department

Authors: Taerim Kim, Shin Ahn, Chang Hwan Sohn, Dong Woo Seo, Won Young Kim

Abstract:

Objective: Reversible cerebral vasoconstriction syndrome (RCVS) remains an underrecognized cause of thunderclap headache; for emergency physicians, it shares a similar history of 'worst-ever' headache with subarachnoid hemorrhage (SAH). This study evaluated the clinical manifestations, radiological features, and outcomes of patients with RCVS so that physicians can maintain a high index of suspicion and detect RCVS in more patients with thunderclap headache before life-threatening complications occur. Methods: The electronic medical records of 18 patients meeting the diagnostic criteria of RCVS at the emergency department (ED) between January 2013 and December 2014 were retrospectively reviewed. Results: The mean age was 50.7 years, and 80% were women. Patients with RCVS visited an average of 4.7 physicians before receiving an accurate diagnosis, and the mean duration of symptoms until diagnosis was 9.3 days. All patients except one experienced severe headache, with pain intensity of 8 to 10 on a numerical rating scale (NRS). Nausea was an associated symptom in 44% of patients, and 66% experienced worsening of the headache while gagging, leaning forward, defecating, urinating, or having sex. The most frequently affected vessels were the middle cerebral arteries, demonstrating the characteristic diffuse 'string of beads' appearance. Four patients had SAH as a complication. Conclusion: Patients with RCVS have a unique set of clinical and imaging features. Emergency physicians should maintain a high index of suspicion to detect RCVS in more patients with thunderclap headache before life-threatening complications occur.

Keywords: headache, thunderclap, subarachnoid haemorrhage, stroke

Procedia PDF Downloads 427
5243 Clinical Impact of Delirium and Antipsychotic Therapy: 10-Year Experience from a Referral Coronary Care Unit

Authors: Niyada Naksuk, Thoetchai Peeraphatdit, Vitaly Herasevich, Peter A. Brady, Suraj Kapa, Samuel J. Asirvatham

Abstract:

Introduction: Little is known about the safety of antipsychotic therapy for delirium in the coronary care unit (CCU). Our aim was to examine the effect of delirium and antipsychotic therapy among CCU patients. Methods: Pre-study Confusion Assessment Method-Intensive Care Unit (CAM-ICU) criteria were implemented in screening consecutive patients admitted to Mayo Clinic, Rochester, USA from 2004 through 2013. Death status was prospectively ascertained. Results: Of 11,079 study patients, the incidence of delirium was 8.3% (n=925). Delirium was associated with an increased risk of in-hospital mortality (adjusted OR 1.49; 95% CI, 1.08-2.08; P=.02) and one-year mortality among patients who survived CCU admission (adjusted HR 1.46; 95% CI, 1.12-1.87; P=.005). A total of 792 doses of haloperidol (5 [IQR 3-10] mg/day) or quetiapine (25 [IQR 13-50] mg/day) were given to 244 patients with delirium. The clinical characteristics of patients with delirium who did and did not receive antipsychotic therapy were not different (baseline corrected QT [QTc] interval 460±61 ms vs. 457±58 ms, respectively; P=0.57). Compared with baseline, mean QTc intervals after the first and third doses of the antipsychotics were not significantly prolonged in the haloperidol group (448±56, 458±57, and 450±50 ms, respectively) or the quetiapine group (459±54, 467±68, and 462±46 ms, respectively) (P > 0.05 for all). Additionally, in-hospital mortality (adjusted OR 0.67; 95% CI, 0.42-1.04; P=.07), ventricular arrhythmia (adjusted OR 0.87; 95% CI, 0.17-3.62; P=.85), and one-year mortality among hospital survivors (adjusted HR 0.86; 95% CI, 0.62-1.17; P=0.34) were not different in patients with delirium, irrespective of whether or not they received antipsychotics. Conclusions: In patients admitted to the CCU, delirium was associated with an increase in both in-hospital and one-year mortality. Low doses of haloperidol and quetiapine appeared to be safe, without an increase in the risk of sudden cardiac death, in-hospital mortality, or one-year mortality in carefully monitored patients.

Keywords: arrhythmias, haloperidol, mortality, qtc interval, quetiapine

Procedia PDF Downloads 372
5242 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and use resources inefficiently. In particular, different approaches may be required to solve the complex and global engineering problems frequently encountered in real life. The bigger and more complex a problem, the harder it is to solve; such problems are called NP-hard (nondeterministic polynomial time hard) in the literature. The main reasons for recommending metaheuristic algorithms for various problems are their use of simple concepts, simple mathematical equations and structures, and derivative-free mechanisms, their avoidance of local optima, and their fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications, and thanks to these features they can easily be embedded even in hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel architectures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study focuses on a new metaheuristic method merged with a chaotic approach; it is based on chaos theory and helps the underlying algorithm to improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee groups (attacker, barrier, chaser, and driver) and proposes a suitable mathematical model for them based on the diverse intelligence and sexual motivation of chimpanzees. However, ChOA struggles with convergence rate and with escaping local optimum traps when solving high-dimensional problems. Although ChOA and some of its variants use strategies to overcome these problems, they have been observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm, called Ex-ChOA, hybrid models are proposed for the position updates of the search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy of ChOA and solves its slow convergence problem; 2) it proposes new hybrid movement strategy models for the position updates of the search agents; 3) it achieves success in solving global, complex, and constrained problems; and 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions, as well as 2 classical and constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, the Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison, since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
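The general idea of driving a metaheuristic with a chaotic map, a common ingredient of chaos-enhanced variants of algorithms like ChOA, can be sketched as below. This is a generic greedy population search whose step sizes come from the logistic map, not the actual Ex-ChOA update equations:

```python
import random

def logistic_map(x, r=4.0):
    """Chaotic logistic map; at r = 4 it is a standard chaotic driver."""
    return r * x * (1.0 - x)

def sphere(pos):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in pos)

def chaotic_search(dim=5, agents=10, iters=200, seed=1):
    """Greedy population search with chaos-driven step sizes (a sketch of
    the chaotic-metaheuristic idea, NOT the paper's Ex-ChOA)."""
    random.seed(seed)
    chaos = 0.7  # chaotic state in (0, 1); avoid the fixed points 0 and 0.75
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(agents)]
    best = min(pop, key=sphere)
    history = [sphere(best)]
    for t in range(iters):
        shrink = 1.0 - t / iters                 # exploration decays over time
        for i, pos in enumerate(pop):
            chaos = logistic_map(chaos)
            step = shrink * (2.0 * chaos - 1.0)  # step in [-shrink, +shrink]
            cand = [x + step * (b - x) for x, b in zip(pos, best)]
            if sphere(cand) < sphere(pos):       # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=sphere)
        history.append(sphere(best))
    return best, history

best, history = chaotic_search()
```

Because the incumbent best is always retained, the best-so-far objective value is monotonically non-increasing, which is the minimal correctness property of such greedy schemes.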

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 77
5241 Research on Public Space Optimization Strategies for Existing Settlements Based on Intergenerational Friendliness

Authors: Huanhuan Qiang, Sijia Jin

Abstract:

Population aging has become a global trend, and China has entered an aging society, implementing an active aging system focused on home- and community-based care. However, most urban communities where elderly people live face issues such as monotonous planning, unappealing landscapes, and inadequate aging infrastructure, which do not meet the requirements of active aging. Intergenerational friendliness and mutual assistance are key components of China's active aging policy framework; residential development should therefore prioritize enhancing intergenerational friendliness. Residential and public spaces are central to community life and well-being, offering new and challenging venues for improving relationships among residents of different ages, and they are crucial for developing intergenerational communities of diverse generations and non-blood relationships. This paper takes the Maigaoqiao community in Nanjing, China, as a case study, examining intergenerational interactions in public spaces. Based on Maslow's hierarchy of needs and using time-geography analysis, it identifies the spatiotemporal behavior characteristics of intergenerational groups in outdoor activities. It then constructs an intergenerational-friendliness evaluation system and an IPA quadrant model for public spaces in residential areas. Lastly, it explores optimization strategies for public spaces to promote intergenerationally friendly interactions, focusing on five aspects: accessibility, safety, functionality, sense of belonging, and interactivity.

Keywords: intergenerational friendliness, demand theory, spatiotemporal behavior, IPA analysis, existing residential public space

Procedia PDF Downloads 4
5240 Structural Damage Detection via Incomplete Model Data Using Output Data Only

Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan

Abstract:

Structural failure is caused mainly by damage that occurs in structures, and many researchers focus on obtaining efficient tools to detect such damage at an early stage. In the past decades, damage detection based on variations in the dynamic characteristics or response of structures has received considerable attention in the literature. This study presents a new damage identification technique that detects the damage location in an incompletely modeled structural system using output data only. The method indicates the damage from free-vibration test data using a "Two Points Condensation (TPC) technique". This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data, and are compared with the original (undamaged) stiffness matrices. High percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element; two cases are considered, and the method detects the damage and determines its location accurately in both. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data, and its efficiency shows that the technique can also be used for large structures.
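The damage-localization step, comparing current and original stiffness matrices and flagging large percentage changes in their coefficients, can be sketched on an invented 3-DOF toy system (the matrices below are illustrative, not the beam data from the experiment):

```python
# Sketch of stiffness-change damage localization (toy 3-DOF matrices,
# invented for illustration; not the experimental beam data).

def percent_change(k_ref, k_cur):
    """Elementwise percentage change between two stiffness matrices.
    Zero reference entries are skipped (reported as 0.0) in this sketch."""
    n = len(k_ref)
    return [[100.0 * abs(k_cur[i][j] - k_ref[i][j]) / abs(k_ref[i][j])
             if k_ref[i][j] != 0 else 0.0
             for j in range(n)] for i in range(n)]

def damage_location(k_ref, k_cur, threshold=10.0):
    """Return the (i, j) entries whose stiffness changed by more than
    `threshold` percent, i.e. the coefficients pointing at the damage."""
    change = percent_change(k_ref, k_cur)
    n = len(k_ref)
    return [(i, j) for i in range(n) for j in range(n)
            if change[i][j] > threshold]

K_undamaged = [[ 2000, -1000,     0],
               [-1000,  2000, -1000],
               [    0, -1000,  1000]]
# damage reduces the stiffness of the element coupling DOFs 1 and 2
K_damaged   = [[ 2000, -1000,     0],
               [-1000,  1700,  -700],
               [    0,  -700,   700]]

flagged = damage_location(K_undamaged, K_damaged)
```

The flagged entries cluster around the coefficients coupling DOFs 1 and 2, which is exactly the localization behavior the abstract describes.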

Keywords: damage detection, optimization, signals processing, structural health monitoring, two points–condensation

Procedia PDF Downloads 365
5239 First Order Moment Bounds on DMRL and IMRL Classes of Life Distributions

Authors: Debasis Sengupta, Sudipta Das

Abstract:

The class of life distributions with decreasing mean residual life (DMRL) is well known in the field of reliability modeling. It contains the IFR class of distributions and is contained in the NBUE class. While upper and lower bounds on the reliability function of aging classes such as IFR, IFRA, NBU, NBUE, and HNBUE have long been discussed in the literature, no analogous result is available for the DMRL class. We obtain upper and lower bounds for the reliability function of the DMRL class in terms of the first-order finite moment. The lower bound is obtained by showing that, for any fixed time, minimizing the reliability function over the class of all DMRL distributions with a fixed mean is equivalent to minimizing it over a smaller class of distributions of a special form; optimization over this restricted set can be carried out algebraically. Likewise, maximizing the reliability function over the class of all DMRL distributions with a fixed mean turns out to be a parametric optimization problem over the class of DMRL distributions of a special form. The constructive proofs also establish that both the upper and lower bounds are sharp. Further, the DMRL upper bound coincides with the HNBUE upper bound, and the lower bound coincides with the IFR lower bound. We also prove a pair of sharp upper and lower bounds for the reliability function when the distribution has increasing mean residual life (IMRL) with a fixed mean; this result is proved in a similar way. These inequalities fill a long-standing void in the literature of life distribution modeling.

Keywords: DMRL, IMRL, reliability bounds, hazard functions

Procedia PDF Downloads 397
5238 Role of Vitamin-D in Reducing Need for Supplemental Oxygen Among COVID-19 Patients

Authors: Anita Bajpai, Sarah Duan, Ashlee Erskine, Shehzein Khan, Raymond Kramer

Abstract:

Introduction: This research explores the beneficial effects, if any, of Vitamin-D in reducing the need for supplemental oxygen among hospitalized COVID-19 patients. Two questions are investigated: Q1) does having a healthy level of baseline Vitamin-D 25-OH (≥ 30 ng/ml) help, and Q2) does administering Vitamin-D therapy after the fact, during inpatient hospitalization, help? Methods/Study Design: This is a comprehensive, retrospective, observational study of all inpatients at RUHS from March through December 2020 who tested positive for COVID-19 based on real-time reverse transcriptase-polymerase chain reaction assay of nasal and pharyngeal swabs and rapid assay antigen test. To address Q1, we looked at all N1 = 182 patients whose baseline plasma Vitamin-D 25-OH was known and who needed supplemental oxygen. Of these, 121 patients had a healthy Vitamin-D level of ≥ 30 ng/ml, while the remaining 61 patients had a low or borderline (≤ 29.9 ng/ml) level. Similarly, for Q2, we looked at a total of N2 = 893 patients who were given supplemental oxygen, of which 713 were not given Vitamin-D and 180 were given Vitamin-D therapy. The numerical value of the maximum oxygen flow rate (dependent variable) administered was recorded for each patient, and the mean values and associated standard deviations for each group were calculated. These two sets of independent data served as the basis for independent two-sample t-test statistical analysis. To be accommodative of any reasonable benefit of Vitamin-D, a p-value of 0.10 (α < 10%) was set as the cutoff for statistical significance. Results: Given the large sample sizes, the calculated statistical power for both studies exceeded the customary norm of 80% or better (β < 0.2). For Q1, the mean maximum oxygen flow rate for the group with a healthy baseline level of Vitamin-D was 8.6 L/min vs. 12.6 L/min for those with low or borderline levels, yielding a p-value of 0.07 (p < 0.10), with the conclusion that those with a healthy level of baseline Vitamin-D needed statistically significantly lower levels of supplemental oxygen. For Q2, the mean maximum oxygen flow rate for those not administered Vitamin-D was 12.5 L/min vs. 12.8 L/min for those given Vitamin-D, yielding a p-value of 0.87 (p > 0.10). We therefore concluded that there was no statistically significant difference in the use of oxygen therapy between those who were or were not administered Vitamin-D after the fact in the hospital. Discussion/Conclusion: We found that patients who had healthy levels of Vitamin-D at baseline needed statistically significantly lower levels of supplemental oxygen. Vitamin-D is well documented, including in a recent article in the Lancet, for its anti-inflammatory role as an adjuvant in the regulation of cytokines and immune cells. Interestingly, we found no statistically significant advantage in giving Vitamin-D to hospitalized patients; it may be a case of "too little, too late". A randomized clinical trial reported in JAMA also did not find any reduction in hospital stay for patients given Vitamin-D. Such conclusions come with the caveat that any delayed marginal benefits may not have materialized promptly in the presence of a significant inflammatory condition. Since Vitamin-D is a low-cost, low-risk option, it may still be useful on an inpatient basis until more definitive findings are established.
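The independent two-sample comparison can be sketched from summary statistics. The means and group sizes below are from the abstract's Q2 comparison, but the standard deviations are invented (the abstract does not report them), and the normal approximation to the t distribution is used, which is reasonable at these sample sizes:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples from summary stats."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

def two_sided_p_normal(t):
    """Two-sided p-value via the normal approximation (adequate for the
    large group sizes reported in the study)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

# Q2-style comparison: 713 patients without vs. 180 with inpatient
# Vitamin-D; means from the abstract, SDs INVENTED for illustration.
t = welch_t(12.5, 9.0, 713, 12.8, 9.5, 180)
p = two_sided_p_normal(t)
```

With these hypothetical spreads, the small 0.3 L/min difference yields a large p-value, in line with the abstract's non-significant Q2 result.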

Keywords: COVID-19, vitamin-D, supplemental oxygen, vitamin-D in primary care

Procedia PDF Downloads 153
5237 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed, and CS-based algorithms have been found to achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a solution strategy inspired by natural selection, has been applied to sparse representation problems in recent years and provides significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we formulate a multi-objective optimization problem by splitting the Compressive Sensing objective into its norm minimization and reconstruction loss minimization parts. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is obtained with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
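The two-stage idea, extracting a Pareto front of non-dominated (norm, loss) solutions and then picking one via MCDM, can be sketched with hypothetical candidate pairs and a deliberately simple weighted-sum stand-in for the MCDM step (the paper uses several MCDM methods):

```python
# Pareto filtering plus a simple MCDM pick (hypothetical candidates;
# each point is (sparsity norm, reconstruction loss), both minimized).

def dominates(a, b):
    """a dominates b if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def weighted_sum_choice(front, w_norm=0.5, w_loss=0.5):
    """Simple weighted-sum stand-in for the MCDM stage."""
    return min(front, key=lambda p: w_norm * p[0] + w_loss * p[1])

candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 2.0), (3.5, 2.5), (6.0, 1.5)]
front = pareto_front(candidates)
best = weighted_sum_choice(front)
```

Here (3.5, 2.5) is dominated by (3.0, 2.0) and is discarded; the weighted sum then selects one compromise point from the remaining front.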

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 147
5236 Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

Authors: Derjew Ayele Ejigu, Houde Song, Xiaojing Liu

Abstract:

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using the Bode diagram. The simulation results indicate that the controller shows satisfactory performance in controlling and tracking the load power effectively and smoothly compared to the PSO-PID control technique. This study will benefit the design of supervisory controllers for control applications in nuclear engineering research.
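The four performance indices named above (IAE, ITAE, ISE, ITSE) can be computed from a sampled tracking-error signal with a simple rectangle-rule integral; the error samples below are hypothetical:

```python
def control_indices(errors, dt=0.1):
    """IAE, ITAE, ISE, and ITSE for a sampled error signal e(t),
    integrated with a simple rectangle rule (sample spacing dt)."""
    iae = itae = ise = itse = 0.0
    for k, e in enumerate(errors):
        t = k * dt
        iae += abs(e) * dt            # integral of |e|
        itae += t * abs(e) * dt       # time-weighted integral of |e|
        ise += e * e * dt             # integral of e^2
        itse += t * e * e * dt        # time-weighted integral of e^2
    return {"IAE": iae, "ITAE": itae, "ISE": ise, "ITSE": itse}

# a hypothetical decaying tracking error after a setpoint step
errors = [1.0, 0.5, 0.25, 0.0]
idx = control_indices(errors)
```

The time-weighted indices (ITAE, ITSE) penalize errors that persist late in the response, which is why they are popular for judging settling behavior.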

Keywords: machine learning, neural network, pressurized water reactor, supervisory controller

Procedia PDF Downloads 156
5235 Assessment of the Role of Plasmids in Multidrug Resistance in Extended Spectrum β-Lactamase Producing Escherichia Coli Stool Isolates from Diarrhoeal Patients in Kano Metropolis, Nigeria

Authors: Abdullahi Musa, Yakubu Kukure Enebe Ibrahim, Adeshina Gujumbola

Abstract:

The emergence of multidrug resistance in clinical Escherichia coli has been associated with plasmid-mediated genes. DNA transfer among bacteria is critical to the dissemination of resistance, and plasmids have proved to be ideal vehicles for disseminating resistance genes; plasmids coding for antibiotic resistance have long been recognized by researchers globally. The study aimed to determine the antibiotic susceptibility pattern of ESBL-producing E. coli isolates presumed to be multidrug resistant, using the disc diffusion method. The results showed that the majority of the multidrug resistance among clinical isolates of ESBL E. coli resulted from the acquisition of plasmids carrying antibiotic-resistance genes. Production of these ESBL enzymes, which are normally plasmid-encoded and transferred from one bacterium to another, has greatly contributed to the rapid spread of antibiotic resistance among E. coli isolates, which leads to a high economic burden, increased morbidity and mortality, complications in therapy, and limited treatment options. To curtail these problems, it is important to check the rate at which antibiotics are sold over the counter and misused in animal feeds. This will play a very important role in minimizing the spread of resistant bacterial strains in our environment.

Keywords: Escherichia coli, plasmid, multidrug resistance, ESBL, pan drug resistance

Procedia PDF Downloads 69
5234 Reducing the Frequency of Flooding Accompanied by Low-pH Wastewater in the 100/200 Unit of the Phosphate Fertilizer 1 Plant by Implementing the 3R Program (Reduce, Reuse, and Recycle)

Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha

Abstract:

In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (Run of Pile) production rate at the Phosphate Fertilizer 1 plant, causing an increase in scrubbing water consumption in the 100/200 area unit. This increase in water consumption causes a higher discharge of wastewater, which can in turn cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is often a passing area for trucks transporting raw materials. This causes the pH of the wastewater to become acidic (at the worst point, down to pH 1). The problem of flooding and exposure to acidic wastewater in the 100/200 area of the Phosphate Fertilizer 1 plant was then resolved by PT Petrokimia Gresik through wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The program consists of a water consumption reduction program that considers the liquid/gas ratio in the scrubbing unit of the 100/200 Phosphate Fertilizer 1 plant, the creation of a wastewater interconnection line so that wastewater from unit 100/200 can be reused as scrubbing water in the Phonska 1, Phonska 2, and Phonska 3 plants and unit 300 of the Phosphate Fertilizer 1 plant, and an increase in scrubbing effectiveness guided by scrubbing effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has succeeded in reducing NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reducing process water consumption by up to 600 m³/day, equivalent to savings of up to 63,739.62 dollars/year.

Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management

Procedia PDF Downloads 26
5233 Simulation and Controller Tuning in a Photo-Bioreactor by the Taguchi Method

Authors: Hosein Ghahremani, MohammadReza Khoshchehre, Pejman Hakemi

Abstract:

This study involves numerical simulations of a vertical plate-type photo-bioreactor to investigate the performance of the microalga Spirulina, together with control and optimization of the digital controller parameters by the Taguchi method, carried out using MATLAB and Qualitek-4 software. Since, in addition to parameters such as temperature, dissolved carbon dioxide, and biomass, new physical parameters such as light intensity and physiological conditions such as photosynthetic efficiency and light inhibition are involved in the biological process, control faces many challenges. Photo-bioreactors are efficient systems not only for facilitating the commercial production of microalgae as aquaculture feed and food supplements, but also as possible platforms for the production of active molecules such as antibiotics or innovative anti-tumor agents, for carbon dioxide removal, and for removal of heavy metals from wastewater. A digital controller is designed to control the light intensity of the bioreactor, and the microalgae growth rate and carbon dioxide concentration inside the bioreactor are investigated. The optimal controller parameter values obtained from the S/N and ANOVA analyses in Qualitek-4 were compared with those from the reaction curve, Cohen-Coon, and Ziegler-Nichols methods. Based on the sum of squared errors obtained for each of the control methods mentioned, the Taguchi method was selected as the best method for controlling the photo-bioreactor light intensity; compared to the other control methods, it showed higher stability and a shorter settling time.
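The Taguchi S/N analysis for a "smaller is better" response, such as the squared control error used to rank the tuning methods, can be sketched as follows; the repeated response values are invented for illustration:

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi signal-to-noise ratio for a 'smaller is better' response:
    SN = -10 * log10(mean(y^2)); larger SN means a better setting."""
    mean_sq = sum(y * y for y in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)

# hypothetical repeated error measurements for two controller settings
sn_a = sn_smaller_the_better([0.10, 0.12, 0.11])
sn_b = sn_smaller_the_better([0.30, 0.28, 0.33])
better = "A" if sn_a > sn_b else "B"
```

Setting A, with consistently smaller errors, gets the higher S/N ratio and would be preferred in the Taguchi ranking.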

Keywords: photo-bioreactor, control and optimization, light intensity, Taguchi method

Procedia PDF Downloads 394
5232 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission

Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder

Abstract:

Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and their emissions. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, our research group reported that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly and packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, they may provide information on the micro/nano-particles that form in the delivery and injection systems of various combustion systems (affected by thermodynamic conditions). Various combustion-relevant parameters, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.

Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR

Procedia PDF Downloads 341
5231 Correlation between Clinical Measurements of Static Foot Posture in Young Adults

Authors: Phornchanok Motantasut, Torkamol Hunsawong, Lugkana Mato, Wanida Donpunha

Abstract:

Identifying abnormal foot posture is important for prescribing appropriate management in patients with lower limb disorders and chronic non-specific low back pain. The normalized navicular height truncated (NNHt) and the foot posture index-6 (FPI-6) have been recommended as common, simple, valid, and reliable static measures for clinical application; the NNHt is a single-plane measure, while the FPI-6 is a triple-plane measure. At present, there is inadequate information about the correlation between the NNHt and the FPI-6 for categorizing foot posture, which makes it difficult to choose the appropriate assessment. Therefore, the present study aimed to determine the correlation between the NNHt and FPI-6 measures in adult participants with asymptomatic feet. Methods: A cross-sectional descriptive study was conducted in 47 asymptomatic individuals (23 males and 24 females) aged 28.89 ± 7.67 years with body mass index 21.73 ± 1.76 kg/m². The right foot was measured twice by an experienced rater using the NNHt and the FPI-6. The sequence of measures was randomly arranged for each participant, with a 10-minute rest between tests. Pearson's correlation coefficient (r) was used to determine the relationship between the measures. Results: The mean NNHt score was 0.23 ± 0.04 (range 0.15 to 0.36) and the mean FPI-6 score was 4.42 ± 4.36 (range -6 to +11). The Pearson's correlation coefficient between the NNHt score and the FPI-6 score was -0.872 (p < 0.01). Conclusion: The present findings demonstrate a strong correlation between the NNHt and FPI-6 in adult feet and imply that either measure could substitute for the other in identifying foot posture.
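Pearson's r used in the analysis can be computed directly from paired scores; the five pairs below are invented to illustrate a strong negative NNHt-FPI-6 relationship like the one reported:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical paired scores: higher FPI-6 (more pronated) tends to go
# with lower normalized navicular height, i.e. a negative correlation
nnht = [0.30, 0.27, 0.24, 0.21, 0.18]
fpi6 = [-4, -1, 3, 6, 9]
r = pearson_r(nnht, fpi6)
```

With these invented pairs, r comes out strongly negative, mirroring the direction (though not the exact value) of the study's r = -0.872.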

Keywords: foot posture index, foot type, measurement of foot posture, navicular height

Procedia PDF Downloads 138
5230 The Developmental Model of Teaching and Learning Clinical Practicum at Postpartum Ward for Nursing Students by Using VARK Learning Styles

Authors: Wanwadee Neamsakul

Abstract:

The VARK learning style is an effective method of learning that can enhance all of the students' skills: visual (V), auditory (A), read/write (R), and kinesthetic (K). This learning style benefits students in terms of professional competencies, critical thinking, and lifelong learning, which are desirable characteristics of nursing students. This study aimed to develop a model of teaching and learning clinical practicum at a postpartum ward for nursing students using VARK learning styles, and to evaluate the nursing students' opinions about the developed model. The methodology used was research and development (R&D). The model was developed through focus group discussion with five obstetric nursing instructors with experience teaching the Maternal Newborn and Midwifery I subject. Activities related to practice on the postpartum (PP) ward, covering all VARK skills, were assigned to a matrix table. The researcher asked experts to supervise the model and adjusted it following their supervision. Subsequently, it was tried out with the nursing students practicing on the PP ward. Thirty third-year nursing students from a northern Nursing College, academic year 2015, were selected by purposive sampling. Opinions about satisfaction with the model were collected using a questionnaire tested for validity and reliability. Data were analyzed using descriptive statistics. The developed model comprised 27 activities: seven (25.93%) developed to enhance the nursing students' visual skills, five (18.52%) for auditory skills, six (22.22%) for read/write skills, and nine (33.33%) for kinesthetic skills. Overall opinions about the model were reported at the highest level of average satisfaction (mean = 4.63, S.D. = 0.45).
By aspect, visual skill (mean = 4.80, S.D. = 0.45) received the highest average satisfaction, followed by auditory skill (mean = 4.62, S.D. = 0.43), read/write skill (mean = 4.57, S.D. = 0.46), and kinesthetic skill (mean = 4.53, S.D. = 0.45), all at the highest level of average satisfaction. The nursing students reported that the model helped them employ all of their skills while practicing and taking care of postpartum women and newborn babies. They established self-confidence while providing care and felt proud of themselves thanks to the model. Using VARK learning styles to develop the model could thus enhance both nursing students' competencies and positive attitudes towards the nursing profession. Consequently, they can provide quality care for postpartum women and newborn babies effectively in the long run.

Keywords: model, nursing students, postpartum ward, teaching and learning clinical practicum

Procedia PDF Downloads 150
5229 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Major efforts are now underway to create a circular economy that reduces non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature; the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy of remanufactured products, maximizing total profit while minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models.
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system based on actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance was conducted using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
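The policy-level idea of internalizing a carbon cost is, at its core, simple arithmetic: each production route's unit cost grows by the carbon price times its emissions. A minimal sketch with hypothetical per-unit figures (only the $40/t CO2e carbon price comes from the abstract; the base costs and emissions are invented to reproduce the reported 2.7% and 2.3% increases):

```python
def total_cost(base_cost, emissions_t_co2e, carbon_price):
    """Internalize a carbon price into the unit cost of a production route."""
    return base_cost + carbon_price * emissions_t_co2e

# Hypothetical per-unit figures for a remanufactured vs. newly built appliance.
reman = total_cost(base_cost=100.0, emissions_t_co2e=0.0675, carbon_price=40.0)
oem = total_cost(base_cost=150.0, emissions_t_co2e=0.08625, carbon_price=40.0)
```

Remanufacturing sees the larger *relative* cost increase (2.7% vs 2.3%) despite lower absolute emissions, because its base cost is smaller; this is the kind of trade-off the equilibrium model explores.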

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 160
5228 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework

Authors: Iulia E. Falcan

Abstract:

The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of (1) energy storage technologies, (2) development of the cross-border power grid, (3) installed overcapacity of RE, and (4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, and that ignored the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: (a) the inherent uncertainty in future weather patterns and (b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than assuming known future power output. The second is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5%, and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
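The CVaR constraint can be understood through its empirical form: the mean loss over the worst (1 - alpha) fraction of scenarios. A minimal sketch with hypothetical shortfall data (the function and numbers are illustrative, not the paper's model):

```python
def empirical_cvar(losses, alpha):
    """Empirical Conditional Value at Risk: the mean loss over the
    worst (1 - alpha) fraction of scenarios."""
    srt = sorted(losses)
    n = len(srt)
    tail = max(1, int(round(n * (1 - alpha))))  # number of worst-case scenarios
    return sum(srt[-tail:]) / tail

# Hypothetical unserved-energy shortfalls (GWh) across 10 simulated weather years.
shortfall = [0, 0, 1, 1, 2, 2, 3, 4, 8, 15]
cvar_10 = empirical_cvar(shortfall, alpha=0.90)  # mean of the worst 10% (1 year)
cvar_20 = empirical_cvar(shortfall, alpha=0.80)  # mean of the worst 20% (2 years)
```

Constraining such a tail mean, rather than the average shortfall, is what forces the portfolio to hedge against rare combinations of low wind and low solar output.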

Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization

Procedia PDF Downloads 170
5227 Prediction Factor of Recurrence Supraventricular Tachycardia After Adenosine Treatment in the Emergency Department

Authors: Welawat Tienpratarn, Chaiyaporn Yuksen, Rungrawin Promkul, Chetsadakon Jenpanitpong, Pajit Bunta, Suthap Jaiboon

Abstract:

Supraventricular tachycardia (SVT) is an abnormally fast atrial tachycardia characterized by narrow (≤ 120 ms) and constant QRS complexes. Adenosine is the drug of choice: a first dose of 6 mg can be repeated with second and third doses of 12 mg, achieving greater than 90% success. Previous work found that patients who remained in normal sinus rhythm 4 hours after conversion had no recurrence within 24 hours. The objective of this study was to investigate the factors that influence the recurrence of SVT after adenosine treatment in the emergency department (ED). A retrospective, exploratory prognostic study was conducted at the ED of the Faculty of Medicine, Ramathibodi Hospital, a university-affiliated super-tertiary care hospital in Bangkok, Thailand, over a ten-year period between 2010 and 2020. The inclusion criteria were age > 15 years, visiting the ED with SVT, and treatment with adenosine. Recurrence of SVT in the ED was recorded for these patients. A multivariable logistic regression model was used to develop the predictive model and prediction score for recurrent PSVT. In total, 264 patients met the study criteria; of those, 24 (10%) had recurrent PSVT. Five independent factors were predictive of recurrent PSVT, including age > 65 years, heart rate (after adenosine) > 100 per minute, structural heart disease, and the dose of adenosine. The clinical risk score developed to predict recurrent PSVT had an accuracy of 74.41%. A score of > 6 carried a likelihood ratio for recurrent PSVT of 5.71. A clinical predictive score of > 6 was therefore associated with recurrent PSVT in the ED.
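The reported likelihood ratio of 5.71 for a score > 6 is the standard positive likelihood ratio, sensitivity / (1 - specificity). A sketch with a hypothetical 2x2 table (the cell counts are invented to be consistent with 24 recurrences among 264 patients; they are not the study's actual data):

```python
def positive_likelihood_ratio(tp, fn, fp, tn):
    """LR+ = sensitivity / (1 - specificity) for a given score cut-off."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1 - specificity)

# Hypothetical counts for the cut-off score > 6:
#   tp/fn split the 24 recurrences; fp/tn split the 240 non-recurrences.
lr_plus = positive_likelihood_ratio(tp=12, fn=12, fp=21, tn=219)
```

An LR+ of about 5.7 means a positive score raises the odds of recurrence roughly six-fold over the pre-test odds.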

Keywords: supraventricular tachycardia, recurrence, emergency department, adenosine

Procedia PDF Downloads 117
5226 Horizontal Bone Augmentation Using Two Membranes at Dehisced Implant Sites: A Randomized Clinical Study

Authors: Monika Bansal

Abstract:

Background: Placement of dental implants in a narrow alveolar ridge is challenging. The guided bone regeneration (GBR) procedure is currently the most widely used approach to augment deficient alveolar ridges and to treat fenestration and dehiscence around dental implants. The objectives of the present study were therefore to evaluate and compare the clinical performance of a collagen membrane and a titanium mesh for horizontal bone augmentation at dehisced implant sites. Methods and material: A total of 12 single edentulous implant sites with buccal bone deficiency in 8 subjects were equally divided and treated simultaneously with either of the two membranes plus DBBM (Bio-Oss) bone graft. Primary outcome measurements of defect height and defect width were made using a calibrated plastic periodontal probe. Re-entry surgery was performed at 6 months to remeasure the augmented site and to remove the Ti-mesh. Independent t-tests were performed for inter-group comparisons and paired t-tests for intra-group comparisons. Differences were considered significant at p ≤ 0.05. Results: Mean defect fill with respect to height and width was 3.50 ± 0.54 mm (87%) and 2.33 ± 0.51 mm (82%) for the collagen membrane group and 3.83 ± 0.75 mm (92%) and 2.50 ± 0.54 mm (88%) for the Ti-mesh group, respectively. Conclusions: Within the limitations of the study, it was concluded that the reduction in mean defect height and width after 6 months was statistically significant within each group, with no significant difference between the groups, although defect resolution was better with the Ti-mesh.

Keywords: collagen membrane, dehiscence, dental implant, horizontal bone, augmentation, ti-mesh

Procedia PDF Downloads 111
5225 Clinical, Bacteriological and Histopathological Aspects of First-Time Pyoderma in a Population of Iranian Domestic Dogs: A Retrospective Study (2012-2017)

Authors: Shaghayegh Rafatpanah, Mehrnaz Rad, Ahmad Reza Movassaghi, Javad Khoshnegah

Abstract:

The purpose of the present study was to investigate the prevalence of isolation, antimicrobial susceptibility, and ERIC-PCR typing of staphylococcal species from dogs with pyoderma. The study animals were 61 clinical cases of Iranian domestic dogs with first-time pyoderma. The prevalence of pyoderma was significantly higher amongst adult (odds ratio: 0.21; p = 0.001), large-breed (odds ratio: 2.42; p = 0.002) dogs. There was no difference in the prevalence of pyoderma between males and females (odds ratio: 1.27; p = 0.337). The 'head, face and pinna' and 'trunk' were the most affected lesion regions, each with 19 cases (26.76%). An identifiable underlying disease was present in 52 (85.24%) of the dogs. Bacterial species were recovered from 43 of the 61 (70.49%) studied animals; no isolates were recovered from the other 18 dogs. The most frequently recovered bacterial genus was Staphylococcus (32/43 isolates, 74.41%), including S. epidermidis (22/43 isolates, 51.16%), S. aureus (7/43 isolates, 16.27%) and S. pseudintermedius (3/43 isolates, 6.97%). Resistance among staphylococcal species was most commonly seen against amoxicillin (94.11%), penicillin (83.35%), and ampicillin (76.47%); resistance to cephalexin and cefoxitin was 5.88% and 2.94%, respectively. A total of 27 of the staphylococci isolated (84.37%) were resistant to at least one antimicrobial agent, and 19 isolates (59.37%) were resistant to three or more antimicrobial drugs. There were no significant differences in the prevalence of resistance between staphylococci isolated from cases of superficial and deep pyoderma. ERIC-PCR results revealed 19 different patterns among the 22 isolates of S. epidermidis and 7 isolates of S. aureus.

Keywords: dog, pyoderma, Staphylococcus, Staphylococcus epidermidis, Iran

Procedia PDF Downloads 180
5224 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone

Authors: Wenjuan Lu

Abstract:

China and Hungary have risen from a friendly, comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and economic and trade relations between the two countries have developed smoothly. As an important country along the ‘Belt and Road’, Hungary has strong economic complementarity with China and unique advantages in receiving China's industrial transfer and supporting its economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, initiated by the ‘Sino-Hungarian Borsod Industrial Zone’ and the ‘Hungarian Central European Trade and Logistics Cooperation Park’, has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. It has enhanced the influence of Chinese companies in the European market and promoted economic development in Hungary and in Central and Eastern Europe more broadly. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, shortcomings remain, such as its small scale, single function, and lack of a prominent platform role. In the future, based on the needs of China's ‘17+1’ cooperation and of China-Hungary cooperation, beyond appropriately expanding the scale and number of economic and trade cooperation zones, the focus should be on optimizing and adjusting their functions: highlighting the differentiated functions of the different zones, strengthening the multi-faceted cooperation of the economic and trade cooperation zones, and emphasizing their role as platforms for cooperation in information, capital, and services.

Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe

Procedia PDF Downloads 180
5223 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations.
The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
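The idea of optimizations as user-written metaprograms can be illustrated outside Peridot with a toy rewriting system, sketched here in Python rather than a logic language: each "optimization" is just a rewrite rule supplied by library code, and a generic driver applies whichever rules it is given. All names and the term encoding are invented for illustration.

```python
# Expressions are nested tuples: ("add", x, y), ("mul", x, y), or literals.
# An "optimization" is a rule: a function returning a simplified expression,
# or None when the rule does not apply. Library authors can ship new rules
# without touching the driver below.

def mul_by_one(e):
    """x * 1 -> x (and 1 * x -> x)."""
    if isinstance(e, tuple) and e[0] == "mul" and 1 in e[1:]:
        a, b = e[1], e[2]
        return b if a == 1 else a
    return None

def add_zero(e):
    """x + 0 -> x (and 0 + x -> x)."""
    if isinstance(e, tuple) and e[0] == "add" and 0 in e[1:]:
        a, b = e[1], e[2]
        return b if a == 0 else a
    return None

def optimize(expr, rules):
    """Bottom-up rewriting: optimize children first, then apply rules
    repeatedly at the root until none fires."""
    if isinstance(expr, tuple):
        expr = (expr[0],) + tuple(optimize(c, rules) for c in expr[1:])
    changed = True
    while changed:
        changed = False
        for rule in rules:
            out = rule(expr)
            if out is not None:
                expr = out
                changed = True
                break  # restart the rule list on the rewritten term
    return expr

rules = [mul_by_one, add_zero]
result = optimize(("add", ("mul", 1, "x"), 0), rules)
```

Peridot's logic-programming meta level goes further than this sketch (unification-based equality deduction, declarative binding, non-deterministic rule ordering), but the division of labor is the same: rules are data supplied by libraries, not code baked into the compiler.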

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 88
5222 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool by drawing on the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model through meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a quantitative-qualitative study. For the qualitative dimension, a focus-group method with an inductive approach was used. To evaluate the results of the qualitative study, a quantitative assessment of two parts of the fourth phase and of the seven phases of the research was conducted. Purposive, stratified sampling of the teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of an activity breakdown structure, failure mode and effects analysis (FMEA), the healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT) analysis, and the Eindhoven Classification Model (ECM). The model was applied to patients admitted for surgery to a day-clinic ward of a public hospital from October 2012 to June. Statistical Analysis Used: Qualitative data were analyzed through content analysis; quantitative analysis was done through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was broken down by the focus discussion group (FDG) members into five main phases. Then, following the FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables.
The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes exceeded the threshold for significant risk (RPN > 250). Over a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of these misidentification events were compared, and the RPN values of the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. A wrong-side surgery event was selected by the focus discussion group for a proposed improvement action; the most important causes were a lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only a retrospective or only a prospective risk management tool therefore does not work, and each organization must provide the conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
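The RPN step of the model is a plain product of the FMEA ratings, with modes above the 250 threshold flagged as significant. A minimal sketch with hypothetical ratings (the failure-mode names echo the abstract, but the numbers and scales are invented for illustration):

```python
def rpn(severity, probability, preventability):
    """Risk Priority Number: the product of the three FMEA ratings."""
    return severity * probability * preventability

# Hypothetical ratings (1-10 scales) for three failure modes in the admission
# process; a mode is flagged as significant when its RPN exceeds 250.
failure_modes = {
    "patient misidentification": (9, 6, 5),
    "wrong-side surgery": (10, 4, 7),
    "incomplete consent form": (6, 5, 4),
}
significant = {name for name, ratings in failure_modes.items()
               if rpn(*ratings) > 250}
```

Because RPN is multiplicative, a mode that is moderately rated on all three criteria can outrank one that is extreme on a single criterion, which is why FMEA teams review the individual ratings as well as the product.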

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 249
5221 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment in place of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The optimized medium obtained by the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size. The optimized medium increased removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused over six successive cycles, and metal removal efficiency was not affected by changes in flow rate. Finally, the results of this research demonstrate the feasibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, in batch cultures as well as semicontinuous cultures using column technology.
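A Box-Behnken design for three factors, as used here, places runs at the midpoints of the cube's edges (pairs of factors at ±1, the third at 0) plus replicated centre points. A sketch that generates the coded design (the generator and factor naming are illustrative, not the authors' software):

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Coded (-1, 0, +1) runs of a Box-Behnken design for k factors."""
    runs = []
    for i, j in combinations(range(k), 2):      # each pair of factors...
        for a, b in product((-1, 1), repeat=2): # ...at all four +/-1 combos
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(tuple(run))
    runs += [(0,) * k] * center_points          # replicated centre runs
    return runs

# Three factors from the study: yeast extract, casamino acid, inoculum size.
design = box_behnken(3)
```

For three factors this gives the classic 15-run design (12 edge midpoints + 3 centre replicates), far fewer runs than the 27 of a full three-level factorial, while still supporting a quadratic response-surface fit.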

Keywords: bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman

Procedia PDF Downloads 196
5220 Who Killed Kalief? Examining the Effects of Solitary Confinement on Juvenile Detainees in the United States

Authors: Esther Baldwin

Abstract:

It is well settled that the use of solitary confinement can cause psychological and physical harm to detainees. For juveniles, who are more susceptible to irreparable harm due to their underdeveloped psyches, the risks are exacerbated. Despite these risks, juvenile detainees across the United States are regularly held in isolation for prolonged periods of time. This essay examines the broad impact of solitary confinement on juvenile detainees, with particular focus on the story of Kalief Browder, a juvenile who awaited trial on Rikers Island in New York for three years, nearly two of which were spent in solitary confinement. Although, sadly, his story is not uncommon, it offers a unique perspective in that it provides first-hand insight into the effects of solitary confinement on juveniles. It is our hope that sharing his story will build demand for better detention practices and policies for juveniles under correctional control in the United States.

Keywords: criminal justice system, juveniles, Kalief Browder, solitary confinement

Procedia PDF Downloads 322
5219 Impact of Pharmacist-Led Care on Glycaemic Control in Patients with Type 2 Diabetes: A Randomised-Controlled Trial

Authors: Emmanuel A. David, Rebecca O. Soremekun, Roseline I. Aderemi-Williams

Abstract:

Background: The complexities involved in the management of diabetes mellitus require multi-dimensional, multi-professional, collaborative, and continuous care by health care providers, and substantial self-care by patients, in order to achieve desired treatment outcomes. The effect of pharmacists' care in the management of diabetes in resource-endowed nations is well documented in the literature, but randomised-controlled assessment of the impact of pharmacist-led care among patients with diabetes in resource-limited settings like Nigeria and other sub-Saharan African countries is scarce. Objective: To evaluate the impact of pharmacist-led care on glycaemic control in patients with uncontrolled type 2 diabetes, using a randomised-controlled study design. Methods: This prospective randomised-controlled study assessed the impact of pharmacist-led care on the glycaemic control of 108 poorly controlled type 2 diabetic patients. A total of 200 clinically diagnosed type 2 diabetes patients were purposively selected using fasting blood glucose ≥ 7 mmol/L and tested for long-term glucose control using glycated haemoglobin (HbA1c). One hundred and eight (108) patients with HbA1c ≥ 7% were recruited for the study and assigned unique identification numbers. They were then randomly allocated to intervention and usual-care groups using computer-generated random numbers, with 54 subjects in each group. Patients in the intervention group received a pharmacist-structured intervention, including education, periodic phone calls, adherence counselling, referral, and 6 months of follow-up, while patients in the usual-care group only kept clinic appointments with their physicians. Data collected at baseline and at six months included socio-demographic characteristics, fasting blood glucose, HbA1c, blood pressure, and lipid profile.
Using an intention-to-treat analysis, the Mann-Whitney U test was used to compare median change from baseline in the primary outcome (HbA1c) and secondary outcome measures; effect sizes were computed, and the proportions of patients reaching target laboratory parameters were compared between arms. Results: All 108 enrolled participants completed the study, 54 in each arm. Mean age was 51 ± 11.75 years, and the majority were female (68.5%). Intervention patients had a significant reduction in HbA1c (-0.75%; P < 0.001; η² = 0.144), with a greater proportion attaining the target laboratory parameter after 6 months of care compared with the usual-care group (HbA1c: 42.6% vs 20.8%; P = 0.02). Furthermore, patients who received pharmacist-led care were about 3 times more likely to achieve better glucose control (AOR 2.718, 95% CI: 1.143-6.461) than the usual-care group. Conclusion: Pharmacist-led care significantly improved glucose control in patients with uncontrolled type 2 diabetes mellitus and should be integrated into the routine management of diabetes patients, especially in resource-limited settings.
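The adjusted odds ratio of 2.718 comes from the logistic model, but the crude odds ratio behind the 42.6% vs 20.8% comparison can be sketched from a 2x2 table. The counts below are reconstructed approximately from the reported percentages and are illustrative only:

```python
def odds_ratio(cases_exp, noncases_exp, cases_unexp, noncases_unexp):
    """Crude odds ratio from a 2x2 table: the odds of the outcome in the
    exposed group divided by the odds in the unexposed group."""
    return (cases_exp * noncases_unexp) / (noncases_exp * cases_unexp)

# Approximate, illustrative counts of patients reaching the HbA1c target:
# 23/54 (~42.6%) in the pharmacist-led arm vs 11/54 (~20%) in usual care.
or_target = odds_ratio(cases_exp=23, noncases_exp=31,
                       cases_unexp=11, noncases_unexp=43)
```

The crude value (about 2.9 with these counts) differs from the adjusted 2.718 because the logistic model controls for covariates; an AOR above 1 with a confidence interval excluding 1 is what supports the "about 3 times more likely" statement.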

Keywords: glycaemic control, pharmacist-led care, randomised-controlled trial, type 2 diabetes mellitus

Procedia PDF Downloads 121
5218 Early Versus Delayed Antiretroviral Therapy in HIV‐positive People with Tuberculosis

Authors: Mohhamed El Habib Labdouni

Abstract:

Introduction: Co-infection with HIV and tuberculosis (TB) poses one of the major ongoing challenges for global TB and AIDS prevention and control. The objective of this study is to examine the resurgence of TB in people living with HIV cared for at a referral centre in western Algeria, to describe its new epidemiological, clinical, biological, and radiological trends, and to compare mortality rates between early and delayed ART. Methods: This was a prospective study over 36 months, from 01/01/2012 to 31/12/2014, identifying and analyzing cases of TB-HIV co-infection. Our population was divided into two groups: early ART and delayed ART. The primary and secondary endpoints were analyzed with Kaplan-Meier curves and the log-rank test over the follow-up period, which was fixed at 300 weeks. Results: Sixty cases of TB-HIV co-infection were enrolled in our study: 78.3% had pulmonary tuberculosis associated with extra-pulmonary disease, 13.3% had only pulmonary tuberculosis, and 8.3% presented strictly extra-pulmonary TB. A clinical particularity of this co-infection is the frequency of serious localizations such as pleural effusion (23.3%), peritoneal effusion (31.7%), and meningeal involvement (13.3%). Biologically, we noted a predominance of both pancytopenia and leucoanemia, with hyponatremia in 38.6% and hypokalemia in 19.3%. Analysis of the Kaplan-Meier survival curves showed that early ART initiation is associated with a significant reduction in all-cause mortality (p < 0.001), and we identified several prognostic factors such as hypokalemia, hyponatremia, leukocytosis, and thrombocytopenia (p = 0.005). Conclusion: Our study confirms most of the results reported in the literature: early ART initiation reduces all-cause mortality, despite the increased probability of TB-IRIS.
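A Kaplan-Meier estimate of the kind used for the survival comparison can be sketched in a few lines: at each distinct event time, survival is multiplied by the fraction of at-risk patients who survive that time. The follow-up data below are hypothetical, not the study's:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    times: follow-up in weeks; events: 1 = death observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, ev in data if tt == t and ev == 1)
        if deaths:
            surv *= (at_risk - deaths) / at_risk   # step down at event times
            curve.append((t, surv))
        at_risk -= sum(1 for tt, _ in data if tt == t)  # drop all obs at t
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

# Hypothetical follow-up (weeks) for six patients; 0 marks censoring.
curve = kaplan_meier(times=[10, 20, 20, 30, 40, 50],
                     events=[1, 0, 1, 1, 0, 0])
```

Censored patients reduce the at-risk count without stepping the curve down; the log-rank test then compares whether two such curves (early vs delayed ART) differ beyond chance.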

Keywords: TB-HIV co-infection, early ART, hyponatremia, extrapulmonary tuberculosis

Procedia PDF Downloads 182
5217 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from limited solution efficiency, and SWTA and DWTA have therefore only been solved for restricted battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently on large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts efficient solutions in a reasonable computation time. Because the scheduling part takes too long to solve with the optimization model alone, several greedy-based algorithms are also proposed; they yield lower objective values than the decomposed opt-opt algorithm but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
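To give a concrete flavour of the greedy family of heuristics mentioned above, here is a minimal static-WTA greedy sketch; it is not the paper's algorithm, and the kill probabilities and target values are illustrative assumptions.

```python
def greedy_wta(kill_prob, target_value):
    """Greedily assign each weapon (row of kill_prob) to one target.

    kill_prob[w][t]  : probability that weapon w destroys target t
    target_value[t]  : value of target t
    Returns assignment[w] = index of the target chosen for weapon w.

    Each step picks the (weapon, target) pair with the largest expected
    value destroyed, given each target's remaining survival probability,
    so weapons are not wasted on targets that are already near-certain kills.
    """
    n_w, n_t = len(kill_prob), len(target_value)
    survive = [1.0] * n_t            # current survival probability per target
    assignment = [None] * n_w
    unassigned = set(range(n_w))
    while unassigned:
        w, t = max(
            ((w, t) for w in unassigned for t in range(n_t)),
            key=lambda wt: target_value[wt[1]]
            * survive[wt[1]]
            * kill_prob[wt[0]][wt[1]],
        )
        assignment[w] = t
        survive[t] *= 1 - kill_prob[w][t]
        unassigned.remove(w)
    return assignment
```

A run such as `greedy_wta([[0.9, 0.1], [0.8, 0.7]], [10, 5])` sends the first weapon at the high-value target and diverts the second to the other target, since the first target's expected surviving value has already collapsed.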

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 336
5216 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization

Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi

Abstract:

Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mix, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest, specifically the identified part, rather than uniformly covering the entire painting tray.
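The region-based path idea described above can be sketched as a boustrophedon (zigzag) raster restricted to the detected part's bounding box rather than the whole tray; the coordinates, units, and stepover below are illustrative assumptions, not the paper's parameters.

```python
def zigzag_path(x_min, y_min, x_max, y_max, stepover):
    """Return spray waypoints [(x, y), ...] covering the detected region.

    (x_min, y_min, x_max, y_max) : bounding box of the detected part
    stepover : spacing between adjacent spray passes (same units as the box)

    Passes run along the x axis; consecutive passes alternate direction so
    the nozzle never makes an unproductive return stroke.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max + 1e-9:         # small tolerance for float stepping
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += stepover
    return waypoints
```

Restricting the raster to the detected box shortens the path roughly in proportion to the ratio of part area to tray area, which is where the optimization claimed above would come from.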

Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm

Procedia PDF Downloads 82