Search results for: mean bias error
466 Formulation and Optimization of Topical 5-Fluorouracil Microemulsions Using Central Composite Design
Authors: Sudhir Kumar, V. R. Sinha
Abstract:
Water-in-oil (w/o) topical microemulsions of 5-FU were developed and optimized using a face-centered central composite design (CCD). The microemulsions were prepared using sorbitan monooleate (Span 80) and polysorbate 80 (Tween 80) with different oils such as oleic acid (OA), triacetin (TA), and isopropyl myristate (IPM). Ternary phase diagrams delineated the microemulsion region, and the face-centered CCD helped determine the effects of the selected variables, viz. type of oil, Smix ratio, and water concentration, on responses such as drug content, globule size, and viscosity of the microemulsions. The CCD showed that the factors have statistically significant effects (p<0.01) on the selected responses. The actual responses showed excellent agreement with the values predicted by the CCD, with low residual standard error, and the optimized values fell within the range predicted by the model. Furthermore, other characteristics of the microemulsions, such as pH and conductivity, were investigated. For the optimized microemulsion batch, ex vivo skin flux, skin irritation, and retention studies were performed and compared with a marketed 5-FU formulation. In the ex vivo skin permeation studies, higher skin retention of the drug and lower flux were achieved for the optimized microemulsion batch than for the marketed cream. Controlled release of the drug was achieved for the optimized batch with higher skin retention of 5-FU, which can further be utilized for the treatment of many dermatological disorders.
Keywords: 5-FU, central composite design, microemulsion, ternary phase diagram
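The face-centred central composite design underlying this optimization can be illustrated in code. The sketch below is illustrative only: the three coded factors match the study's three variables, but the choice of six centre points is an assumption, and each coded level would still have to be mapped to an actual factor setting before any runs are performed.

```python
from itertools import product

def face_centered_ccd(n_factors, n_center=1):
    """Coded run matrix of a face-centred central composite design.

    Factorial corners at +/-1, axial (star) points on the face centres
    (alpha = 1), plus replicated centre points.
    """
    runs = [list(p) for p in product([-1, 1], repeat=n_factors)]  # 2^k corners
    for i in range(n_factors):                                    # 2k axial points
        for level in (-1, 1):
            axial = [0] * n_factors
            axial[i] = level
            runs.append(axial)
    runs.extend([0] * n_factors for _ in range(n_center))         # centre points
    return runs

# 3 factors, 6 centre points -> 8 + 6 + 6 = 20 runs
design = face_centered_ccd(3, n_center=6)
```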
Procedia PDF Downloads 479
465 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world face a significant challenge when predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. Under economic constraints, rail transport authorities are in pursuit of improved modern methods that can provide precise predictions of rail maintenance time and location. The expectation of such methods is to yield models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail, in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, errors are possible, and such errors can make the data unusable for prediction model development; this is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease long-term maintenance costs. In this research, a short review of rail track degradation prediction models is presented before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
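An ANFIS model of the kind applied here rests on first-order Takagi-Sugeno fuzzy inference: Gaussian memberships weight linear rule consequents. The following is a minimal sketch of that inference step only, with invented rule parameters over a single accumulated-tonnage input; it is not the trained Melbourne model.

```python
import math

def gauss_mf(x, centre, sigma):
    """Gaussian membership function."""
    return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def sugeno_predict(load, rules):
    """First-order Takagi-Sugeno inference: a weighted average of linear
    consequents, with Gaussian firing strengths (the core of ANFIS)."""
    weights = [gauss_mf(load, c, s) for c, s, _, _ in rules]
    outputs = [a * load + b for _, _, a, b in rules]
    return sum(w * f for w, f in zip(weights, outputs)) / sum(weights)

# invented rules: (centre, sigma, slope, intercept) over accumulated tonnage
rules = [(10.0, 5.0, 0.02, 0.1),   # low tonnage -> slow degradation
         (50.0, 15.0, 0.05, 0.5)]  # high tonnage -> faster degradation
prediction = sugeno_predict(30.0, rules)
```

In a real ANFIS, the membership and consequent parameters above would be fitted to the track measurement data rather than set by hand.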
Procedia PDF Downloads 335
464 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid
Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil
Abstract:
Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are considered crucial phenomena in fluid flow applications and can be studied experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-tracking numerical algorithm called the MULES algorithm, built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles initially have a spherical shape and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and then validated against available experimental data. The comparison revealed that the algorithm provides results in very good agreement with the 2D numerical data of other CFD codes. The bubble shape and terminal velocity obtained from the 3D numerical simulation also showed very good qualitative and quantitative agreement with the experimental data: the simulated rising bubbles yield a very small percentage error in terminal velocity compared with the experiments. The obtained results demonstrate the capability of OpenFOAM as a powerful tool to predict the rising behavior of spherical bubbles in a stagnant column of liquid, paving the way for a deeper understanding of the rise of bubbles in liquids.
Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF
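A rough sanity check on simulated terminal velocities can be made with a Stokes-regime force balance, although millimetre-scale bubbles lie beyond its low-Reynolds-number range, so this is an order-of-magnitude sketch with nominal air-water properties, not a substitute for the VOF simulation.

```python
def stokes_terminal_velocity(d, rho_liquid, rho_gas, mu, g=9.81):
    """Stokes-regime terminal rise velocity of a small sphere.

    v = g d^2 (rho_l - rho_g) / (18 mu); valid only at low Reynolds
    number, so for millimetre bubbles it is a rough upper estimate.
    """
    return g * d ** 2 * (rho_liquid - rho_gas) / (18.0 * mu)

# nominal properties of a 1 mm air bubble in water (illustrative values)
v = stokes_terminal_velocity(d=1e-3, rho_liquid=998.0, rho_gas=1.2, mu=1e-3)
```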
Procedia PDF Downloads 123
463 Protective Effect of Levetiracetam on Aggravation of Memory Impairment in Temporal Lobe Epilepsy by Phenytoin
Authors: Asher John Mohan, Krishna K. L.
Abstract:
Objectives: (1) To assess the extent of memory impairment induced by phenytoin (PHT) at normal and reduced doses in temporal lobe epileptic mice. (2) To evaluate the protective effect of levetiracetam (LEV) against the aggravation of memory impairment by PHT in temporal lobe epileptic mice. Materials and Methods: Albino mice of either sex (n=36) were used for the study over a period of 64 days. Convulsions were induced by intraperitoneal administration of pilocarpine 280 mg/kg every 6th day. A radial arm maze (RAM) was employed to evaluate memory impairment every 7th day. The anticonvulsant and memory impairment activities were assessed for PHT at normal and reduced doses, both alone and in combination with LEV. RAM error scores and convulsive scores were the parameters considered for this study. Brain acetylcholinesterase and glutamate were determined, along with histopathological studies of the frontal cortex. Results: Administration of PHT for 64 days aggravated memory impairment in temporal lobe epileptic mice. Although reducing the PHT dose decreased the degree of memory impairment, it also decreased the anticonvulsant potency. The combination with LEV not only corrected the impaired memory but also restored the potency lost by reducing the dose of the antiepileptic drug. These findings were confirmed by enzyme and neurotransmitter levels in addition to histopathological studies. Conclusion: This study thus lays a foundation for combining a nootropic anticonvulsant with an antiepileptic drug to curb the adverse effect of memory impairment associated with temporal lobe epilepsy. However, further extensive research is required before this approach can be incorporated into disease therapy.
Keywords: anti-epileptic drug, Phenytoin, memory impairment, Pilocarpine
Procedia PDF Downloads 316
462 Autonomous Flight Control for Multirotor by Alternative Input Output State Linearization with Nested Saturations
Authors: Yong Eun Yoon, Eric N. Johnson, Liling Ren
Abstract:
The multirotor is one of the most popular types of small unmanned aircraft systems and has already been used in many areas, including transport, military, surveillance, and leisure. Together with its popularity, the need for proper flight control is growing, because in most applications the vehicle must conduct its missions autonomously, which in many respects rests on autonomous flight control. There have been many studies on flight control for multirotors, but there is still room for improvement in performance and efficiency. This paper presents an autonomous flight control method for a multirotor based on alternative input-output linearization coupled with nested saturations. With an alternative choice of the output of the flight control system, we can reduce the computational cost associated with Lie derivatives, and the linearized system can be stabilized by nested saturations with real poles of our own design. Stabilization of the internal dynamics is also based on the nested saturations and accompanies the determination of part of the desired states. In particular, outer control loops involving state variables that are not originally included in the output of the flight control system are naturally rendered through this internal dynamics stabilization. We also observe that the desired tilting angles are determined by the error dynamics of the outer loops. Simulation results show that in all tracking situations the multirotor stabilizes itself with small time constants, preceded by a tuning process for the control parameters of relatively low complexity. Future work includes control of the piecewise linear behavior of the multirotor under actuator saturation, and the optimal determination of desired states while tracking multiple waypoints.
Keywords: automatic flight control, input output linearization, multirotor, nested saturations
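The nested-saturation idea can be shown on a toy double integrator, a stand-in for one translational axis after input-output linearization. The control law below is the classic nested-saturation form; the unit saturation limits and unit gains are arbitrary illustrative choices, not the paper's pole design.

```python
def sat(x, limit=1.0):
    """Symmetric saturation to [-limit, limit]."""
    return max(-limit, min(limit, x))

def simulate(p0, v0, dt=0.01, steps=5000):
    """Double integrator p'' = u under a nested-saturation law.

    The inner saturation bounds the position term so the control input
    stays within [-1, 1] while still driving (p, v) to the origin.
    """
    p, v = p0, v0
    for _ in range(steps):
        u = -sat(v + sat(p + v))  # nested saturations keep |u| <= 1
        v += u * dt               # semi-implicit Euler integration
        p += v * dt
    return p, v

# start 3 m from the setpoint, at rest; converges with bounded control
p, v = simulate(3.0, 0.0)
```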
Procedia PDF Downloads 228
461 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it away. Ever since, Marcuse's work has been regarded as outdated, a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse's view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory and has sought to revive a basically Marcusean account of technology. On his view, while the so-called 'technical elements' that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or 'technical code' built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technologies only by their technical elements, to the neglect of their technical codes. Nevertheless, Feenberg's account fails to explain what is normatively problematic about such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg surely intends that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas's version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg's own legitimate critiques of Habermas's portrayal of technology as reified or 'norm-free.' This paper argues that a better foundation may be found in Axel Honneth's recent text, Freedom's Right (Honneth, 2014).
Though Honneth there says little explicitly about technology, he offers an implicit account of reification formulated in opposition to Habermas's systems-theoretic approach. On this 'normative functionalist' account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls 'social freedom.' Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth's social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg's work, the idea that technology includes within itself a normative grammar (technical code) takes on much plausibility. To the extent that this normative grammar may be understood by the category of social freedom, Honneth's dialectical account of the relationship between individual and social forms of freedom provides a more solid basis from which to ground the normative claims of Feenberg's sociological account of technology than Habermas's systems theory.
Keywords: Habermas, Honneth, technology, Feenberg
Procedia PDF Downloads 197
460 6 DOF Cable-Driven Haptic Robot for Rendering High Axial Force with Low Off-Axis Impedance
Authors: Naghmeh Zamani, Ashkan Pourkand, David Grow
Abstract:
This paper presents the design and mechanical model of a hybrid impedance/admittance haptic device optimized for applications such as bone drilling, spinal awl probe use, and other surgical techniques where high force is required in the tool-axial direction and low impedance is needed in all other directions. The required performance levels cannot be satisfied by existing off-the-shelf haptic devices, and this design may allow critical improvements in simulator fidelity for surgical training. The device consists primarily of two low-mass (carbon fiber) plates with a rod passing through them; collectively, the device provides 6 DOF. The rod slides through a bushing in the top plate and is connected to the bottom plate with a universal joint, constrained to move in only 2 DOF, allowing axial torque to be displayed to the user's hand. The two parallel plates are actuated and located by means of four cables pulled by motors. The forward kinematic equations are derived to ensure that the plates' orientation remains constant, and the corresponding equations are solved using the Newton-Raphson method. The static force/torque equations are also presented. Finally, we present the predicted distribution of location error, cable velocity, cable tension, force, and torque for the device. These results and preliminary hardware fabrication indicate that this design may provide a revolutionary approach to the haptic display of many surgical procedures by means of an architecture that allows arbitrary workspace scaling: the height and width of the workspace can be scaled independently.
Keywords: cable direct driven robot, haptics, parallel plates, bone drilling
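The Newton-Raphson step used for the forward kinematics can be sketched generically. The residual below is a toy two-cable circle-intersection problem invented for illustration, not the device's actual kinematic equations, and the solver is restricted to two unknowns for brevity.

```python
def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Raphson for a 2-unknown nonlinear system, using a
    finite-difference Jacobian and a Cramer's-rule 2x2 linear solve."""
    x = list(x0)
    n = len(x)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            break
        # Jacobian by forward differences
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = list(x)
            xp[j] += h
            fp = f(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / h
        # solve J dx = -fx for the 2x2 case by Cramer's rule
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        x[0] += (-fx[0] * J[1][1] + fx[1] * J[0][1]) / det
        x[1] += (fx[0] * J[1][0] - fx[1] * J[0][0]) / det
    return x

# toy "forward kinematics" residual: intersect two cable-length circles
def residual(x):
    return [x[0] ** 2 + x[1] ** 2 - 25.0,          # cable 1: length 5 from (0, 0)
            (x[0] - 6.0) ** 2 + x[1] ** 2 - 25.0]  # cable 2: length 5 from (6, 0)

sol = newton_raphson(residual, [2.0, 2.0])  # converges to the point (3, 4)
```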
Procedia PDF Downloads 258
459 The Adaptive Role of Negative Emotions in Optimal Functioning
Authors: Brianne Nichols, John A. Parkinson
Abstract:
Positive Psychology has provided a rich understanding of the beneficial effects of positive emotions on optimal functioning, and research has been devoted to promoting states of positive feeling and thinking. While this is a worthwhile pursuit, positive emotions are not useful in all contexts; some situations may require individuals to make use of their negative emotions to reach a desired end state. To account for the potential value of the wider range of emotional experiences common to the human condition, Positive Psychology needs to expand its horizons and investigate how individuals achieve positive outcomes using varied means. The current research seeks to understand the positive psychology of fear of failure (FF), a commonly experienced negative emotion relevant to most life domains. On the one hand, this emotion has been linked with avoidance motivation and self-handicapping behaviours; on the other, FF has been shown to act as a drive that moves the individual forward. To fully capture the depth of this highly subjective emotional experience and understand the circumstances under which FF may be adaptive, this study adopted a mixed-methods design using SenseMaker, a web-based tool that combines the richness of narratives with the objectivity of numerical data. Two hundred participants, consisting mostly of undergraduate university students, shared a story of a time in the recent past when they feared failing to achieve a valued goal. To avoid researcher bias in the interpretation of narratives, participants self-signified their stories in a tagging system based on the researchers' aim to explore the role of past failures, the cognitive, emotional, and behavioural profiles of individuals high and low in FF, and the relationships between these factors. In addition, the roles of perceived personal control and self-esteem in relation to FF were investigated using self-report questionnaires.
Results from the quantitative analyses indicated that individuals with high levels of FF, compared to those with low levels, were strongly influenced by past failures and preoccupied with the thoughts and emotions relating to the fear. This group also reported an unwillingness to accept their internal experiences, which in turn was associated with withdrawal from goal pursuit. Furthermore, self-esteem was found to mediate the relationship between perceived control and FF, suggesting that self-esteem, with or without control beliefs, may have the potential to buffer against high FF. It is hoped that the insights provided by the current study will inspire future research to explore the ways in which acceptance may help individuals keep moving towards a goal despite the presence of FF, and whether cultivating a non-contingent self-esteem is the key to resilience in the face of failure.
Keywords: fear of failure, goal-pursuit, negative emotions, optimal functioning, resilience
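The reported mediation (self-esteem mediating the control-FF link) can be illustrated with a product-of-coefficients sketch on synthetic data. The path coefficients and noise levels below are invented for illustration and are not the study's estimates.

```python
import random

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(1)
n = 500
control = [random.gauss(0, 1) for _ in range(n)]
esteem = [0.6 * c + random.gauss(0, 0.5) for c in control]   # path a (invented)
fear = [-0.7 * e + random.gauss(0, 0.5) for e in esteem]     # path b (invented)

a = ols_slope(control, esteem)
b = ols_slope(esteem, fear)
indirect = a * b  # product-of-coefficients estimate of the mediated effect
```

A full analysis would also estimate the direct path and bootstrap a confidence interval for the indirect effect; this sketch only shows the mechanics.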
Procedia PDF Downloads 195
458 Seepage Analysis through Earth Dam Embankment: Case Study of Batu Dam
Authors: Larifah Mohd Sidik, Anuar Kasa
Abstract:
In recent years, the demand for raw water has been increasing along with the growth of the economy and population; the construction and operation of dams is thus one solution to water resources management problems. The stability of the embankment must be considered when evaluating the safety of the retained water. Dam safety is mostly assessed from numerous measurable components, for instance seepage flow rate, pore water pressure, and deformation of the embankment; seepage and slope stability are the primary indicators of the overall safety behavior of a dam. This study evaluates the static seepage and slope stability performance of the Batu dam, located in the capital city of Kuala Lumpur. The numerical package GeoStudio 2012 was employed, analysing seepage by the finite element method (SEEP/W) and slope stability by the limit equilibrium method (SLOPE/W) for different reservoir operation cases, namely normal and flood conditions. Results of the seepage analysis in SEEP/W were used as parent input for the SLOPE/W analysis. A sensitivity analysis on the hydraulic conductivity of the materials was performed and calibrated to minimize the relative error of the SEEP/W simulation, and predicted values were compared against observed field data. The seepage analysis in SEEP/W determined the leakage flow rate, the pore water pressure distribution, and the location of the phreatic line. The results show that the clay core effectively lowers the phreatic surface and that no piping failure is indicated; the total seepage flux was acceptable and within the permissible limit.
Keywords: earth dam, dam safety, seepage, slope stability, pore water pressure
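Seepage quantities of the kind SEEP/W computes reduce, in the simplest one-dimensional case, to Darcy's law. The sketch below uses hypothetical clay-core parameters, not the calibrated Batu dam values, to show how a leakage flow rate follows from conductivity, head loss, and path length.

```python
def darcy_flux(k, head_loss, path_length, area):
    """Darcy's law: Q = k * i * A, with hydraulic gradient i = dh / L."""
    return k * (head_loss / path_length) * area

# hypothetical clay-core values, not the calibrated Batu dam parameters
k = 1e-8    # hydraulic conductivity of the core, m/s
dh = 20.0   # head difference across the core, m
L = 10.0    # seepage path length through the core, m
A = 1.0     # cross-sectional area per metre run of dam, m^2
Q = darcy_flux(k, dh, L, A)  # leakage flow rate, m^3/s per metre run
```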
Procedia PDF Downloads 220
457 Microstructure Evolution and Modelling of Shear Forming
Authors: Karla D. Vazquez-Valdez, Bradley P. Wynne
Abstract:
In recent decades manufacturing needs have been changing, prompting study of previously underdeveloped manufacturing methods such as incremental forming processes like shear forming. These processes use rotating tools in constant local contact with the workpiece, which is often also rotating, to generate shape. This means much lower loads to forge large parts and no need for expensive special tooling. Their potential has already been established by the manufacture of high-value products, e.g., turbine and satellite parts, with high dimensional accuracy from difficult-to-manufacture materials. Thus, huge opportunities exist for these processes to replace current methods of manufacture for a range of high-value components, e.g., eliminating lengthy machining and reducing material waste and process times, or manufacturing a complicated shape without developing expensive tooling. However, little is known about the exact deformation conditions during processing or about why certain materials shear form better than others, leading to much trial and error before production. Three alloys were used for this study: Ti-54M, Jethete M154, and IN718. General microscopy and electron backscatter diffraction (EBSD) were used to measure strains and orientation maps during shear forming. A design of experiments (DOE) analysis was also made in order to understand the impact of process parameters on the properties of the final workpieces. Such information was key to developing a reliable finite element method (FEM) model that closely resembles the deformation paths of this process. Finally, the potential of the three materials to be shear spun was studied using the FEM model and their forming limit diagrams (FLD), which led to the development of a rough methodology for testing the shear spinnability of various metals.
Keywords: shear forming, damage, principal strains, forming limit diagram
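Screening spinnability against a forming limit diagram amounts to checking predicted principal-strain points against a limit curve. The following is a minimal sketch with an invented piecewise-linear FLC, not measured data for Ti-54M, Jethete M154, or IN718.

```python
def below_forming_limit(eps_minor, eps_major, flc):
    """Check a principal-strain point against a piecewise-linear forming
    limit curve, given as (minor strain, limiting major strain) pairs.

    Returns True if safe, False if failed, None if eps_minor lies
    outside the measured range of the curve."""
    pts = sorted(flc)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= eps_minor <= x1:
            # linear interpolation of the limiting major strain
            limit = y0 + (y1 - y0) * (eps_minor - x0) / (x1 - x0)
            return eps_major < limit
    return None

# hypothetical FLC with the usual minimum near plane strain
flc = [(-0.2, 0.45), (0.0, 0.30), (0.2, 0.40)]
safe = below_forming_limit(0.1, 0.25, flc)  # strain point below the limit
```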
Procedia PDF Downloads 163
456 Numerical Study of Elastic Performances of Sandwich Beam with Carbon-Fibre Reinforced Skins
Authors: Soukaina Ounss, Hamid Mounir, Abdellatif El Marjani
Abstract:
Sandwich materials with composite reinforced skins are widely required in advanced construction applications to ensure resistant structures. Their light weight, high flexural stiffness, and optimal thermal insulation make them a suitable solution for efficient structures with high rigidity and energy safety. In this paper, the mechanical behavior of a sandwich beam with composite skins reinforced by unidirectional carbon fibers is investigated numerically by analyzing the impact of the reinforcement specifications on the longitudinal elastic modulus, in order to select a sandwich configuration that combines good rigidity with accurate convergence to the analytical approach proposed to verify the numerical simulations. The study therefore starts by testing the flexural performance of skins with various fiber orientations and volume fractions to determine those to use in the sandwich beam. The combination of a reinforcement inclination of 30° with a volume fraction of 60% is selected, together with the combination of 60° fiber orientation and 40% volume fraction; the latter guarantees the chosen skins an appreciable rigidity at an optimal fiber concentration and greatly improves convergence to the analytical results in the sandwich model, owing to the crucial role of the core as a transverse shear absorber. A resistant sandwich beam is thus built from face sheets comprising two layers of the previous skins with fibers oriented at 60° and an epoxy core; this beam has a longitudinal elastic modulus of 54 GPa, which matches the analytical value to within a negligible error of 2%.
Keywords: fibers orientation, fibers volume ratio, longitudinal elastic modulus, sandwich beam
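The stiffness contribution of fiber volume fraction can be sketched with the rule of mixtures (Voigt estimate), which gives the ply modulus along the fiber direction; off-axis stiffness at 30° or 60° would additionally require the transformed compliance. The constituent moduli below are illustrative values, not the paper's material data.

```python
def rule_of_mixtures(E_fibre, E_matrix, v_fibre):
    """Longitudinal (fibre-direction) modulus of a unidirectional ply,
    Voigt upper-bound estimate: E1 = Ef * Vf + Em * (1 - Vf)."""
    return E_fibre * v_fibre + E_matrix * (1.0 - v_fibre)

# illustrative constituent moduli in GPa (carbon fibre and epoxy matrix)
E1_60 = rule_of_mixtures(230.0, 3.5, 0.60)  # 60% fibre volume fraction
E1_40 = rule_of_mixtures(230.0, 3.5, 0.40)  # 40% fibre volume fraction
```

The comparison makes the trade-off visible: the 60% ply is stiffer along the fibres, while the 40%/60°-orientation skin trades some of that stiffness for the convergence behaviour described above.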
Procedia PDF Downloads 168
455 Factors Influencing Pharmacist Engagement and Turnover Intention in Thai Community Pharmacists: A Structural Equation Modelling Approach
Authors: T. Nakpun, T. Kanjanarach, T. Kittisopee
Abstract:
Turnover of community pharmacists can affect continuity of patient care, the quality of care, and the costs of a pharmacy. It was hypothesized that organizational resources, job characteristics, and social supports have a direct effect on pharmacist turnover intention, and an indirect effect on turnover intention via pharmacist engagement. This research aimed to study the factors influencing pharmacist engagement and turnover intention by testing a proposed structural hypothesized model explaining the relationships among organizational resources, job characteristics, and social supports as they affect turnover intention and engagement in Thai community pharmacists. A cross-sectional study with a self-administered questionnaire was conducted among 209 Thai community pharmacists. Data were analyzed using structural equation modeling with the Analysis of Moment Structures (AMOS) program. The final model showed that only organizational resources had a significant negative direct effect on pharmacist turnover intention (β = -0.45). Job characteristics and social supports had significant positive relationships with pharmacist engagement (β = 0.44 and 0.55, respectively). Pharmacist engagement had a significant negative relationship with turnover intention (β = -0.24). Thus, job characteristics and social supports had significant negative indirect effects on turnover intention via pharmacist engagement (β = -0.11 and -0.13, respectively). The model fit the data well (χ2/degrees of freedom (DF) = 2.12, goodness of fit index (GFI) = 0.89, comparative fit index (CFI) = 0.94, and root mean square error of approximation (RMSEA) = 0.07). It can be concluded that organizational resources were the most important factor because of their direct effect on pharmacist turnover intention.
Job characteristics and social supports also helped decrease pharmacist turnover intention via pharmacist engagement.
Keywords: community pharmacist, influencing factor, turnover intention, work engagement
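The reported fit indices can be checked from the model chi-square. Since the abstract gives χ2/DF = 2.12 and n = 209 but not DF itself, the DF = 100 used below is an assumed value for illustration; with it, the RMSEA formula reproduces a value near the reported 0.07.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# reported: chi2/df = 2.12, n = 209; df = 100 is an assumed value
df, n = 100, 209
chi2 = 2.12 * df
# common rule-of-thumb cutoffs: chi2/df <= 3 and RMSEA <= 0.08
fit_ok = (chi2 / df <= 3.0) and (rmsea(chi2, df, n) <= 0.08)
```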
Procedia PDF Downloads 204
454 Real Time Classification of Political Tendency of Spanish Twitter Users Based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has become a rich source of information for understanding social behavior. Specifically, the growing use of Twitter for political communication creates opportunities to know the opinions of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, which has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. In contrast, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative, or neutral) and classifying them according to the political party they support; from these individual political tendencies, the global political prediction for each party is calculated. Two different strategies for the sentiment analysis are proposed: one based on Positive and Negative word Matching (PNM) and a second based on a Neural Network Strategy (NNS). The complete TAS strategy was executed in a big-data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research shows the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties, with an error lower than 23%.
Keywords: political tendency, prediction, sentiment analysis, Twitter
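The PNM baseline reduces to lexicon matching. A minimal sketch follows, with a toy English lexicon; the study worked on Spanish tweets with its own word lists, so both the vocabulary and the tokenization here are illustrative assumptions.

```python
# toy lexicons; the study used Spanish word lists
POSITIVE = {"great", "good", "support", "win", "excellent"}
NEGATIVE = {"bad", "corrupt", "lie", "fail", "worst"}

def pnm_sentiment(tweet):
    """Positive/Negative word Matching: label a tweet by which lexicon
    matches more of its words."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = pnm_sentiment("great rally with strong support")
```

Per-user labels like this would then be aggregated by supported party to produce the global trend estimate.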
Procedia PDF Downloads 238
453 Gender Specific Differences in Clinical Outcomes of Knee Osteoarthritis Treated with Micro-Fragmented Adipose Tissue
Authors: Tiffanie-Marie Borg, Yasmin Zeinolabediny, Nima Heidari, Ali Noorani, Mark Slevin, Angel Cullen, Stefano Olgiati, Alberto Zerbi, Alessandro Danovi, Adrian Wilson
Abstract:
Knee osteoarthritis (OA) is a critical cause of disability globally. In recent years there has been growing interest in minimally invasive treatments such as intra-articular injection of micro-fragmented fat (MFAT), which shows great potential in treating OA. Mesenchymal stem cells (MSCs), originating from the pericytes of micro-vessels in MFAT, can differentiate into mesenchymal lineage cells such as chondrocytes, osteocytes, adipocytes, and osteoblasts. Growth factors and cytokines secreted by MSCs can inhibit T cell growth, reduce pain and inflammation, and create a micro-environment that, through paracrine signaling, can promote joint repair and cartilage regeneration. Here we show, for the first time, data supporting the hypothesis that women respond better than men to MFAT injection in terms of improvements in pain and function. Historically, women have been underrepresented in studies, and studies with both sexes regularly fail to analyse the results by sex. To mitigate and quantify this bias, we describe a technique using reproducible statistical analysis, with replicable results in the open-access statistical software R, to calculate the magnitude of this difference. Genetic, hormonal, environmental, and age factors play a role in the observed difference between the sexes. This observational, intention-to-treat study included the complete sample of 456 patients who agreed to be scored for pain (visual analogue scale (VAS)) and function (Oxford knee score (OKS)) at baseline, regardless of subsequent changes to adherence or status during follow-up. We report that a significantly larger proportion of women responded to treatment than men (90% vs. 60% change in VAS scores, and 87% vs. 65% change in OKS scores, respectively). Women overall had a stronger positive response to treatment, with reduced pain and improved mobility and function.
Pre-injection, the women in our cohort had more pain and worse joint function, which is common in orthopaedics. During the 2-year follow-up, however, they consistently maintained a lower incidence of discomfort and superior joint function. These data identify a clear need for further studies to determine the cellular, molecular, and other bases for these differences, so that this information can be used for stratification in order to improve outcomes for both women and men.
Keywords: gender differences, micro-fragmented adipose tissue, knee osteoarthritis, stem cells
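The reported responder rates (90% vs. 60% on VAS) can be compared with a pooled two-proportion z statistic. The equal group sizes of 228 used below are an assumption for illustration, since the abstract does not give the per-sex split of the 456 patients.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for a difference in response rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 90% vs. 60% VAS responders; equal groups of 228 are an assumed split
z = two_proportion_z(0.90, 228, 0.60, 228)  # large z -> highly significant
```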
Procedia PDF Downloads 181
452 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video
Authors: Nidhal Azawi
Abstract:
The colonoscopy procedure is widely used around the world to detect abnormalities, and early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos, providing better information for physicians by removing some of these artifacts. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation: consecutive images are aligned/registered so that their information can be integrated to create a new image that reveals more than the original one. Colon images were first classified into informative and non-informative images using a deep neural network, and two different strategies were then used to treat the two classes. Informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas non-informative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) and adaptive temporal median images. A comparison showed that this work achieved better results than state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The new proposed algorithm reduced the alignment error by about a factor of 0.3, with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting non-informative images, which have very few or no details because of blurriness, defocus, or specular highlights dominating a significant part of the image, into informative images.
Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter
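After Lucas-Kanade alignment, the adaptive temporal median step amounts to a pixel-wise median across motion-compensated frames, which suppresses transient artifacts such as specular highlights. The following is a toy sketch using plain nested lists in place of real registered video frames.

```python
def temporal_median(aligned_frames):
    """Pixel-wise temporal median over motion-compensated frames.

    aligned_frames: list of equally sized 2-D grids of grey levels,
    assumed already registered to a common reference (e.g. via
    Lucas-Kanade optic flow)."""
    rows, cols = len(aligned_frames[0]), len(aligned_frames[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = sorted(f[r][c] for f in aligned_frames)
            out[r][c] = vals[len(vals) // 2]  # median rejects transient spikes
    return out

# three toy 2x2 frames; the 255 is a transient specular highlight
frames = [[[10, 12], [11, 10]],
          [[255, 12], [11, 10]],  # outlier at pixel (0, 0)
          [[10, 13], [12, 10]]]
clean = temporal_median(frames)  # outlier at (0, 0) is removed
```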
Procedia PDF Downloads 113
451 Determination of Gold in Microelectronics Waste Pieces
Authors: S. I. Usenko, V. N. Golubeva, I. A. Konopkina, I. V. Astakhova, O. V. Vakhnina, A. A. Korableva, A. A. Kalinina, K. B. Zhogova
Abstract:
Gold can be determined in natural objects and manufactured articles of different origin. The current status of research and the problems of determining high gold levels in alloys and manufactured articles are described in detail in the literature. No less important is the determination of this metal in minerals, process products, and waste pieces. The latter, as objects of chemical analysis for gold content, are the hardest to study for two reasons: because of the high accuracy required of analysis results, and because of differences in chemical and phase composition. As a rule, such objects are characterized by a compound, variable, and very often unknown matrix composition, which leads to unpredictable and uncontrolled effects on the accuracy and other analytical characteristics of the analysis technique. In this paper, methods for the determination of gold are described using flame atomic absorption spectrophotometry and a gravimetric analysis technique. The techniques are aimed at gold determination in a solution for gold etching (KJ+J2), in the technological mixture formed after cleaning stainless steel members of a vacuum-deposition installation with concentrated nitric and hydrochloric acids, as well as in gold-containing powder resulting from liquid waste reprocessing. Optimal conditions for sample preparation and analysis of liquid and solid waste specimens of compound and variable matrix composition were chosen. The boundaries of the relative resultant error were determined for the methods within the range of gold mass concentration from 0.1 to 30 g/dm3 in specimens of liquid wastes and mass fractions from 3 to 80% in specimens of solid wastes. Keywords: microelectronics waste pieces, gold, sample preparation, atomic-absorption spectrophotometry, gravimetric analysis technique
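Flame atomic-absorption quantification of the kind described above typically rests on a linear calibration curve (absorbance versus concentration). The sketch below shows an ordinary least-squares calibration fit; the standards and absorbance values are illustrative assumptions, not the paper's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical gold standards (g/dm^3) and measured absorbances.
conc = [0.0, 5.0, 10.0, 20.0, 30.0]
absorbance = [0.00, 0.11, 0.22, 0.44, 0.66]
m, b = linear_fit(conc, absorbance)

# Concentration of an unknown sample read off from its absorbance.
unknown = (0.33 - b) / m
```

In practice the calibration range would be chosen to bracket the expected concentrations (here 0.1-30 g/dm3 for the liquid wastes).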
Procedia PDF Downloads 204
450 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students
Authors: Stephanie Jo Marchese
Abstract:
This oral presentation is a reflective piece on current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term ‘survivor’ in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual's agency, considers each case and its needs independent of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas in which to update and influence the rapport-building that schools of social work have with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers that many survivor students, this author included, have faced in institutional settings. The profession of social work is at an important juncture of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. 
Institutional and classroom acceptance and celebration of survivor narratives can catalyze the resurgence of these values in the profession's service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation are supported by various qualitative interviews, as well as reviews of ‘best practices’ in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered. Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods
Procedia PDF Downloads 289
449 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model
Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li
Abstract:
Compared with terrestrial networks, the traffic of a spatial information network has both self-similarity and short-correlation characteristics. By studying its traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods can provide an important basis for traffic planning of a spatial information network. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long correlation and detail components with short correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. Firstly, the original traffic data are decomposed into approximate components and detail components by using a wavelet decomposition algorithm. According to the tailing and truncation characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be directly established, while the approximate component can be modeled with an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction results for the original data. The method not only considers the self-similarity of a spatial information network but also takes into account the short correlation caused by bursty network information, which is verified using the measured data of a certain backbone network released by the MAWI working group in 2018. Compared with typical time series models, the predicted data of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making the model more suitable for a spatial information network. Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model
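The decomposition step of the hybrid model can be illustrated with a single level of (unnormalized) Haar wavelet analysis, splitting a traffic series into a smooth approximation and a detail component, with a toy AR(1) fit standing in for the AR/MA/ARMA component models. This is a simplified sketch with invented samples, not the paper's algorithm.

```python
def haar_step(x):
    """One level of (unnormalized) Haar wavelet decomposition: split a
    series into a smooth approximation (long-correlation trend) and a
    detail (short-correlation fluctuation) component."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def fit_ar1(x):
    """Least-squares AR(1) coefficient - a toy stand-in for the
    AR/MA/ARMA models fitted to each component in the paper."""
    num = sum(a * b for a, b in zip(x[1:], x[:-1]))
    den = sum(v * v for v in x[:-1])
    return num / den

traffic = [10, 12, 9, 11, 30, 28, 31, 29]   # invented traffic samples
approx, detail = haar_step(traffic)
phi = fit_ar1(approx)                        # model the smooth trend
forecast = phi * approx[-1]                  # one-step-ahead prediction
```

In the full method, each component is forecast with its own fitted model and the component forecasts are recombined to predict the original series.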
Procedia PDF Downloads 147
448 Symmetry Properties of Linear Algebraic Systems with Non-Canonical Scalar Multiplication
Authors: Krish Jhurani
Abstract:
The research paper presents an in-depth analysis of symmetry properties in linear algebraic systems under non-canonical scalar multiplication structures, specifically semirings and near-rings. The objective is to unveil the profound alterations that occur in traditional linear algebraic structures when conventional field multiplication is replaced with these non-canonical operations. In the methodology, we first establish the theoretical foundations of non-canonical scalar multiplication, followed by a meticulous investigation into the resulting symmetry properties, focusing on eigenvectors, eigenspaces, and invariant subspaces. The methodology involves a combination of rigorous mathematical proofs and derivations, supplemented by illustrative examples that exhibit the discovered symmetry properties in tangible mathematical scenarios. The core findings uncover unique symmetry attributes: linear algebraic systems with semiring scalar multiplication reveal distinctive eigenvector and eigenvalue structures, while systems operating under near-ring scalar multiplication disclose unique invariant subspaces. These discoveries broaden the traditional landscape of symmetry properties in linear algebraic systems. The findings have potential practical implications across various fields such as physics, coding theory, and cryptography: they could enhance error detection and correction codes, inform more secure cryptographic algorithms, and even influence theoretical physics. This breadth of applicability accentuates the significance of the presented research. 
The research paper thus contributes to the mathematical community by bringing forth perspectives on linear algebraic systems and their symmetry properties through the lens of non-canonical scalar multiplication, coupled with an exploration of practical applications. Keywords: eigenspaces, eigenvectors, invariant subspaces, near-rings, non-canonical scalar multiplication, semirings, symmetry properties
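One concrete non-canonical scalar structure in which eigen-behaviour can be examined is the (max, +) semiring. The paper does not specify this particular example, so the sketch below is only an illustration of what a semiring "eigenvector" looks like in the tropical sense.

```python
def maxplus_matvec(A, v):
    """Matrix-vector product over the (max, +) semiring: 'addition'
    is max and 'multiplication' is ordinary +. One concrete semiring
    in which eigen-structure can be studied."""
    return [max(a + x for a, x in zip(row, v)) for row in A]

A = [[0, 2],
     [2, 0]]
v = [0, 0]                     # candidate tropical eigenvector
Av = maxplus_matvec(A, v)      # A (x) v
lam = 2                        # tropical eigenvalue
```

Here A ⊗ v = [2, 2] = λ + v with λ = 2, the tropical analogue of the familiar relation A v = λ v.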
Procedia PDF Downloads 123
447 Reducing the Imbalance Penalty Through Artificial Intelligence Methods Geothermal Production Forecasting: A Case Study for Turkey
Authors: Hayriye Anıl, Görkem Kar
Abstract:
In addition to being rich in renewable energy resources, Turkey is one of the countries with promising potential in geothermal energy production, given its high installed power, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand due to the inadequacy of the production forecasts submitted to the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated with these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and a dataset created by extracting new features from it with feature engineering methods. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than on the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, yielding a more optimal and profitable settlement. Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting
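Two building blocks of the forecasting pipeline described above, lag-based feature engineering and the imbalance-penalty calculation, can be sketched as follows. The penalty formula and all values are illustrative assumptions; the actual day-ahead market settlement rules and the SVR model itself are not reproduced here.

```python
def lag_features(series, n_lags=3):
    """Build (features, target) pairs where the previous n_lags
    generation values predict the next one - a minimal version of the
    feature-engineering step described in the abstract."""
    return [(series[t - n_lags:t], series[t])
            for t in range(n_lags, len(series))]

def imbalance_penalty(forecast, actual, price=1.0, factor=0.03):
    """Toy imbalance penalty proportional to the absolute forecast
    error; the real day-ahead settlement formula and the factor used
    here are illustrative assumptions."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) * price * factor

gen = [50, 52, 51, 53, 55, 54]            # hourly MWh, invented
rows = lag_features(gen)                   # e.g. ([50, 52, 51], 53)
penalty = imbalance_penalty([53, 55, 54], [52, 56, 54])
```

A better forecast shrinks the |forecast - actual| terms and hence the penalty, which is the economic effect the study quantifies.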
Procedia PDF Downloads 110
446 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review
Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed
Abstract:
Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all supported laparoscopic use over its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes included estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD), in addition to their 95% confidence intervals, in the random effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 from a Japanese center. Mean age between study cohorts ranged from 2.4 to 5.3 years old, with female patients comprising between 30.8% and 50% of the study populations. 
No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis for EBL, although this did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it was non-superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, randomized trials are necessary to strengthen the conclusions reached. Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy
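The random-effects pooling reported above was performed with the R 'meta' and 'metafor' packages via the Mantel-Haenszel method; as a rough illustration of the underlying arithmetic only, the following sketch implements inverse-variance pooling with a DerSimonian-Laird between-study variance and an I² estimate in Python. The effect sizes and variances are invented, not the study's data.

```python
import math

def random_effects_pool(effects, variances):
    """Inverse-variance random-effects pooling with a
    DerSimonian-Laird tau^2 and an I^2 heterogeneity estimate -
    a simplified Python analogue of the R 'meta'/'metafor' fit."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Invented standardized mean differences (e.g. for EBL) from 5 studies.
smd = [-0.6, -0.3, -0.5, -0.2, -0.4]
var = [0.04, 0.05, 0.03, 0.06, 0.05]
pooled, ci, i2 = random_effects_pool(smd, var)
```

A confidence interval that crosses zero, as in the study's EBL result, corresponds to a pooled effect that is not statistically significant.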
Procedia PDF Downloads 132
445 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate a flow to a desired exit Mach number while having low weight and, at the same time, high thrust. The nozzle considered gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line. The subsonic portion is used to produce a sonic flow at the throat, so that the nozzle gives a uniform and parallel flow at the exit section; such a nozzle is termed a minimum length nozzle. The study is carried out at high temperature, below the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim is to improve performance both by increasing the exit Mach number and thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is made by the method of characteristics. The finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the resulting nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the mesh chosen for the characteristics. A numerical simulation of the nozzle with the CFD code FASTRAN was performed to determine and confirm the necessary design parameters. Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, nozzle mass, specific heat at constant pressure, air, error
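The method of characteristics for a minimum length nozzle starts from the Prandtl-Meyer function. The paper uses the high-temperature (variable specific heat) form; the sketch below shows the textbook calorically perfect gas special case, in which the initial wall angle at the throat equals half the Prandtl-Meyer angle of the design exit Mach number.

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer angle nu(M) in degrees for a calorically perfect
    gas (constant gamma). The paper's high-temperature design uses a
    variable-specific-heat generalization of this function."""
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = math.sqrt(g) * math.atan(math.sqrt((M * M - 1.0) / g)) \
         - math.atan(math.sqrt(M * M - 1.0))
    return math.degrees(nu)

# For a minimum length nozzle, the initial wall angle at the throat is
# half the Prandtl-Meyer angle of the design exit Mach number.
Me = 2.0
theta_wall_max = prandtl_meyer(Me) / 2.0   # about 13.19 deg for gamma=1.4
```

At high stagnation temperatures gamma is no longer constant, which is precisely why the paper replaces this closed form with a numerical integration over temperature-dependent specific heats.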
Procedia PDF Downloads 150
444 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract:
Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has a faded amplitude due to multipath effects, which make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution. A mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in reducing the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability for low signal-to-noise scenarios over principal component analysis-based energy detection. Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
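The conventional energy-detection stage mentioned above can be sketched as follows; the scattering-transform channel estimation and equalization, which are the paper's contribution, are not reproduced, and the threshold factor is an illustrative assumption rather than one derived from a target false-alarm probability.

```python
import math
import random

def energy_detect(samples, noise_power, threshold_factor=1.5):
    """Classic energy detector: compare the average sample energy to a
    threshold set relative to the noise floor. The threshold factor is
    illustrative, not derived from a target false-alarm probability."""
    energy = sum(abs(s) ** 2 for s in samples) / len(samples)
    return energy, energy > threshold_factor * noise_power

random.seed(0)
noise_power = 1.0
noise = [random.gauss(0.0, math.sqrt(noise_power)) for _ in range(4000)]
occupied = [2.0 + n for n in noise]            # primary user present
_, h0 = energy_detect(noise, noise_power)      # vacant band
_, h1 = energy_detect(occupied, noise_power)   # occupied band
```

When multipath fading attenuates the primary user's signal, its energy can drop below the threshold; equalizing the channel first, as the paper proposes, restores the detectable energy gap.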
Procedia PDF Downloads 196
443 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images
Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek
Abstract:
Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging: it shares common characteristics with other ultrasound modalities while additionally providing three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs; indeed, rib detection is considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed by using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluations were performed on a total of 22 cases. Over all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on the rib borders are 15.1188 mm and 14.7184 mm, respectively. Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection
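The Hessian-eigenvalue analysis at the heart of the rib-detection step can be illustrated in 2D (the paper works with the 3D Hessian). For a bright tubular structure, one eigenvalue is strongly negative and the other near zero; the image below is a toy ridge, not ABUS data.

```python
import math

def hessian_eigenvalues(image, r, c):
    """Eigenvalues of the 2-D Hessian at pixel (r, c) via central
    differences (the paper uses the 3-D Hessian; 2-D is shown for
    brevity). A bright tubular structure such as a rib yields one
    strongly negative and one near-zero eigenvalue."""
    dxx = image[r][c + 1] - 2 * image[r][c] + image[r][c - 1]
    dyy = image[r + 1][c] - 2 * image[r][c] + image[r - 1][c]
    dxy = (image[r + 1][c + 1] - image[r + 1][c - 1]
           - image[r - 1][c + 1] + image[r - 1][c - 1]) / 4
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    disc = math.sqrt(max(0.0, tr * tr / 4 - det))
    return tr / 2 - disc, tr / 2 + disc

# A bright horizontal ridge on a dark background (toy example).
img = [[0, 0, 0, 0, 0],
       [9, 9, 9, 9, 9],
       [0, 0, 0, 0, 0]]
lam1, lam2 = hessian_eigenvalues(img, 1, 2)
```

The strongly negative eigenvalue is perpendicular to the ridge and the near-zero one runs along it, which is the signature a rib filter looks for.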
Procedia PDF Downloads 330
442 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Rail Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using RAIB Report 17/2019 as a primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors, and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The Systems for Investigation of Railway Interfaces (SIRI) methodology is used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss cheese model to model the incident and identify latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the “heuristics and biases” approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, in the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role “optimism bias” plays in programme cost overruns and are familiar with bow-tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to heuristics and biases is not yet appreciated. This overcomes the problem of omitting human and organizational factors from accident analysis. 
Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, and transport planners, as well as front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with research evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the ISO/IEC 15288 systems engineering standard and in industrial contexts such as the GB railways and artificial intelligence (AI) contexts as well. Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 188
441 Comparison of Two Home Sleep Monitors Designed for Self-Use
Authors: Emily Wood, James K. Westphal, Itamar Lerner
Abstract:
Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders such as sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating wake, combined N1/N2, N3, and rapid eye movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination. 
Total sleep time (TST) and the number of awakenings were compared to subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's kappa. All analysis was performed using MATLAB 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p<0.001) and awakenings (p<0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 had high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home. Keywords: DREEM, EEG, sleep monitoring, Z-Machine
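The epoch-by-epoch agreement measure used above, Cohen's kappa, can be sketched as follows; the hypnograms shown are short hypothetical examples, not the study's recordings.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Epoch-by-epoch agreement between two scorers or devices,
    corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical 30-s epoch hypnograms (W = wake, N2, N3, R = REM).
dreem = ["W", "N2", "N2", "N3", "N3", "R", "R", "W"]
zmach = ["W", "N2", "N3", "N3", "N3", "R", "N2", "W"]
kappa = cohens_kappa(dreem, zmach)
```

Here the two disagreements fall on N2 and REM epochs, mirroring the stages where the study found the largest discrepancies.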
Procedia PDF Downloads 107
440 Just Child Protection Practice for Immigrant and Racialized Families in Multicultural Western Settings: Considerations for Context and Culture
Authors: Sarah Maiter
Abstract:
Heightened globalization, migration, displacement of citizens, and refugee needs are placing increasing demands on approaches to social services for diverse populations that respond to families, ensuring the safety and protection of vulnerable members while providing supports and services. Along with this, social work's re-focus on socially just approaches to practice increasingly asks social workers to consider the challenging circumstances of families when providing services, rather than focusing on individual shortcomings alone. Child protection workers then struggle to ensure the safety of children while assessing the needs of families. This assessment can prove difficult when providing services to immigrant, refugee, and racially diverse families, as understanding of and familiarity with these families is often limited. Furthermore, child protection intervention in Western countries is state mandated, having legal authority when intervening in the lives of families where child protection concerns have been identified. Within this context, racialized immigrant and refugee families are at risk of misunderstandings that can result in interventions that are overly intrusive, unhelpful, and harsh. Research shows disproportionality and overrepresentation of racial and ethnic minorities and immigrant families in the child protection system. Reasons noted include: a) possibilities of racial bias in reporting and substantiating abuse; b) struggles on the part of workers when working with families from diverse ethno-racial backgrounds who are immigrants and may have limited proficiency in the national language of the country; c) interventions during crisis and differential ongoing services for these families; d) the diverse contexts of these families, which pose additional challenges for families and children; and e) possible differential definitions of child maltreatment. 
While cultural and ethnic diversity in child-rearing approaches has been cited as a contributor to child protection concerns, this explanation should be viewed cautiously, as it can result in stereotyping and generalizing that then lead to inappropriate assessment and intervention. Moreover, poverty and a lack of social supports, both well-known contributors to child protection concerns, impact these families disproportionately. Child protection systems, therefore, need to continue to examine policy and practice approaches with these families that ensure the safety of children while balancing the needs of families. This presentation provides data from several research studies that examined definitions of child maltreatment among a sample of racialized immigrant families, the experiences of a sample of immigrant families with the child protection system, the concerns of a sample of child protection workers in the provision of services to these families, and the struggles of families in the transition to their new country. These studies, along with others, provide insights into areas of consideration for practice that can contribute to safety for children while ensuring just and equitable responses that have greater potential for keeping families together rather than premature apprehension and removal of children to state care. Keywords: child protection, child welfare services, immigrant families, racial and ethnic diversity
Procedia PDF Downloads 292
439 Effectiveness, Safety, and Tolerability Profile of Stribild® in HIV-1-infected Patients in the Clinical Setting
Authors: Heiko Jessen, Laura Tanus, Slobodan Ruzicic
Abstract:
Objectives: The efficacy of Stribild®, an integrase strand transfer inhibitor (INSTI)-based single-tablet regimen (STR), has been evaluated in randomized clinical trials, where it has demonstrated a durable ability to achieve sustained suppression of HIV-1 RNA levels. However, differences in monitoring frequency, selection bias, and the profile of patients enrolled in the trials may all result in divergent efficacy of this regimen in routine clinical settings. The aim of this study was to assess the virologic outcomes, safety, and tolerability profile of Stribild® in a routine clinical setting. Methods: This was a retrospective monocentric analysis of HIV-1-infected patients who started with or were switched to Stribild®. Virological failure (VF) was defined as confirmed HIV-RNA > 50 copies/ml. The minimum time of follow-up was 24 weeks. The percentage of patients remaining free of therapeutic failure was estimated using the time-to-loss-of-virologic-response (TLOVR) algorithm, by intent-to-treat analysis. Results: We analyzed the data of 197 patients (56 ART-naïve and 141 treatment-experienced patients) who fulfilled the inclusion criteria. The majority of patients (95.9%) were male. The median time since HIV infection at baseline was 2 months in treatment-naïve and 70 months in treatment-experienced patients. The median time [IQR] on ART in treatment-experienced patients was 37 months. Among the treatment-experienced patients, 27.0% had already been treated with a regimen consisting of two NRTIs and one INSTI, and 18.4% of them had experienced VF. The median time [IQR] of virological suppression prior to therapy with Stribild® in the treatment-experienced patients was 10 months [0-27]. At the end of follow-up (median 33 months), 87.3% (95% CI, 83.5-91.2) of treatment-naïve and 80.3% (95% CI, 75.8-84.8) of treatment-experienced patients remained free of therapeutic failure. 
Considering only treatment-experienced patients with baseline VL < 50 copies/ml, 83.0% (95% CI, 78.5-87.5) remained free of therapeutic failure. A total of 17 patients stopped treatment with Stribild®: 5.4% (3/56) of the treatment-naïve and 9.9% (14/141) of the treatment-experienced patients. Stribild® therapy was discontinued because of VF in 2 (1.0%) patients, loss to follow-up in 4 (2.0%), and drug-drug interactions in 2 (1.0%). Adverse events were the reason for switching from Stribild® in 7 (3.6%) patients, and a further 2 (1.0%) patients decided personally to switch. The most frequently observed adverse events were gastrointestinal side effects (20.0%), headache (8%), rash events (7%), and dizziness (6%). In two patients, we observed the emergence of novel resistance mutations in the integrase gene: N155H evolved in one patient and resulted in VF, and S119R evolved in another patient either during or shortly after the switch from therapy with Stribild®. In one further patient with VF, two novel mutations in the RT gene were observed compared with the historical genotypic test result (V106I/M and M184V), although it is not clear whether they evolved during or before the switch to Stribild®. Conclusions: The effectiveness of Stribild® for treatment-naïve patients was consistent with data obtained in clinical trials. The safety and tolerability profile, as well as resistance development, confirmed the clinical efficacy of Stribild® in a daily practice setting. Keywords: ART, HIV, integrase inhibitor, stribild
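A simplified sketch of the intent-to-treat TLOVR counting used above follows; the real algorithm has additional rules (e.g., handling of rebound after resuppression and visit windows), and the mini-cohort is invented for illustration.

```python
def tlovr_free_of_failure(patients):
    """Simplified intent-to-treat TLOVR: a patient counts as a
    therapeutic failure on the first CONFIRMED viral load > 50
    copies/ml (two consecutive measurements) or on treatment
    discontinuation for any reason."""
    ok = 0
    for p in patients:
        confirmed_vf = any(a > 50 and b > 50
                           for a, b in zip(p["vl"], p["vl"][1:]))
        if not confirmed_vf and not p["discontinued"]:
            ok += 1
    return 100 * ok / len(patients)

# Invented mini-cohort: suppressed, confirmed VF, transient blip
# (not confirmed, so not a failure), and a discontinuation.
cohort = [
    {"vl": [40, 30, 20], "discontinued": False},
    {"vl": [40, 120, 200], "discontinued": False},
    {"vl": [40, 80, 30], "discontinued": False},
    {"vl": [30, 20, 40], "discontinued": True},
]
pct = tlovr_free_of_failure(cohort)
```

Note how the single-visit blip does not count as failure, while discontinuation does even with full suppression, which is what makes TLOVR a conservative intent-to-treat estimate.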
Procedia PDF Downloads 285
438 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies matching a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance, and a parallel manual selection of all initially retrieved studies was conducted for validation. Final inclusion of studies was based on full-text evaluation and quality assessment, and pooled effects were estimated with random-effects meta-regression models. Sensitivity analyses and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the improvement cluster exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster.
Furthermore, BMI, weight gain, and disease duration were identified as factors influencing cluster assignment, with patients having a lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring a comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes. Keywords: HbA1C, T2DM, SBP, FBS
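The response-pattern clustering described above can be sketched with a minimal pure-Python K-means (Lloyd's algorithm); the patient vectors (hypothetical changes in HbA1C, in %, and FBS, in mg/dl) and the fixed initial centroids are illustrative assumptions, not the study's data:

```python
def kmeans(points, centroids, iters=100):
    """Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    k, dim = len(centroids), len(points[0])
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                        for d in range(dim)))
                  for p in points]
        # update step: centroid = mean of its cluster (kept if cluster empty)
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(p[d] for p in members) / len(members)
                                for d in range(dim)]
    return labels, centroids

# Hypothetical per-patient (delta HbA1C, delta FBS) responses
patients = [(-1.2, -25), (-0.9, -18), (-1.1, -22),   # clear improvement
            (0.1, 2), (-0.1, -1), (0.0, 3)]          # minimal change
labels, centroids = kmeans(patients, [[-1.0, -20.0], [0.0, 0.0]])
print(labels)  # first three patients fall into the "improvement" cluster
```

With two well-separated response patterns, the algorithm reproduces exactly the two-cluster structure reported above: an improvement cluster and a minimal-change cluster.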
Procedia PDF Downloads 11
437 Inverse Prediction of Thermal Parameters of an Annular Hyperbolic Fin Subjected to Thermal Stresses
Authors: Ashis Mallick, Rajeev Ranjan
Abstract:
The closed-form solution for thermal stresses in an annular fin with hyperbolic profile is derived using the Adomian decomposition method (ADM). A conductive-convective fin with variable thermal conductivity is considered in the analysis. The nonlinear heat transfer equation is efficiently solved by ADM with an insulated convective boundary condition at the fin tip. The constant of integration in the solution is estimated using the minimum decomposition error method. The temperature-field solution is represented in polynomial form for convenient use in the thermo-elasticity equation. The non-dimensional thermal stress fields are obtained by coupling the ADM solution of the temperature field with the thermo-elasticity solution. The influence of the various thermal parameters on the temperature and stress fields is presented. To demonstrate the accuracy of the ADM solution, the present results are compared with results available in the literature. The stress fields in the fin with hyperbolic profile are compared with those of a uniform-thickness profile. Results show that the hyperbolic fin profile is the better choice for enhancing heat transfer; moreover, lower thermal stresses develop in the hyperbolic profile than in the rectangular profile. Next, a Nelder-Mead based simplex search method is employed for the inverse estimation of unknown non-dimensional thermal parameters from a given stress field. Owing to the correlated nature of the unknowns, the best combination of model parameters satisfying the predefined stress field is estimated. The stress fields calculated using the inversely estimated parameters agree very well with those obtained from the forward solution. The estimated parameters are therefore suitable for efficient and cost-effective fin design. Keywords: Adomian decomposition, inverse analysis, hyperbolic fin, variable thermal conductivity
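The inverse step can be sketched as a least-squares fit: a minimal pure-Python Nelder-Mead simplex search recovers two parameters of a toy forward model from a "measured" stress profile. The forward model, sample radii, and true parameter values below are hypothetical stand-ins for the paper's fin model, not its actual equations:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Minimal Nelder-Mead simplex search (reflect/expand/contract/shrink)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]  # reflection
        if f(refl) < f(best):
            expa = [3 * centroid[i] - 2 * worst[i] for i in range(n)]  # expansion
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # inside contraction; if that fails, shrink toward the best vertex
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [best] + [[0.5 * (v[i] + best[i]) for i in range(n)]
                                    for v in simplex[1:]]
    return min(simplex, key=f)

# Hypothetical forward model: stress at radius r for parameters (a, b)
def stress(r, a, b):
    return a * (1.0 - r) ** 2 + b * r

radii = [0.1 * i for i in range(11)]
measured = [stress(r, 0.5, 1.2) for r in radii]  # the "given" stress field

# Objective: sum of squared residuals between model and measured stresses
def sse(params):
    a, b = params
    return sum((stress(r, a, b) - m) ** 2 for r, m in zip(radii, measured))

a_est, b_est = nelder_mead(sse, [1.0, 0.0])  # recovers roughly (0.5, 1.2)
```

Because the simplex search is derivative-free, it suits inverse problems like this one, where the stress field depends on the parameters through a decomposition-series solution with no convenient analytic gradient.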
Procedia PDF Downloads 327