Search results for: three step search
264 Combining Patients' Pain Score Reports with Functionality Scales in Chronic Low Back Pain Patients
Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic
Abstract:
Background: While pain intensity scales remain a generally accepted assessment tool, and the numeric pain rating score is highly subjective, we nevertheless rely on them to make judgments about treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient's medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Pain scores, numeric rating scale (NRS) responses at rest and during movement, and Oswestry Disability Index (ODI) questionnaire answers were collected 10 times over 12 months. The ODI questionnaire targets a patient's activities and physical limitations, as well as the ability to manage stationary everyday duties. Statistical analysis was performed using SPSS software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. All patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered "normal". Discrepancies in pain scores were graded as mild between -30% and -11% or between +11% and +30%; moderate between -50% and -31% or between +31% and +50%; and severe if differences were more than -50% or +50%. Our data showed that pain scores at rest correlated well with ODI in 65% of patients. In 30% of patients, mild discrepancies were present (negative in 21% and positive in 9%); 4% of patients had moderate and 1% severe discrepancies. "Negative discrepancy" means that patients graded their pain scores much higher than their functional ability, and most likely exaggerated their pain. "Positive discrepancy" means that patients graded their pain scores much lower than their functional ability, and most likely underrated their pain. Comparisons between ODI and pain scores during movement showed normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%); moderate in 14% (all negative), and severe in 5% (all negative) of patients. Overall, 58% of patients unknowingly exaggerated their pain during movement. Inconsistencies were equal in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients' satisfaction and the degree of pain-reporting inconsistency. Furthermore, patients taking opioids showed more discrepancies in reporting pain intensity scores than did patients taking non-opioid analgesics or not taking medications for LBP (p=0.038). There was a highly statistically significant correlation between morphine equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We place emphasis on patient education in pain evaluation as a vital step in accurate pain level reporting. We have shown a direct correlation with patients' satisfaction. Furthermore, we must identify other parameters for defining our patients' chronic pain conditions, such as functionality scales and quality of life questionnaires, and should move away from an overly simplistic subjective rating scale.
Keywords: pain score, functionality scales, low back pain, lumbar
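A minimal sketch of the discrepancy-grading rule described above. It assumes the NRS pain score is rescaled to a 0-100% range and that the difference is taken as ODI minus pain, a sign convention inferred from the definitions of negative and positive discrepancy; the abstract does not state the exact formula.

```python
# Minimal sketch of the discrepancy-grading rule described in the abstract.
# Assumption (not stated in the abstract): the NRS pain score is rescaled to a
# 0-100% range and the discrepancy is taken as ODI(%) minus pain(%), so that a
# negative value corresponds to pain rated above functional disability.

def grade_discrepancy(odi_percent: float, nrs_0_to_10: float) -> str:
    """Classify the mismatch between functional disability (ODI) and pain (NRS)."""
    pain_percent = nrs_0_to_10 * 10.0      # rescale the 0-10 NRS to 0-100%
    diff = odi_percent - pain_percent      # assumed sign convention (see above)
    magnitude = abs(diff)
    if magnitude <= 10:
        return "normal"
    if magnitude <= 30:
        category = "mild"
    elif magnitude <= 50:
        category = "moderate"
    else:
        category = "severe"
    direction = "negative" if diff < 0 else "positive"
    return f"{category} ({direction} discrepancy)"

# Example: ODI of 40% with a resting NRS of 8/10 gives 40% - 80% = -40%,
# i.e. a moderate negative discrepancy (pain likely overstated).
print(grade_discrepancy(odi_percent=40, nrs_0_to_10=8))
```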
Procedia PDF Downloads 234
263 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by machine learning, precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in precise medicine. In fact, current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept for information processing in precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, natural language processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known "needle in a haystack" approach usually used when machine learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the "common denominator" rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical "proofs". The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
262 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that would provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots in early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed where both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform; 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot; and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested with a two-year-old boy with Down syndrome over eight sessions. The child presented delays throughout his motor development, the current one being the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, which is a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. With time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction. Smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
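As an illustration of the smoothing idea mentioned above (not the authors' implementation), the sketch below estimates MDP transition probabilities from a handful of observed (state, action, next state) triples using additive (Laplace) smoothing; the state and action names are hypothetical.

```python
# Illustrative sketch: estimating MDP transition probabilities from a small set
# of observed (state, action, next_state) triples with additive ("Laplace")
# smoothing, so unseen transitions do not get zero probability.
from collections import defaultdict

STATES = ["resting", "stepping", "climbing"]      # hypothetical child activity states
ACTIONS = ["robot_moves", "robot_signals"]        # hypothetical robot actions

def estimate_transitions(observations, alpha=1.0):
    """Return smoothed P(next_state | state, action) from observed triples."""
    counts = defaultdict(lambda: defaultdict(int))
    for state, action, next_state in observations:
        counts[(state, action)][next_state] += 1
    probs = {}
    for state in STATES:
        for action in ACTIONS:
            total = sum(counts[(state, action)].values())
            denom = total + alpha * len(STATES)   # additive smoothing over all states
            probs[(state, action)] = {
                s: (counts[(state, action)][s] + alpha) / denom for s in STATES
            }
    return probs

# Tiny, artificial data set: with only a few observations, unsmoothed estimates
# would assign zero probability to every transition that was never observed.
data = [("resting", "robot_signals", "stepping"),
        ("stepping", "robot_moves", "climbing"),
        ("resting", "robot_signals", "stepping")]
print(estimate_transitions(data)[("resting", "robot_signals")])
```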
Procedia PDF Downloads 292
261 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction
Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl
Abstract:
Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques with the potential of being used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate it with the reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria will serve as our example. Furthermore, in situ Transmission Electron Microscopy is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a greater and deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ Prompt Gamma Activation Analysis (PGAA) experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed homemade transmission cell. For real in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH). The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments have shown that the OH density correlates positively with the reactivity, whereas the Cl uptake correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase of the Cl uptake was found. After considering all previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism of the catalysed reaction is proposed. The chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step of the catalysed reaction. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.
Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR
Procedia PDF Downloads 291
260 Functional Ingredients from Potato By-Products: Innovative Biocatalytic Processes
Authors: Salwa Karboune, Amanda Waglay
Abstract:
Recent studies indicate that health-promoting functional ingredients and nutraceuticals can help support and improve overall public health, which is timely given the aging of the population and the increasing cost of health care. The development of novel 'natural' functional ingredients is increasingly challenging. Biocatalysis offers powerful approaches to achieve this goal. Our recent research has been focusing on the development of innovative biocatalytic approaches towards the isolation of protein isolates from potato by-products and the generation of peptides. Potato is a vegetable whose high-quality proteins are underestimated. In addition to their high proportion of essential amino acids, potato proteins possess angiotensin-converting enzyme-inhibitory potency, an ability to reduce plasma triglycerides associated with a reduced risk of atherosclerosis, and the capacity to stimulate the release of the appetite-regulating hormone CCK. Potato proteins have long been considered economically unfeasible due to the low protein content (27% dry matter) found in the tuber (Solanum tuberosum). However, potato ranks as the second-largest protein-supplying crop grown per hectare, following wheat. Potato proteins include patatin (40-45 kDa), protease inhibitors (5-25 kDa), and various high-MW proteins. Non-destructive techniques for the extraction of proteins from potato pulp and for the generation of peptides are needed in order to minimize functional losses and enhance quality. A promising approach for isolating the potato proteins was developed, which involves the use of multi-enzymatic systems containing selected glycosyl hydrolase enzymes that work synergistically to open the plant cell wall network. This enzymatic approach is advantageous due to: (1) the use of milder reaction conditions, (2) the high selectivity and specificity of enzymes, (3) the low cost, and (4) the ability to market natural ingredients. Another major benefit of this enzymatic approach is the elimination of a costly purification step; indeed, these multi-enzymatic systems have the ability to isolate proteins while fractionating them, due to their specificity and selectivity, with minimal proteolytic activities. The isolated proteins were used for the enzymatic generation of active peptides. In addition, they were applied in a reduced-gluten cookie formulation, as consumers place a high demand on easy, ready-to-eat snack foods with high nutritional quality and limited to no gluten incorporation. The addition of potato protein significantly improved the textural hardness of reduced-gluten cookies, making it more comparable to wheat flour alone. The presentation will focus on our recent 'proof-of-principle' results illustrating the feasibility and the efficiency of new biocatalytic processes for the production of innovative functional food ingredients from potato by-products, whose potential health benefits are increasingly being recognized.
Keywords: biocatalytic approaches, functional ingredients, potato proteins, peptides
Procedia PDF Downloads 379
259 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus
Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya
Abstract:
Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work under GPS-denied conditions. CATE relies on its GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus due to high-rise buildings or trees. The UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
Keywords: driverless vehicle, path planning, sensor fusion, state estimate
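For illustration, a minimal A* sketch over the kind of spatial graph described above (vertices as campus landmarks, weighted edges as walkable paths, straight-line distance as the heuristic); the node names, coordinates, and edge costs are invented.

```python
# Hypothetical A* sketch on a small landmark graph; not the project's code.
import heapq, math

coords = {"gate": (0, 0), "library": (2, 1), "quad": (2, 3), "gym": (5, 3)}
graph = {  # adjacency list with edge costs (e.g., path lengths, arbitrary units)
    "gate": {"library": 2.3},
    "library": {"gate": 2.3, "quad": 2.0},
    "quad": {"library": 2.0, "gym": 3.0},
    "gym": {"quad": 3.0},
}

def heuristic(a, b):
    """Straight-line distance between landmarks (admissible for path lengths)."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    open_set = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_g
                f = new_g + heuristic(neighbor, goal)
                heapq.heappush(open_set, (f, new_g, neighbor, path + [neighbor]))
    return None, float("inf")

print(a_star("gate", "gym"))   # (['gate', 'library', 'quad', 'gym'], 7.3)
```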
Procedia PDF Downloads 144
258 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives
Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali
Abstract:
Mentoring is a structured activity designed to facilitate engagement between mentor and mentee to enhance the mentee's professional capability as an effective teacher. Both mentor and mentee are important elements of the 'mentoring equation' and play important roles in nourishing this dynamic, collaborative, and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project that focused on the development of primary school teachers in selected clusters, with a particular focus on their classroom practice. A study was designed to examine the efficacy of CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents results of one of the components of this study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees' perspectives in selected districts of Sindh and Balochistan. This paper focuses on the results of the study related to the question: What are mentees' perceptions of their mentors' support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, 'Mentoring for Effective Primary Teaching' (MEPT). MEPT focuses on seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirements, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees' perceptions of the mentoring practices of their mentors were summarized using means and standard deviations. Results showed that mean scale scores on mentees' perceptions of their mentors' practices fell between 3.58 (system requirements) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees. Furthermore, mentees shared positive views about their mentors' efforts towards promoting gender impartiality (M=4.54) during workshops and follow-up visits. Contrary to this, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g., school policies, national curriculum). Furthermore, some of the aspects in high-scoring factors were highlighted by the mentees as areas for further improvement (e.g., assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees' perceptions of their mentors' practices may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees in specific mentoring programmes, in order to enhance practices in primary classrooms in Pakistan. The results would also contribute to the body of much-needed knowledge from a developing context.
Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey
Procedia PDF Downloads 233
257 Exploring Closed-Loop Business Systems Which Eliminate Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments That Occurred in the Last Decade
Authors: Bukra Kalayci, Geraldine Brennan
Abstract:
Introduction: Over the last decade, a proliferation of literature on the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched. Therefore, end-of-life and end-of-use textile waste management remains a gap. The solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. Environmental problems associated with the over-production and over-consumption of textile products are mounting. Together with a growing population and a fast fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles, and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted in order to identify best practices and gaps in the existing body of knowledge related to closed-loop post-consumer textile waste initiatives over the last decade. Selected keywords, namely 'cradle-to-cradle', 'circular* economy*', 'closed-loop*', 'end-of-life*', 'reverse* logistic*', 'take-back*', 'remanufacture*', 'upcycle*', in combination with (AND) 'fashion*', 'garment*', 'textile*', 'apparel*', 'clothing*', were used, and the time frame of the review was set from 2005 to 2017. In order to obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined to 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles, and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were also largely overlooked in the literature. While the circular economy suggests multiple cycles of discarded products, components, or materials, most research has to date tended to focus on a single cycle. Thus, the calculations of the environmental and economic benefits of closed-loop systems are limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time period textile products spend between the point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework that enables manufacturers to clarify the reusability potential of textile products through consideration of indicators related to: quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand segment considerations (e.g., fast fashion versus luxury brands).
Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination
Procedia PDF Downloads 204
256 Identification of Three Strategies to Enhance University Students' Professional Identity, Using Hierarchical Regression Analysis
Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez
Abstract:
Students' transitions from high school to university have been challenged by the lack of continuity between both contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases dropout rates and reduces students' academic success. This discontinuity emanates because 'transitions concern a restructuring of what the person does and who the person perceives him or herself to be'. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as who am I? or who are we? It is made up of personal identity and of as many social identities as there are groups the individual feels he/she is a part of. A case in point of constructing a social identity is identification with a profession. For this reason, one way to lighten the tension generated during transitions is to apply strategies oriented towards enhancing students' professional identity at their point of entry to the higher education institution. That would create a sense of continuity between the high school and higher education contexts, increasing their Professional Identity Strength. To develop strategies oriented towards enhancing students' Professional Identity, it is important to analyze what influences it. Several factors influence Professional Identity (e.g., professional status, the recommendation of family and peers, the academic environment, or the chosen bachelor degree). There is a gap in the literature analyzing the impact of these factors on more than one bachelor degree. In this regard, our study takes an additional step with the aim of evaluating the influence of several factors on Professional Identity using a cohort of university students from multiple degrees between the ages of 17-19 years. To do so, we used hierarchical regression analyses to assess the impact of the following factors: External Motivation Conditionals (EMC), Educational Experience Conditionals (EEC), and Personal Motivational Conditionals (PMC). After conducting the analyses, we found that the assessed factors influenced students' professional identity differently according to their bachelor degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics, and engineering students were influenced only by PMC. Based on these influences, we proposed three different strategies aimed at enhancing students' professional identity in the short and long term. These strategies are: to enhance students' professional identity before their incorporation into the university through campuses and icebreaker activities; to apply recruitment strategies aimed at providing realistic information about the bachelor degree; and to incorporate different activities, such as in-vitro, in situ, and self-directed activities, aimed at longitudinally enhancing students' professional identity from within the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor degrees, since there is still no evidence. Second, using the obtained results as a benchmark, we contribute from a practical perspective by proposing several alternative strategies to increase students' professional identity strength, aiming to lighten their transition from high school to higher education.
Keywords: professional identity, higher education, educational strategies, students
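A sketch of how such a hierarchical (blockwise) regression could be run, shown here with statsmodels; the variable names follow the abstract's factor labels, while the data frame, its columns, and the block order are hypothetical.

```python
# Illustrative hierarchical regression sketch (not the study's code): predictor
# blocks are added step by step and the change in R-squared is reported.
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_regression(df: pd.DataFrame):
    """Fit nested models and print R-squared and its increment per step."""
    blocks = [
        "EMC",               # Step 1: External Motivation Conditionals
        "EMC + EEC",         # Step 2: + Educational Experience Conditionals
        "EMC + EEC + PMC",   # Step 3: + Personal Motivational Conditionals
    ]
    previous_r2 = 0.0
    for i, rhs in enumerate(blocks, start=1):
        model = smf.ols(f"professional_identity ~ {rhs}", data=df).fit()
        print(f"Step {i}: R2 = {model.rsquared:.3f} "
              f"(delta R2 = {model.rsquared - previous_r2:.3f})")
        previous_r2 = model.rsquared

# Usage with a hypothetical data set of student scores:
# df = pd.read_csv("students.csv")  # columns: professional_identity, EMC, EEC, PMC
# hierarchical_regression(df)
```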
Procedia PDF Downloads 144
255 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes
Authors: Alicia Ettlin
Abstract:
Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people started to internalise a neoliberal way of thinking, whereby the human body became an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, become increasingly criticised for viewing the social actor as 'disembodied', as a detached social actor whose powerful mind governs over the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse, which creates an embodied understanding that the body, just as other areas of people's lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body in terms of its connections with the social environment that reaches beyond the debates around mind-body binary thinking. Hence, following this argument, body management should not be thought of as either solely guided by embodied discourses nor as merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. Self-knowledge refers to body measurements including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. A neoliberal way of thinking about and looking after the body has also been connected by many participants to rewarding themselves for their discipline, hard work or achievement of specific body management goals (e.g. eating chocolate for reaching the daily step count goal). A few participants, however, have shown resistance against these neoliberal values, and in particular, against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility, and by association, a sense of duty to discipline their body in normative ways. Even those who indicated resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of health but also appearance ideals. The discussion around the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.
Keywords: dualism, embodiment, mind, neoliberalism
Procedia PDF Downloads 163
254 Memories of Lost Fathers: The Unfinished Transmission of Generational Values in Hungarian Cinema by Peter Falanga
Authors: Peter Falanga
Abstract:
During the process of de-Stalinization that began in 1956 with the Twentieth Congress of the Soviet Communist Party, many filmmakers in Hungary chose to explore their country's political discomforts by using Socialist Realism as a negative model against which they could react to the dominating ideology. A renewed national film industry and a more permissive political regime allowed filmmakers to address the plight of the preceding generation, who had experienced the fatal political turmoil of both World Wars and the purges of Stalin. What follows is no longer the multigenerational unity found in Socialist Realism, wherein both the old and the young embrace Stalin's revolutionary optimism; instead, the protagonists are parentless, and thus their connection to the previous generation is partially severed. In these films, violent historical forces leave one generation to search both for a connection with their family's past and for moral guidance to direct their future. István Szabó's Father (1966), Márta Mészáros' Diary for My Children (1984), and Pál Gábor's Angi Vera (1978) each consider the fraught relationship between successive generations through the lens of postwar youth. A characteristic each of their protagonists share is that they are all missing one or both parents and cope with familial loss either through recalling memories of their parents in dream-like sequences or, in the case of Angi Vera, through embracing the surrogate paternalism that the Communist Party promises to provide. This paper considers the argument these films present about the progress of Hungarian history, and how this topic is explored in more recent films that similarly focus on the transmission of generational values. Scholars such as László Strausz and John Cunningham have written on the continuing concern with the transmission of generational values in more recent films such as István Szabó's Sunshine (1999), Béla Tarr's Werckmeister Harmonies (2000), György Pálfi's Taxidermia (2006), Ágnes Kocsis' Pál Adrienn (2010), and Kornél Mundruczó's Evolution (2021). These films, they argue, make intimate portrayals of the various sweeping political changes in Hungary's history and question how these epochs or events have impacted Hungarian identities. If these films attempt to personalize the historical shifts of Hungary, then what is the significance of featuring characters who have lost one or both parents? An attempt to understand this coherent trend in Hungarian cinema will profit from examining the earlier, celebrated films of Szabó, Mészáros, and Gábor, who inaugurated this preoccupation with generational values. The pervasive interplay of dreams and memory in their films invites an additional element to their argument concerning historical progression. This paper incorporates Richard Teniman's notion of the 'dialectics of memory', in which memory is in a constant process of negation and reinvention, to explain why these directors prefer to explore Hungarian identity through the disarranged form of psychological realism over the linear causality structure of historical realism.
Keywords: film theory, Eastern European Studies, film history, Eastern European History
Procedia PDF Downloads 122
253 Influence of Infrared Radiation on the Growth Rate of Microalgae Chlorella sorokiniana
Authors: Natalia Politaeva, Iuliia Smiatskaia, Iuliia Bazarnova, Iryna Atamaniuk, Kerstin Kuchta
Abstract:
Nowadays, the progressive depletion of primary natural resources and the ongoing upward trend in energy demand have resulted in the development of new-generation technological processes focused on step-wise production and residue utilization. Thus, a microalgae-based third-generation bioeconomy is considered one of the most promising approaches, allowing the production of value-added products and the sophisticated utilization of residual biomass. In comparison to conventional biomass, microalgae can be cultivated over a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. However, one of the most challenging tasks is to cope with seasonal variations and to achieve optimal growing conditions for indoor closed systems that can cover further demand for the material and energetic utilization of microalgae. For instance, outdoor cultivation in St. Petersburg (Russia) is only suitable within a rather narrow time frame (from mid-May to mid-September). At earlier and later periods, insufficient sunlight and heat for the growth of microalgae were detected. On the other hand, without additional physical effects, the biomass increment in summer is 3-5 times per week, depending on the solar radiation and the ambient temperature. In order to increase biomass production, scientists from all over the world have proposed various technical solutions for cultivators and have been studying the influence of various physical factors affecting biomass growth, namely magnetic fields, radiation, electric fields, etc. In this paper, the influence of infrared radiation (IR) and fluorescent light on the growth rate of the microalga Chlorella sorokiniana has been studied. The cultivation of Chlorella sorokiniana was carried out in 500 ml cylindrical glass vessels, which were constantly aerated. To accelerate the cultivation process, the mixture was stirred for 15 minutes at 500 rpm following 120 minutes of rest time. At the same time, the metabolic nutrient needs were met by the addition of micro- and macro-nutrients to the microalgae growing medium. Lighting was provided by fluorescent lamps with an intensity of 2500 ± 300 lx. The influence of IR was determined using IR lamps with a voltage of 220 V and a power of 250 W, in order to achieve an intensity of 13,600 ± 500 lx. The obtained results show that under the influence of fluorescent lamps, along with the combined effect of active aeration and variable mixing, the biomass increment was three-fold on the 2nd day and eight-fold on the 7th day. The growth of microalgae under the influence of IR was lower and reached 22.6·10⁶ cells·mL⁻¹. However, the application of IR lamps for biomass growth allows the optimal temperature of the microalgae suspension to be maintained at approximately 25-28°C, which might be especially beneficial during the cold season in extreme climate zones.
Keywords: biomass, fluorescent lamp, infrared radiation, microalgae
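For reference, and assuming exponential growth between counts (an assumption not stated in the abstract), the reported eight-fold increase over seven days under fluorescent light corresponds to a specific growth rate of roughly:

```latex
\mu = \frac{\ln\!\left(N_t / N_0\right)}{t - t_0}
    \approx \frac{\ln 8}{7\ \text{d}}
    \approx 0.30\ \text{d}^{-1}
```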
Procedia PDF Downloads 187
252 A Dynamic Model for Circularity Assessment of Nutrient Recovery from Domestic Sewage
Authors: Anurag Bhambhani, Jan Peter Van Der Hoek, Zoran Kapelan
Abstract:
The food system depends on the availability of phosphorus (P) and nitrogen (N). A growing population, depleting phosphorus reserves, and energy-intensive industrial nitrogen fixation are threats to their future availability. Recovering P and N from domestic sewage water offers a solution. Recovered P and N can be applied to agricultural land, replacing virgin P and N. Thus, recovery from sewage water offers a solution befitting a circular economy. To ensure minimum waste and maximum resource efficiency, a circularity assessment method is crucial to optimize nutrient flows and minimize losses. The Material Circularity Indicator (MCI) is a useful method to quantify the circularity of materials. It was developed for materials that remain within the market and was recently extended to include biotic materials that may be composted or used for energy recovery after end-of-use. However, MCI has not been used in the context of nutrient recovery. Besides, MCI is time-static, i.e., it cannot account for dynamic systems such as the terrestrial nutrient cycles. Nutrient application to agricultural land is a highly dynamic process wherein flows and stocks change with time. The rate of recycling of nutrients in nature can depend on numerous factors, such as prevailing soil conditions, local hydrology, the presence of animals, etc. Therefore, a dynamic model of nutrient flows with indicators is needed for the circularity assessment. A simple substance flow model of P and N will be developed with the help of flow equations and transfer coefficients that incorporate the nutrient recovery step along with the agricultural application, the volatilization and leaching processes, plant uptake, and subsequent animal and human uptake. The model is then used for calculating the proportions of linear and restorative flows (coming from reused/recycled sources). The model will simulate the adsorption process based on the quantity of adsorbent and the nutrient concentration in the water. Thereafter, the application of the adsorbed nutrients to agricultural land will be simulated based on adsorbate release kinetics, local soil conditions, hydrology, vegetation, etc. Based on the model, the restorative nutrient flow (returning to the sewage plant following human consumption) will be calculated. The developed methodology will be applied to a case study of resource recovery from wastewater. In this case study, located in Italy, biochar or zeolite is to be used for the recovery of P and N from domestic sewage through adsorption and thereafter used as a slow-release fertilizer in agriculture. Using this model, information regarding the efficiency of nutrient recovery and application can be generated, which can help to optimize the recovery process and the application of the nutrients. Consequently, this will reduce the dependence of the food system on the virgin extraction of P and N.
Keywords: circular economy, dynamic substance flow, nutrient cycles, resource recovery from water
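As a toy illustration of such a dynamic substance flow formulation (not the model under development), the sketch below steps a single phosphorus flow through recovery, soil application, losses, and uptake using invented transfer coefficients, and reports the share of crop demand covered by the restorative flow.

```python
# Toy, hypothetical substance flow sketch with invented transfer coefficients;
# values are placeholders, not results from the study.
def simulate_p_flows(years=10, p_in_sewage=100.0):
    """Track a yearly phosphorus flow (e.g., tonnes P/yr) through recovery,
    soil application, losses and crop uptake, and report the share of the
    crop demand covered by the restorative (recovered) flow."""
    k_recovery  = 0.6   # fraction of sewage P captured by the adsorbent
    k_release   = 0.3   # fraction of the soil P stock released to crops per year
    k_loss      = 0.05  # fraction of the soil P stock lost (leaching/runoff) per year
    crop_demand = 40.0  # virgin P that would otherwise be applied each year

    soil_stock = 0.0
    for year in range(1, years + 1):
        recovered = k_recovery * p_in_sewage        # adsorption/recovery step
        soil_stock += recovered                     # slow-release fertilizer applied
        uptake = k_release * soil_stock             # plant uptake from the stock
        losses = k_loss * soil_stock
        soil_stock -= uptake + losses
        restorative_share = min(uptake / crop_demand, 1.0)
        print(f"year {year:2d}: uptake {uptake:5.1f}, "
              f"restorative share of demand {restorative_share:.0%}")

simulate_p_flows()
```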
Procedia PDF Downloads 197
251 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is characterized as highly turbulent and strongly influenced by modern information and communication technologies, thus forcing SMEs to experience more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators are primarily obtained via questionnaires, which is very laborious and time-consuming, or are provided by financial institutes and thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables the automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have been frequently studied to anticipate the growth of sales volume for e-commerce platforms, their application for the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential in revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data on governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company are used as ground truth data (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
Keywords: data mining, SME growth, success factors, web mining
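A minimal sketch of how the named classifiers could be compared with cross-validation in scikit-learn; the feature matrix and labels below are random placeholders standing in for the web-mined features and the insurer's growth labels, not the study's data.

```python
# Illustrative classifier comparison (not the study's code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))       # stand-in for web-mined SME features
y = rng.integers(0, 2, size=200)     # stand-in for growth / no-growth labels

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural Network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:14s} AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```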
Procedia PDF Downloads 266
250 The Burmese Exodus of 1942: Towards Evolving Policy Protocols for a Refugee Archive
Authors: Vinod Balakrishnan, Chrisalice Ela Joseph
Abstract:
The Burmese Exodus of 1942, which left more than 4 lakh (400,000) people as refugees and thousands dead, is one of the worst forced migrations in recorded history. Adding to the woes of the refugees is the lack of credible documentation of their lived experiences, trauma, and stories, and their erasure from recorded history. Media reports, national records, and mainstream narratives that have registered the exodus provide sanitized versions which have reduced the refugees to a nameless, faceless mass of travelers and obliterated their lived experiences, trauma, and sufferings. This attitudinal problem compels the need to stem the insensitivity that accompanies institutional memory by making a case for a more humanistically evolved policy that puts in place protocols for the way the humanities would voice concern for the refugee. A definite step in this direction, and a far more relevant project in our times, is the need to build a comprehensive refugee archive that can be a repository of refugee experiences and perspectives. The paper draws on Hannah Arendt's position on the Jewish refugee crisis, Agamben's work on statelessness and citizenship, Foucault's notion of governmentality and biopolitics, Edward Said's concepts on exile, Fanon's work on the dispossessed, and Derrida's work on 'the foreigner and hospitality' in order to conceptualize the refugee condition, which forms the theoretical framework for the paper. It also refers to the existing scholarship in the field of refugee studies, such as Roger Zetter's work on the 'refugee label', Philip Marfleet's work on 'refugees and history', and Lisa Malkki's research on the anthropological discourse of the refugee and refugee studies. The paper is also informed by the work that has been done by international organizations to address the refugee crisis. The emphasis is on building a strong argument for the establishment of the refugee archive, which finds but a passing and none too convincing reference in refugee studies, in order to enable a multi-dimensional understanding of the refugee crisis. Some of the old questions cannot be dismissed as outdated, as the continuing travails of refugees in different parts of the world only remind us that they are still, largely, unanswered. The questions are: What is the nature of a Refugee Archive? How is it different from the existing historical and political archives? What are the implications of the refugee archive? What is its contribution to refugee studies? The paper draws on Diana Taylor's concept of the archive and the repertoire to theorize the refugee archive as a repository that has the documentary function of the 'archive' and the 'agency' function of the repertoire. It then reads Ayya's Accounts, a memoir by Anand Pandian, in the light of Hannah Arendt's concepts of the 'refugee as vanguard' and 'storytelling as political action' to illustrate how the memoir contributes to the refugee archive that provides the refugee a place and agency in history. The paper argues for a refugee archive that has implications for the formulation of inclusive refugee policies.
Keywords: Ayya's Accounts, Burmese Exodus, policy protocol, refugee archive
Procedia PDF Downloads 140
249 Hydrogen Storage Systems for Enhanced Grid Balancing Services in Wind Energy Conversion Systems
Authors: Nezmin Kayedpour, Arash E. Samani, Siavash Asiaban, Jeroen M. De Kooning, Lieven Vandevelde, Guillaume Crevecoeur
Abstract:
The growing adoption of renewable energy sources, such as wind power, in electricity generation is a significant step towards a sustainable and decarbonized future. However, the inherent intermittency and uncertainty of wind resources pose challenges to the reliable and stable operation of power grids. To address this, hydrogen storage systems have emerged as a promising and versatile technology to support grid balancing services in wind energy conversion systems. In this study, we propose a supplementary control design that enhances the performance of the hydrogen storage system by integrating wind turbine (WT) pitch and torque control systems. These control strategies aim to optimize the hydrogen production process, ensuring efficient utilization of wind energy while complying with grid requirements. The wind turbine pitch control system plays a crucial role in managing the turbine's aerodynamic performance. By adjusting the blade pitch angle, the turbine's rotational speed and power output can be regulated. Our proposed control design dynamically coordinates the pitch angle to match the wind turbine's power output with the optimal hydrogen production rate. This ensures that the electrolyzer receives a steady and optimal power supply, avoiding unnecessary strain on the system during high wind speeds and maximizing hydrogen production during low wind speeds. Moreover, the wind turbine torque control system is incorporated to facilitate efficient operation at varying wind speeds. The torque control system optimizes the energy capture from the wind while limiting mechanical stress on the turbine components. By harmonizing the torque control with hydrogen production requirements, the system maintains stable wind turbine operation, thereby enhancing the overall energy-to-hydrogen conversion efficiency. To enable grid-friendly operation, we introduce a cascaded controller that regulates the electrolyzer's electrical power-current in accordance with grid requirements. This controller ensures that the hydrogen production rate can be dynamically adjusted based on real-time grid demands, supporting grid balancing services effectively. By maintaining a close relationship between the wind turbine's power output and the electrolyzer's current, the hydrogen storage system can respond rapidly to grid fluctuations and contribute to enhanced grid stability. In this paper, we present a comprehensive analysis of the proposed supplementary control design's impact on the overall performance of the hydrogen storage system in wind energy conversion systems. Through detailed simulations and case studies, we assess the system's ability to provide grid balancing services, maximize wind energy utilization, and reduce greenhouse gas emissions.
Keywords: active power control, electrolyzer, grid balancing services, wind energy conversion systems
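A highly simplified sketch of the cascaded idea described above, assuming illustrative gains, a fixed stack voltage, and a first-order current response, none of which come from the study: the outer loop converts a grid-driven power setpoint into a current setpoint, and an inner PI loop tracks it.

```python
# Hypothetical cascade sketch: outer power-to-current conversion, inner PI
# current loop on a first-order plant. All numbers are illustrative only.
def simulate_cascade(steps=50, dt=0.1):
    v_stack      = 220.0        # assumed electrolyzer stack voltage [V]
    p_setpoint   = 50_000.0     # grid-requested electrolyzer power [W]
    kp, ki       = 0.8, 2.0     # inner-loop PI gains (illustrative)
    tau          = 0.5          # assumed first-order current response time [s]

    current, integral = 0.0, 0.0
    for k in range(steps):
        i_setpoint = p_setpoint / v_stack          # outer loop: P -> I setpoint
        error = i_setpoint - current               # inner loop: PI on current
        integral += error * dt
        u = kp * error + ki * integral             # commanded current
        current += dt * (u - current) / tau        # simple first-order plant
        if k % 10 == 0:
            print(f"t={k*dt:4.1f}s  I={current:6.1f} A  "
                  f"P={current * v_stack / 1000:6.1f} kW")

simulate_cascade()
```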
Procedia PDF Downloads 84
248 Development of an Interface between a BIM Model and an AI-Based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. Especially for achieving positive energy balances, as requested for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainability in the use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information for the representation of outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to provide this data structure, to extract the data required for the simulation from the BIM models, and to make it usable for the calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. This tool uses a ray tracing method to determine possible glare from solar reflections off a neighboring building as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account. This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, but also the daylight and solar entry. As a next step, these results of the calculations, as well as all necessary parameters for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, connecting the resulting shading and daylighting system with the artificial lighting system and maximum power generation in a control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The developed automated workflow concept in this paper is tested using an office living lab at the HELLA company.
Keywords: BIPV, building simulation, optimized control strategy, planning tool
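As a simplified, hypothetical illustration of the geometric core of such a shading check (the actual tool works on full IFC geometry), the sketch below casts a ray from a window point towards the sun and tests it against an axis-aligned bounding box of a neighbouring building using the slab method; all coordinates are invented.

```python
# Hypothetical shadow test: does the ray from a window point towards the sun
# hit a neighbouring building's axis-aligned bounding box (slab method)?
def ray_hits_box(origin, direction, box_min, box_max):
    """Return True if origin + t*direction (t >= 0) intersects the box."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                 # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near, t_far = max(t_near, min(t1, t2)), min(t_far, max(t1, t2))
        if t_near > t_far:
            return False
    return True

window_point = (0.0, 0.0, 5.0)             # a point on the facade [m]
sun_direction = (0.5, 0.3, 0.8)            # vector pointing towards the sun
neighbour_min, neighbour_max = (10.0, 2.0, 0.0), (20.0, 12.0, 30.0)

shaded = ray_hits_box(window_point, sun_direction, neighbour_min, neighbour_max)
print("window shaded by neighbour:", shaded)
```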
Procedia PDF Downloads 110
247 Factors Affecting Treatment Resilience in Patients with Oesophago-Gastric Cancers Undergoing Palliative Chemotherapy: A Literature Review
Authors: Kiran Datta, Daniella Holland-Hart, Anthony Byrne
Abstract:
Introduction: Oesophago-gastric (OG) cancers are the fifth commonest in the UK, accounting for over 12,000 deaths each year. Most patients will present at later stages of the disease, with only 21% of patients with stage 4 disease surviving longer than a year. As a result, many patients are unsuitable for curative surgery and instead receive palliative treatment to improve prognosis and symptom burden. However, palliative chemotherapy can result in significant toxicity: almost half of the patients are unable to complete their chemotherapy regimen, with this proportion rising significantly in older and frailer patients. In addition, clinical trials often exclude older and frailer patients due to strict inclusion criteria, meaning there is limited evidence to guide which patients are most likely to benefit from palliative chemotherapy. Inappropriate chemotherapy administration is at odds with the goals of palliative treatment and care, which are to improve quality of life, and this also represents a significant resource expenditure. This literature review aimed to examine and appraise evidence regarding treatment resilience in order to guide clinicians in identifying the most suitable candidates for palliative chemotherapy. Factors influencing treatment resilience were assessed, as measured by completion rates, dose reductions, and toxicities. Methods: This literature review was conducted using rapid review methodology, utilising modified systematic methods. A literature search was performed across the MEDLINE, EMBASE, and Cochrane Library databases, with results limited to papers within the last 15 years and available in English. Key inclusion criteria included: 1) participants with either oesophageal, gastro-oesophageal junction, or gastric cancers; 2) patients treated with palliative chemotherapy; 3) available data evaluating the association between baseline participant characteristics and treatment resilience. Results: Of the 2326 papers returned, 11 reports of 10 studies were included in this review after excluding duplicates and irrelevant papers. Treatment resilience factors that were assessed included: age, performance status, frailty, inflammatory markers, and sarcopenia. Age was generally a poor predictor for how well patients would tolerate chemotherapy, while poor performance status was a better indicator of the need for dose reduction and treatment non-completion. Frailty was assessed across one cohort using multiple screening tools and was an effective marker of the risk of toxicity and the requirement for dose reduction. Inflammatory markers included lymphopenia and the Glasgow Prognostic Score, which assessed inflammation and hypoalbuminaemia. Although quick to obtain and interpret, these findings appeared less reliable due to the inclusion of patients treated with palliative radiotherapy. Sarcopenia and body composition were often associated with chemotherapy toxicity but not the rate of regimen completion. Conclusion: This review demonstrates that there are numerous measures that can estimate the ability of patients with oesophago-gastric cancer to tolerate palliative chemotherapy, and these should be incorporated into clinical assessments to promote personalised decision-making around treatment. Age should not be a barrier to receiving chemotherapy and older and frailer patients should be included in future clinical trials to better represent typical patients with oesophago-gastric cancers. 
Decisions regarding palliative treatment should be guided by the factors identified as well as by patient preference.
Keywords: frailty, oesophago-gastric cancer, palliative chemotherapy, treatment resilience
Procedia PDF Downloads 76
246 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies vary by up to approximately 15% between encoding methods, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
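For readers unfamiliar with the encodings compared above, the sketch below applies two of them, one-hot and k-mer counts, to a toy fragment. It is written from the general definitions of these encodings, not from the authors' code; the sequence, the value of k and the handling of ambiguous bases are illustrative choices.

```python
# Minimal sketch of one-hot and k-mer encoding of a DNA sequence.

from itertools import product
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Return a (len(seq), 4) matrix with one 1 per row marking the base at that position."""
    index = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), len(BASES)), dtype=np.int8)
    for pos, base in enumerate(seq.upper()):
        if base in index:               # ambiguous bases (N, etc.) are left as all-zero rows
            mat[pos, index[base]] = 1
    return mat

def kmer_counts(seq, k=3):
    """Return a fixed-length vector of counts for every possible k-mer over A/C/G/T."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    return np.array([counts[km] for km in kmers])

if __name__ == "__main__":
    fragment = "AGAGTTTGATCCTGGCTCAG"   # toy fragment, not a curated 16S sequence
    print(one_hot(fragment).shape)       # (20, 4)
    print(kmer_counts(fragment, k=3)[:8])
```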
Procedia PDF Downloads 23
245 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, being vast in area and population and with great scope for international business, is expected to see major growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing such bridges or building new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy, uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, are more likely to fail through instability, because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling used to evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Although static pushover analysis is now used extensively for framed steel and concrete buildings to study their behaviour under lateral actions, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member and so on under further loading. This kind of progressive collapse of a truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. The ultimate collapse is, in any case, governed by buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge or a bridge with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because performance levels for buildings deal mainly with life safety and collapse prevention, whereas for bridges they deal mostly with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of work on the modeling and analysis of steel-concrete composite truss bridges in general.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 185
244 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually; children and pregnant women in sub-Saharan Africa are particularly affected. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing the burden of malaria and saving lives is essential. There is major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying due to the lack of affordable medicines in less well-off countries. If more people could receive treatment by reducing the cost of drugs, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many African countries; one therefore has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at production scale; this emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor, and to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize synthetic processes for antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs can be developed and implemented where needed, antimalarial drugs are far more likely to be available to those in need.
Keywords: antimalarial, flow, lumefantrine, synthesis
Procedia PDF Downloads 202
243 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes collected in RDF graphs named "shape graphs". These shape graphs validate other Resource Description Framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the SHACL framework supports higher-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL consists of two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex, customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as enhancing the links between different datasets is an ongoing process. The conventional validation methods are semantic reasoners and SPARQL queries: the former checks formalization errors and data type inconsistency, while the latter validates data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) Conceptual Reference Model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner via the Protégé program. The applied validation falls into multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance whether a birthdate is assigned to xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance by inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for these various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
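The data type check in category (a) can be expressed as a small SHACL Core shape and run with an off-the-shelf validator. The sketch below uses rdflib and pySHACL; the CIDOC-CRM namespace URI and the tiny data graph are assumptions made for illustration, and the shape simply mirrors the birthdate example given in the abstract rather than the project's actual shapes.

```python
# Minimal sketch of category (a) data type validation with SHACL: a shape requiring that
# crm:P82a_begin_of_the_begin values attached to an E21 Person are typed as xsd:dateTime.
# Assumptions: rdflib and pyshacl are installed; the crm: namespace and sample data are
# illustrative, not taken from the project's dataset.

from rdflib import Graph
from pyshacl import validate

SHAPES_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes#> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
        sh:maxCount 1 ;
    ] .
"""

DATA_TTL = """
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/data#> .

ex:person1 a crm:E21_Person ;
    crm:P82a_begin_of_the_begin "1852-04-03"^^xsd:date .
"""

shapes = Graph().parse(data=SHAPES_TTL, format="turtle")
data = Graph().parse(data=DATA_TTL, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print("Conforms:", conforms)   # False: the literal is xsd:date, not xsd:dateTime
print(report_text)
```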
Procedia PDF Downloads 253
242 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide the labels that describe these physical objects, and these labels are ambiguous representations of the physical specimen. An example of this is the name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names and more general names as classes; this will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of current methods in knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature.
Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
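As an illustration of the collocated-word-frequency idea, the sketch below counts the words that co-occur with a target genus name inside a fixed window. It is a minimal, hypothetical example: the toy sentences and window size are invented, and a real study would run the same counting over a full annotated corpus.

```python
# Minimal sketch of collocate counting for a target term (here a genus name) within a
# +/- N word window, the basic operation behind collocated word frequencies.

import re
from collections import Counter

def collocates(corpus, target, window=3):
    counts = Counter()
    for text in corpus:
        tokens = re.findall(r"[A-Za-z]+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == target.lower():
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

if __name__ == "__main__":
    toy_corpus = [
        "Melpomene is a genus of ferns found in montane forests.",
        "The fern genus Melpomene is epiphytic.",
        "Melpomene is also the name of a genus of spiders.",
    ]
    for word, n in collocates(toy_corpus, "Melpomene").most_common(6):
        print(f"{word:10s} {n}")
```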
Procedia PDF Downloads 137
241 New Media and the Personal Vote in General Elections: A Comparison of Constituency Level Candidates in the United Kingdom and Japan
Authors: Sean Vincent
Abstract:
Within the academic community, there is a consensus that political parties in established liberal democracies are facing a myriad of organisational challenges as a result of falling membership, weakening links to grass-roots support and rising voter apathy. During this same period of party decline and growing public disengagement, political parties have become increasingly professionalised. The professionalisation of political parties owes much to changes in technology, with television becoming the dominant medium for political communication. In recent years, however, it has become clear that a new medium of communication is being utilised by political parties and candidates: New Media. New Media, a term that is hard to define but relates to internet-based communication, offers a potential revolution in political communication. It can be utilised by anyone with access to the internet, and its most widely used platforms of communication, such as Facebook and Twitter, are free to use. The advent of Web 2.0 has dramatically changed what can be done with the Internet. Websites now allow candidates at the constituency level to fundraise, organise and set out personalised policies, and social media allows them to communicate with supporters and potential voters at practically no cost. As such, candidate dependency on the national party for resources and image is now open to debate. Arguing that greater candidate independence may be a natural next step in light of the contemporary challenges faced by parties, this paper examines how New Media is being used by candidates at the constituency level to increase their personal vote. The paper presents findings from research carried out during two elections: the Japanese Lower House election of 2014 and the UK general election of 2015. During these elections, a sample totalling 150 candidates from the three biggest parties in each country was selected, and their New Media output, specifically candidate websites, Twitter and Facebook, was subjected to content analysis. The analysis examines how candidates use New Media to become more independent from the national party both functionally, through fundraising and volunteer mobilisation, and politically, through the promotion of personal or local policies. In order to validate the results of the content analysis, this paper also presents evidence from interviews carried out with 17 candidates who stood in the 2014 Japanese Lower House election or the 2015 UK general election. With a combination of statistical analysis and interviews, several conclusions can be drawn about the use of New Media at the constituency level. The findings show not just a clear difference in the way candidates from each country use New Media but also differences within countries based upon the particular circumstances of each constituency. While it has not yet replaced traditional methods of fundraising and activist mobilisation, New Media is becoming increasingly important in campaign organisation, and the general consensus amongst candidates is that its importance will continue to grow as politics in both countries becomes more diffuse.
Keywords: political campaigns, elections, new media, political communication
Procedia PDF Downloads 226
240 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and also of showing how successful an organization is at holding on to its customers. It is widely recognised that the lion's share of profit comes from existing customers, so seamless management of renewal journeys across different channels goes a long way towards improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their relationship with the organization for a further period of time. This whitepaper focuses on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it calls out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section talks about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section talks about the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. In addition to the items above, the white paper elaborates on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's hands-on experience with time travel testing. While actual customer names and program-related details are not disclosed, the paper highlights the key learnings that will help other teams implement time travel testing successfully.
Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
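One common way to exercise renewal logic without waiting for real calendar time to pass is to freeze or shift the clock inside automated tests. The sketch below shows this pattern in Python with the freezegun library; the policy model, renewal rule and dates are hypothetical placeholders standing in for whatever the system under test actually exposes, not part of the whitepaper's toolset.

```python
# Minimal sketch of the "time travel" idea in an automated test: the clock is moved into
# the renewal window so that renewal-offer logic can be exercised today. The Policy model,
# 30-day renewal rule and dates are hypothetical placeholders.

from dataclasses import dataclass
from datetime import date, timedelta

from freezegun import freeze_time


@dataclass
class Policy:
    end_date: date


def renewal_offer_available(policy: Policy) -> bool:
    """Hypothetical business rule: offers open 30 days before the policy end date."""
    today = date.today()
    return policy.end_date - timedelta(days=30) <= today <= policy.end_date


def test_offer_opens_inside_renewal_window():
    policy = Policy(end_date=date(2025, 7, 31))
    with freeze_time("2025-07-10"):            # time travel forward into the window
        assert renewal_offer_available(policy)
    with freeze_time("2025-05-01"):            # outside the window: no offer yet
        assert not renewal_offer_available(policy)


if __name__ == "__main__":
    test_offer_opens_inside_renewal_window()
    print("renewal window checks passed")
```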
Procedia PDF Downloads 159
239 Familial Exome Sequencing to Decipher the Complex Genetic Basis of Holoprosencephaly
Authors: Artem Kim, Clara Savary, Christele Dubourg, Wilfrid Carre, Houda Hamdi-Roze, Valerie Dupé, Sylvie Odent, Marie De Tayrac, Veronique David
Abstract:
Holoprosencephaly (HPE) is a rare congenital brain malformation resulting from the incomplete separation of the two cerebral hemispheres. It is characterized by a wide phenotypic spectrum and a high degree of locus heterogeneity. Genetic defects in 16 genes have already been implicated in HPE but account for only 30% of cases, suggesting that a large part of the genetic factors remains to be discovered. HPE has recently been redefined as a complex multigenic disorder requiring the joint effect of multiple mutational events in genes belonging to one or several developmental pathways. The onset of HPE may result from the accumulated effects of multiple rare variants in functionally related genes, each conferring a moderate increase in risk. In order to decipher the genetic basis of HPE, unconventional patterns of inheritance involving multiple genetic factors need to be considered. The primary objective of this study was to uncover possible disease-causing combinations of multiple rare variants underlying HPE by performing trio-based Whole Exome Sequencing (WES) of familial cases where no molecular diagnosis could be established. 39 families were selected with no fully penetrant causal mutation in a known HPE gene, no chromosomal aberrations or copy number variants, and no implication of environmental factors. As the main challenge was to identify disease-related variants among the large number of nonpathogenic polymorphisms detected by the classical WES scheme, a novel variant prioritization approach was established. It combined WES filtering with complementary gene-level approaches: transcriptome-driven (RNA-Seq data) and clinically driven (public clinical data) strategies. Briefly, a filtering approach was performed to select variants compatible with disease segregation, population frequency and pathogenicity predictions, yielding an exhaustive list of rare deleterious variants. The exome search space was then reduced by restricting the analysis to candidate genes identified by either the transcriptome-driven strategy (genes sharing highly similar expression patterns with known HPE genes during cerebral development) or the clinically driven strategy (genes associated with phenotypes of interest overlapping with HPE). Deeper analyses of candidate variants were then performed on a family-by-family basis. These included the exploration of clinical information, expression studies, variant characteristics, recurrence of mutated genes and available biological knowledge. A novel bioinformatics pipeline was designed. Applied to the 39 families, this final integrated workflow identified an average of 11 candidate variants per family. Most candidate variants were inherited from asymptomatic parents, suggesting a multigenic inheritance pattern requiring the association of multiple mutational events. The manual analysis highlighted 5 new strong HPE candidate genes showing recurrences in distinct families. Functional validations of these genes are foreseen.
Keywords: complex genetic disorder, holoprosencephaly, multiple rare variants, whole exome sequencing
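The filtering step described above (segregation, population frequency, predicted pathogenicity, then restriction to candidate genes) can be pictured as a few table operations. The sketch below is a hypothetical pandas version: the column names, thresholds and candidate gene list are invented for illustration and do not reproduce the authors' pipeline.

```python
# Minimal sketch of a rare-variant prioritization pass: keep variants that segregate with
# disease, are rare in the population, are predicted deleterious, and fall in candidate
# genes from the transcriptome- or clinically-driven lists. Column names, thresholds and
# the candidate list are illustrative assumptions, not the study's actual criteria.

import pandas as pd

def prioritize(variants: pd.DataFrame, candidate_genes: set,
               max_pop_af: float = 0.001, min_patho_score: float = 0.8) -> pd.DataFrame:
    mask = (
        variants["segregates_with_disease"]
        & (variants["population_af"] < max_pop_af)
        & (variants["pathogenicity_score"] >= min_patho_score)
        & variants["gene"].isin(candidate_genes)
    )
    return variants[mask].sort_values("pathogenicity_score", ascending=False)

if __name__ == "__main__":
    demo = pd.DataFrame({
        "family": ["F01", "F01", "F02"],
        "gene": ["GENE_A", "GENE_B", "GENE_A"],
        "population_af": [0.0002, 0.02, 0.0001],
        "pathogenicity_score": [0.95, 0.90, 0.88],
        "segregates_with_disease": [True, True, True],
    })
    print(prioritize(demo, candidate_genes={"GENE_A"}))
```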
Procedia PDF Downloads 203
238 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection
Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément
Abstract:
The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring their variation mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example by dielectrophoresis, microfluidic micropost chips or electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including with aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors has since been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.
Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars
Procedia PDF Downloads 117
237 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the use of numbers and narratives in the newsroom is mostly a matter of trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat their effects as though the use of one form precludes the other and as if a trade-off is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanism underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system requires the processing of logical evidence through effortful, analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, thereby demanding minimal cognitive resources and relating to the experience of affect. In certain situations one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts affect audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) affect selective exposure, which in turn activates the pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers.
Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79
236 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures
Authors: A. T. Al-Isawi, P. E. F. Collins
Abstract:
The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake input data for nonlinear analysis and the method of analysis are still challenging issues. Realistic artificial ground motion input data must therefore be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and an incorrectly synthesised analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the nonlinear inelastic behaviour of the structure and the soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial geometry of the structure due to residual deformation resulting from plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, a structure may experience partial structural damage and then be exposed to fire under its new, residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario becomes more complicated if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, designing structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software, with the effects of SSI included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is found in the case where SSI is included in the scenario. The results are validated against the literature.
Keywords: Abaqus Software, Finite Element Analysis, post-earthquake fire, seismic analysis, soil-structure interaction
Procedia PDF Downloads 121
235 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations
Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel
Abstract:
Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate glipizide formulations and to assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and molecules that form complexes with the nitroso group, will be investigated. Additionally, the conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to improve understanding and mitigation strategies. Method: The methodology involves the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials containing quantities of glipizide equivalent to 10 mM are prepared. The compound is dissolved in an acidic environment and supplemented with 40 mM NaNO2. The resulting solutions are maintained at a temperature of 37°C for a duration of 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 x 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, run in gradient mode. The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35°C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the nitrosamine drug substance related impurities (NDSRIs) formed, the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in electrospray ionization (ESI) mode within the mass range of 100-1000 Daltons. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify the possible NDSRIs generated in different glipizide formulations. The NAP test was found to generate various NDSRIs, revealing contamination of glipizide that has not been reported previously. These NDSRIs are categorised based on their predicted carcinogenic potency, and acceptable intakes in medicines are recommended. The analytical method was found to be specific and reproducible.
Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS
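As a worked illustration of the NAP test preparation figures quoted above, the short calculation below converts the 10 mM glipizide and 40 mM NaNO2 concentrations into weighed masses. The 10 mL vial volume is an assumption made for the example, and the molar masses are approximate literature values; neither is taken from the study's protocol.

```python
# Worked example of the NAP test preparation arithmetic: mass needed to reach the stated
# concentrations in an assumed 10 mL vial. Molar masses are approximate literature values
# (glipizide ~445.5 g/mol, NaNO2 ~69.0 g/mol); the vial volume is an assumption.

def mass_mg(concentration_mM, volume_mL, molar_mass_g_per_mol):
    moles_mmol = concentration_mM * volume_mL / 1000.0   # mmol = (mmol/L) * L
    return moles_mmol * molar_mass_g_per_mol             # mg = mmol * (mg/mmol)

VOLUME_ML = 10.0  # assumed vial volume

print(f"Glipizide, 10 mM: {mass_mg(10.0, VOLUME_ML, 445.5):.1f} mg")   # ~44.6 mg
print(f"NaNO2,     40 mM: {mass_mg(40.0, VOLUME_ML, 69.0):.1f} mg")    # ~27.6 mg
```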
Procedia PDF Downloads 32