Search results for: functional outcome
166 The Use of Cross-cultural Approaches (CCAs) in Psychotherapy in Addressing Mental Health Issues Amongst Women of Ethnic Minority
Authors: Adaku Thelma Olatise
Abstract:
Mental health disparities among women from diverse ethnic, cultural, and religious backgrounds remain a pressing concern, particularly as current psychotherapeutic models often fail to address the unique challenges these groups face. This is of particular concern since epidemiological studies across various countries and cultures consistently demonstrate higher prevalence rates of common mental disorders amongst these groups of women because of a lack of access to culturally oriented psychotherapeutic services. This literature review aims to examine how CCAs in psychotherapy can address the specific ethnic, cultural, and religious challenges women encounter in accessing mental health care. A search of relevant articles was conducted through the PsycARTICLES and PubMed databases, using terms such as ‘mental health’, ‘women’, ‘culture’, and ‘ethnic minorities’. Supplementary searches on Google Scholar were also performed to capture literature not covered by traditional databases. While the importance of cross-cultural approaches in psychotherapy has become more apparent because people from diverse ethnic backgrounds inevitably perceive the world through different lenses, influencing their interpretations of human behavior and norms, there is a notable gap in the literature in understanding the influence of using CCAs in psychotherapy with women of ethnic minorities. This gap not only reflects a poor understanding of the complex stressors faced by these women—such as familial, communal, and societal expectations—but also highlights the lack of support and culturally adapted interventions available to them. Scholars have posited that aligning treatment approaches with patients' cultural backgrounds is important for enhancing therapeutic effectiveness, and that the acknowledgment of culture is crucial in psychotherapy theory and practice.
Despite the increasing global focus on psychotherapy applications that integrate non-Western practices, such as spiritual healing and community-based interventions, the adoption of these approaches in mainstream mental health care has remained limited. This review found that the expectations and experiences of ethnic minority women were heavily influenced by family and community pressures. However, there were few evidence-based, culturally oriented psychotherapeutic interventions tailored to ethnic minority women. This gap extends to the inadequate representation of minority groups in clinical research, as well as a lack of culturally validated mental health outcome measures. Furthermore, studies have shown that psychotherapeutic models have largely been Western-oriented and Eurocentric because of socially constructed hierarchies. Psychology's origin in the Western world has predominantly reflected Western cultural traditions, shaped by historical, linguistic, and sociopolitical influences. These factors have led to a lack of recognition of therapeutic approaches from minority ethnic groups, and the biases that emanate from hegemonic cultural beliefs and power dynamics influence decisions about which psychotherapeutic modalities to integrate and practice. Together, these factors add to the challenges women from ethnically and culturally diverse backgrounds face in accessing mental health services at the individual, familial, community, and societal levels. In conclusion, a cross-cultural approach is urgently needed within psychotherapy to address these challenges, ensuring that treatment frameworks are both culturally sensitive and gender responsive. Only by considering the lived experiences of minority women, particularly in relation to their cultural and religious contexts, can mental health services provide the appropriate care necessary to support their well-being.
Keywords: mental health, women, culture, ethnicity
Procedia PDF Downloads 24
165 Assessment of Tidal Influence in Spatial and Temporal Variations of Water Quality in Masan Bay, Korea
Abstract:
Slack-tide sampling was carried out at seven stations at high and low tides over a tidal cycle, in summer (July–September) and fall (October) of 2016, to determine tidal differences in water quality in Masan Bay. The data were analyzed by Pearson correlation and factor analysis. The mixing state of all the water quality components investigated is well explained by their correlation with salinity (SAL). Turbidity (TURB), dissolved silica (DSi), nitrite and nitrate nitrogen (NNN) and total nitrogen (TN), which enter the bay from the streams and have no internal source or sink reactions, showed a strong negative correlation with SAL at low tide, indicating conservative mixing. On the contrary, in summer and fall, dissolved oxygen (DO), hydrogen sulfide (H2S) and chemical oxygen demand with KMnO4 (CODMn) of the surface and bottom water, which were sensitive to internal source and sink reactions, showed no significant correlation with SAL at high or low tide. The remaining water quality parameters showed a conservative or a non-conservative mixing pattern depending on the mixing characteristics at high and low tides, determined by the functional relationship between changes in the flushing time and changes in the characteristics of the water quality components of the end-members in the bay. Factor analysis performed on the data sets of concentration differences between high and low tides helped identify their principal latent variables. The concentration differences varied spatially and temporally. Principal factor (PF) score plots for each monitoring situation showed that the variations were strongly associated with the monitoring sites.
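The conservative-mixing criterion described above, a strong negative correlation between a river-borne constituent and salinity, can be sketched as a simple Pearson correlation check. The data and the correlation threshold below are illustrative assumptions, not values from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mixing_behaviour(salinity, concentration, threshold=-0.9):
    """Label a constituent 'conservative' when it dilutes roughly linearly
    with salinity (strong negative correlation), else 'non-conservative'."""
    r = pearson_r(salinity, concentration)
    label = "conservative" if r <= threshold else "non-conservative"
    return label, r

# Hypothetical low-tide profile: river-borne turbidity falling as the
# seawater fraction (salinity) rises, with no internal source or sink.
sal = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
turb = [48.0, 40.0, 31.0, 24.0, 15.0, 8.0]
label, r = mixing_behaviour(sal, turb)
print(label, round(r, 3))  # conservative, r close to -1
```

A constituent whose concentration is set purely by dilution of fresh water with seawater traces a straight line against salinity, so |r| approaches 1; internal sources or sinks (e.g. oxygen consumption) break that linearity.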
At sampling station 1 (ST1), temperature (TEMP), SAL, DSi, TURB, NNN and TN of the surface water in summer; TEMP, SAL, DSi, DO, TURB, NNN, TN, reactive soluble phosphorus (RSP) and total phosphorus (TP) of the bottom water in summer; TEMP, pH, SAL, DSi, DO, TURB, CODMn, particulate organic carbon (POC), ammonia nitrogen (AMN), NNN, TN and fecal coliform (FC) of the surface water in fall; and TEMP, pH, SAL, DSi, H2S, TURB, CODMn, AMN, NNN and TN of the bottom water in fall commonly showed up as the most significant parameters, with large concentration differences between high and low tides. At the other stations, the significant parameters differed according to the spatial and temporal variations of the mixing pattern in the bay. In fact, no estuary always maintains steady-state flow conditions. The mixing regime of an estuary can change at any time from linear to non-linear, due to changes in flushing time arising from the combination of hydrogeometric properties, freshwater inflow and tidal action. Furthermore, changes in end-member conditions due to internal sinks and sources make the occurrence of concentration differences inevitable. Therefore, when investigating the water quality of an estuary, it is necessary to adopt a sampling method that accounts for the tide in order to obtain representative average water quality data.
Keywords: conservative mixing, end-member, factor analysis, flushing time, high and low tide, latent variables, non-conservative mixing, slack-tide sampling, spatial and temporal variations, surface and bottom water
Procedia PDF Downloads 130
164 Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework
Authors: Livia Calcagni
Abstract:
Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning change and exploring unusual, resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century we have seen a dynamic growth of water-based architecture. At the same time, the shortage of land available for urban development has also led to reclaiming the seabed or building floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but especially as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water and there is no structured performance based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters takes urban-architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics, from the offshore and shipping regulatory frameworks.
This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems that are in effect in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, and thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns and urbanization, in line with Blue Economy principles and the 2030 Agenda. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD.
Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels
Procedia PDF Downloads 86
163 Revisionist Powers Seeking for Status within the System by Adopting a Compresence of Cooperative and Competitive Strategies
Authors: Mirele Plenishti
Abstract:
Revisionist powers are sometimes conflated with revolutionary and status quo powers, because along the line representing the level of satisfaction–dissatisfaction with the system, revisionist powers are located between status quo and revolutionary powers. In particular, revisionist powers seeking social status adjustments (while having status quo intentions) can, in the first case, be dismissed out of disbelief that dissatisfaction could coexist with status quo intentions, which entails the possibility of triggering a spiral effect by over-counter-reacting. In the second case, revisionist powers can be underestimated as a real threat, which entails a potentially inadequate reaction. The necessity to manage international change well entails the need to understand better how revisionist powers seek changes in status within the system. The complexity of this case is heightened by the propensity of both IR scholars and practitioners to infer states' aims and intentions towards the system by looking at their behaviours. This has resulted in the tendency to consider cooperative international behaviours as symptomatic of status quo intentions, and vice versa: status quo intentions as manifested through positive/cooperative behaviours. Similarly, assertive/competitive international behaviours are considered symptomatic (and vice versa, manifestations) of revolutionary intentions. Therefore, within complex and composite foreign policies, scholars who doubt the existence of revisionist powers with status quo intentions tend to highlight the negative/competitive elements, while more optimistic scholars tend to focus on conforming/cooperative behaviours. Both perspectives, while capturing relevant components of complex international interaction, still miss a composite overview.
In order to closely investigate the strategies adopted by (status quo aiming) revisionist states, and by drawing on sociological studies of peer relations focused on children's behaviour, one could expect that the compresence of both positive (compliant/cooperative) and negative (competitive/assertive) behaviours is deliberate and functional to seeking social status adjustments. Indeed, at the end of the 1990s, peer relation studies focused on children's behaviour distinguished between the concept of social acceptance (the degree of social preference assigned to the child, i.e., how much he or she is liked) and popularity (the social status assigned to the child within the group). Building on this distinction, it was possible to link social acceptance to prosocial (compliant/cooperative) behaviours and strategies, and popularity to both prosocial and antisocial (aggressive/assertive) behaviours and strategies. Since then, antisocial behaviours have ceased to be considered proof of social maladjustment and have been identified as socially recognized strategies adopted in pursuit of popularity. Drawing on these results, one can hypothesize that international status seekers likewise perform both positive (conforming/compliant/cooperative) and negative (assertive/aggressive/competitive) behaviours. Therefore, the link between aims and behaviours loses its strength, since cooperative and competitive behaviours are both means for status-seeking strategies compatible with status quo intentions. By carrying out a historical investigation of Italy's foreign policy during fascism, the intent is to look closely at this compresence of behaviours, in order to better qualify its components and their relations.
Keywords: compresence of cooperative and competitive behaviours and strategies, revisionist powers, status quo intentions, status seeking
Procedia PDF Downloads 320
162 C-Coordinated Chitosan Metal Complexes: Design, Synthesis and Antifungal Properties
Authors: Weixiang Liu, Yukun Qin, Song Liu, Pengcheng Li
Abstract:
Plant diseases can destroy crops and cause great economic losses; such diseases are usually caused by pathogenic fungi. Metal fungicides are a type of pesticide with the advantages of low cost, a broad antimicrobial spectrum and a strong sterilization effect. However, the frequent and wide application of traditional metal fungicides has caused serious problems such as environmental pollution, outbreaks of mites and phytotoxicity. Therefore, it is critically necessary to discover new organic metal fungicide alternatives that have a low metal content, low toxicity, and little influence on mites. Chitosan, the second most abundant natural polysaccharide next to cellulose, has been shown to have broad-spectrum antifungal activity against a variety of fungi. However, the use of chitosan has been limited by its poor solubility and weaker antifungal activity compared with commercial fungicides. Therefore, in order to improve water solubility and antifungal activity, many researchers have grafted active groups onto chitosan. The aim of the present work was to combine free metal ions with chitosan to prepare more potent antifungal chitosan derivatives. Thus, based on a condensation reaction, a chitosan derivative bearing an aminopyridine group was prepared and subsequently coordinated with cupric, zinc and nickel ions to synthesize chitosan metal complexes. Calculations by density functional theory (DFT) show that the copper and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and all of them are coordinated by the carbon atom in the p-π conjugate group and the oxygen atoms in the acetate ion. The antifungal properties of the chitosan metal complexes against Phytophthora capsici (P. capsici), Gibberella zeae (G. zeae), Fusarium oxysporum (F. oxysporum) and Botrytis cinerea (B. cinerea) were also assayed. In addition, a plant toxicity experiment was carried out.
The experiments indicated that the derivatives have significantly enhanced antifungal activity after metal ion complexation compared with the original chitosan. It was shown that 0.20 mg/mL of O-CSPX-Cu inhibited the growth of P. capsici by 100%, and 0.20 mg/mL of O-CSPX-Ni inhibited the growth of B. cinerea by 87.5%. In general, their activities are better than those of the positive control oligosaccharides. The incorporation of the pyridine formyl groups seems to favor biological activity. Additionally, the coordination mode was precisely analyzed, and the results revealed that the copper and nickel ions underwent dsp2 hybridization, the zinc ions underwent sp3 hybridization, and the carbon atoms of the p-π conjugate group and the oxygen atoms of the acetate ion are involved in the coordination of the metal ions. A phytotoxicity assay of O-CSPX-M was also conducted; unlike traditional metal fungicides, the metal complexes were not significantly toxic to the leaves of wheat. O-CSPX-Zn can even increase the chlorophyll content of wheat leaves at 0.40 mg/mL, mainly because chitosan itself promotes plant growth and counteracts the phytotoxicity of metal ions. The chitosan derivatives described here may lend themselves to future applicative studies in crop protection.
Keywords: coordination, chitosan, metal complex, antifungal properties
Procedia PDF Downloads 316
161 The Lacuna in Understanding of Forensic Science amongst Law Practitioners in India
Authors: Poulomi Bhadra, Manjushree Palit, Sanjeev P. Sahni
Abstract:
Forensic science uses all branches of science for criminal investigation and trial and has increasingly emerged as an important tool in the administration of justice. However, the growth and development of this field in India has not been as rapid or widespread as in the more developed Western countries. For the successful administration of justice, it is important that all agencies involved in law enforcement adopt an inter-professional approach towards forensic science, which is presently lacking. In light of the alarmingly high average acquittal rate in India, this study aims to examine the lack of understanding and appreciation of the importance and scope of forensic evidence and expert opinions amongst law professionals such as lawyers and judges. Based on a study of trial court cases from Delhi and surrounding areas, the study underlines the areas in forensics where the criminal justice system has noticeably erred. Using this information, the authors examine the extent of forensic understanding amongst legal professionals and attempt to conclusively identify the areas in which they need further appraisal. A cross-sectional study using a structured questionnaire was conducted amongst law professionals across age, gender, type and years of experience in court, to determine their understanding of DNA, fingerprints and other interdisciplinary scientific materials used as forensic evidence. In our study, we assess the levels of understanding amongst lawyers with regard to DNA and fingerprint evidence, and how it affects trial outcomes. We also aim to understand the factors that prevent credible and advanced awareness amongst legal personnel, amongst others. The survey identified the areas of modern and advanced forensics, such as forensic entomology, anthropology and cybercrime, in which Indian legal professionals are yet to attain a functional understanding.
It also brings to light what is commonly termed the ‘CSI effect’ in Western courtrooms, and provides scope to study the existence of this phenomenon and its effects on Indian courts and their judgements. This study highlighted the prevalence of unchallenged expert testimony presented by the prosecution in criminal trials and impressed upon the judicial system the need for independent analysis and evaluation of the scientist’s data and/or testimony by the defense. Overall, this study aims to establish a clearer understanding of why legal professionals should have a basic understanding of the interdisciplinary nature of the forensic sciences. Based on the aforementioned findings, the authors suggest various measures by which judges and lawyers might obtain extensive knowledge of the advances and promising potentialities of forensic science. This includes promoting a forensic curriculum in legal studies at the Bachelor’s and Master’s levels as well as in mid-career professional courses. The formation of forensic-legal consultancies, in consultation with the Department of Justice, will not only assist in training police, military and law personnel but will also encourage legal research in this field. These suggestions also aim to bridge the communication gap that presently exists between law practitioners, forensic scientists and the general community’s awareness of the criminal justice system.
Keywords: forensic science, Indian legal professionals, interdisciplinary awareness, legal education
Procedia PDF Downloads 341
160 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics
Authors: Michael Wendland
Abstract:
This paper presents a framework for teaching a methodical design approach with the help of a mixture of game-based learning, design challenges and competitions as forms of direct assessment. In today’s world, developing products is more complex than ever. Conflicting goals of product cost and quality with limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by providing users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. However, for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high and learn to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constrained time frame and a limited maximal cost, the students are subjected to boundary conditions similar to those of the real world. They must follow the methodical approach steps by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams.
The complete design process is accompanied by theoretical input via lectures, which is immediately transferred by the students to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e. designing the first concepts is supported by using LEGO construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes for the best mechanical design, the most innovative approach and the winner of the robotic contest are awarded. Each robot design is evaluated against the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.
Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment
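The requirement-based partial grading described above could be computed, in the simplest case, as a weighted fraction of fulfilled requirements. The rubric, weights and team results below are hypothetical illustrations, not the course's actual grading scheme:

```python
def partial_grade(requirements, results):
    """Weighted fraction of the specified requirements a prototype fulfils.
    `requirements` maps requirement name -> weight; `results` maps the same
    names -> True/False fulfilment (hypothetical grading scheme)."""
    total = sum(requirements.values())
    met = sum(w for name, w in requirements.items() if results.get(name, False))
    return met / total

# Hypothetical rubric for one robot team in the final contest.
reqs = {"grasps object": 3, "within budget": 2, "remote control works": 3, "mass under limit": 1}
team_a = {"grasps object": True, "within budget": True, "remote control works": True, "mass under limit": False}
print(round(partial_grade(reqs, team_a), 2))  # 0.89 -> 8 of 9 weight points met
```

Weighting the requirements lets critical goals (e.g. manipulating the object) dominate the partial grade while still rewarding teams that meet the secondary constraints.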
Procedia PDF Downloads 67
159 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications
Authors: Swati Mishra
Abstract:
In recent years, there has been an explosion in Gold NanoParticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method was used to functionalize variously shaped GNPs, such as nanostars, with molecules like ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal, antigen-antibody reactive, and used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids on a target protein, which is of eminent importance during the pandemic, especially where long-term treatments for COVID-19 bring many drug side effects. The photochemical method was adopted as it provides a low thermal load, selective reactivity, selective activation, and radiation controlled in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue- or redshift in the peak will be observed after UV irradiation, indicating efficient ninhydrin binding. The bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, will then lead to the formation of Ruhemann's purple. A common method of GNP production is citrate reduction of Au(III) derivatives such as chloroauric acid (HAuCl4) in water to Au(0) through a one-step synthesis of size-tunable GNPs. The following reagents were prepared to validate the approach.
Reagent A: solution 1, i.e. 0.0175 g ninhydrin in 5 mL Millipore water
Reagent B: 30 µL of HAuCl₄·3H₂O in 3 mL of solution 1
Reagent C: 1 µL of gold nanostars in 3 mL of solution 1
Reagent D: 6 µL of cetrimonium bromide (CTAB) in 3 mL of solution 1
Reagent E: 1 µL of gold nanostars in 3 mL of ethanol
Reagent F: 30 µL of HAuCl₄·3H₂O in 3 mL of ethanol
Reagent G: 30 µL of HAuCl₄·3H₂O in 3 mL of solution 2
Reagent H: solution 2, i.e. 0.0087 g ninhydrin in 5 mL Millipore water
Reagent I: 30 µL of HAuCl₄·3H₂O in 3 mL of water
The reagents were irradiated at 254 nm for 15 minutes, followed by UV-visible spectroscopy. This wavelength was selected based on the one reported for the excitation of the similar molecule phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctly at 567.25 nm and 983.9 nm. Although it is difficult to determine exactly which chemical reaction is taking place, ATR-FTIR of the reagents will confirm that ninhydrin does not form Ruhemann's purple in the absence of amino acids. Therefore, in these experiments, we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation in the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action.
Keywords: gold nanostars, ninhydrin, photochemical method, UV visible spectroscopy
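The blue-/redshift check that corroborates ninhydrin binding can be sketched as a peak-position comparison between spectra recorded before and after UV irradiation. The coarse spectra below are hypothetical; only the idea of locating the dominant plasmon peak and signing the shift follows the text:

```python
def local_maxima(absorbance):
    """Indices of simple interior local maxima in a sampled spectrum."""
    return [i for i in range(1, len(absorbance) - 1)
            if absorbance[i - 1] < absorbance[i] > absorbance[i + 1]]

def peak_shift(wavelengths, before, after):
    """Shift (nm) of the dominant absorbance peak after irradiation:
    positive = redshift, negative = blueshift."""
    pb = max(local_maxima(before), key=lambda i: before[i])
    pa = max(local_maxima(after), key=lambda i: after[i])
    return wavelengths[pa] - wavelengths[pb]

# Hypothetical coarse spectra: a plasmon peak moving from 540 nm to 560 nm.
wl     = [500, 520, 540, 560, 580, 600]
before = [0.20, 0.35, 0.50, 0.40, 0.25, 0.10]
after  = [0.15, 0.25, 0.38, 0.52, 0.41, 0.20]
print(peak_shift(wl, before, after))  # 20 -> a 20 nm redshift
```

Real instrument spectra are far more finely sampled and noisy, so in practice the peak would be located after smoothing or by fitting; the sign convention (red = positive) is the only part carried over from the abstract.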
Procedia PDF Downloads 148
158 Non-Steroidal Microtubule Disrupting Analogues Induce Programmed Cell Death in Breast and Lung Cancer Cell Lines
Authors: Marcel Verwey, Anna M. Joubert, Elsie M. Nolte, Wolfgang Dohle, Barry V. L. Potter, Anne E. Theron
Abstract:
A tetrahydroisoquinolinone (THIQ) core can be used to mimic the A,B-ring of colchicine site-binding microtubule disruptors such as 2-methoxyestradiol in the design of anti-cancer agents. Steroidomimeric microtubule disruptors were synthesized by introducing C'2 and C'3 of the steroidal A-ring to C'6 and C'7 of the THIQ core and by introducing a decorated hydrogen bond acceptor motif projecting from the steroidal D-ring to N'2. For this in vitro study, four non-steroidal THIQ-based analogues were investigated and comparative studies were done between the non-sulphamoylated compound STX 3450 and the sulphamoylated compounds STX 2895, STX 3329 and STX 3451. The objective of this study was to investigate the modes of cell death induced by these four THIQ-based analogues in A549 lung carcinoma epithelial cells and metastatic breast adenocarcinoma MDA-MB-231 cells. Cytotoxicity studies to determine the half maximal growth inhibitory concentrations were done using spectrophotometric quantification via crystal violet staining and lactate dehydrogenase (LDH) assays. Microtubule integrity and morphologic changes of exposed cells were investigated using polarization-optical transmitted light differential interference contrast microscopy, transmission electron microscopy and confocal microscopy. Flow cytometric quantification was used to determine apoptosis induction and the effect that THIQ-based analogues have on cell cycle progression. Signal transduction pathways were elucidated by quantification of the mitochondrial membrane integrity, cytochrome c release and caspase 3, -6 and -8 activation. Induction of autophagic cell death by the THIQ-based analogues was investigated by morphological assessment of fluorescent monodansylcadaverine (MDC) staining of acidic vacuoles and by quantifying aggresome formation via flow cytometry. Results revealed that these non-steroidal microtubule disrupting analogues inhibited 50% of cell growth at nanomolar concentrations. 
Immunofluorescence microscopy indicated microtubule depolymerization, and the resultant mitotic arrest was further confirmed through cell cycle analysis. Apoptosis induction via the intrinsic pathway was observed, due to depolarization of the mitochondrial membrane, induction of cytochrome c release, as well as caspase 3 activation. Potential involvement of programmed cell death type II was observed due to the presence of acidic vacuoles and aggresome formation. Necrotic cell death did not contribute significantly, as indicated by stable LDH levels. This in vitro study revealed the induction of the intrinsic apoptotic pathway, as well as the possible involvement of autophagy, after exposure to these THIQ-based analogues in both MDA-MB-231 and A549 cells. Further investigation of this series of anticancer drugs still needs to be conducted to elucidate the temporal, mechanistic and functional crosstalk mechanisms between the two observed programmed cell death pathways.
Keywords: apoptosis, autophagy, cancer, microtubule disruptor
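The half-maximal growth inhibitory concentrations determined above from the crystal violet readout can be estimated from a dose-response series by interpolating at 50% growth, linearly in log concentration. The readout values below are hypothetical, not data from the study:

```python
import math

def gi50(concs_nm, growth_pct):
    """Concentration (nM) at which growth falls to 50% of control, assuming
    growth decreases monotonically with dose; interpolation is linear in
    log10(concentration), a common dose-response convention."""
    pairs = list(zip(concs_nm, growth_pct))
    for (c1, g1), (c2, g2) in zip(pairs, pairs[1:]):
        if g1 >= 50 >= g2:
            frac = (g1 - 50) / (g1 - g2)  # position of 50% between the two doses
            log_c = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_c
    raise ValueError("50% growth is not bracketed by the dose range")

# Hypothetical crystal-violet readout, % growth relative to untreated control.
doses  = [10, 30, 100, 300, 1000]   # nM
growth = [95, 80, 55, 30, 12]
print(round(gi50(doses, growth)))   # roughly 125 nM in this made-up series
```

In practice a four-parameter logistic fit over the full curve is preferred; the bracketing interpolation shown here is the quick first-pass estimate.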
Procedia PDF Downloads 253
157 Global Supply Chain Tuning: Role of National Culture
Authors: Aleksandr S. Demin, Anastasiia V. Ivanova
Abstract:
Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with his or her own set of values and priorities. This article aims to incorporate the peculiarities of national culture and the characteristics of the supply chain, using the quantitative values of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary data in the field of cross-country comparison obtained by Prof. Hofstede and in the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and inter-organizational levels. The connection between a range of principles in general (role assignment, customer service prioritization, coordination of supply chain partners) and in comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is demonstrated through economic and mathematical models, mainly linear programming models. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customer’s interest in the place of operation, while the second also takes into account the position of the transnational corporation and its previous experience in order to reconcile organizational and national cultures.
The effect of integration practice on the achievement of a specific supply chain goal in a specific location can be assessed via the type of correlation (positive, negative, none) and the values of the national culture indices. Research Limitations: The models developed are intended for transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods offered are simulated; the numerical measurements should therefore be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for engineering global supply chains in a transnational corporation and for doing business in the international arena. The methods, tools, and approaches suggested can also be used by top managers searching for new sources of competitiveness and are suitable for all staff members interested in national culture traits. Originality/Value: The elaborated methods of decision-making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.
Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management
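The abstract does not reproduce its linear programming models. Purely as a hedged toy illustration of the idea (role-assignment cost adjusted by a cultural-distance penalty over Hofstede-style indices; all names, indices, and costs are invented, and brute force stands in for an LP solver):

```python
from itertools import permutations

def cultural_distance(a, b):
    """Euclidean distance between two Hofstede-style index vectors
    (e.g. power distance, individualism)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def assign_roles(managers, roles, base_cost, weight=0.1):
    """Brute-force the role assignment minimising
    base cost + weight * cultural distance (toy stand-in for an LP)."""
    best = None
    for perm in permutations(managers):
        cost = sum(base_cost[m][r] + weight * cultural_distance(managers[m], roles[r])
                   for m, r in zip(perm, roles))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(perm, roles)))
    return best

# Hypothetical indices (power distance, individualism) and role profiles:
managers = {"A": (35, 89), "B": (77, 48)}
roles = {"coordination": (40, 80), "customer_service": (70, 50)}
base_cost = {"A": {"coordination": 3, "customer_service": 4},
             "B": {"coordination": 4, "customer_service": 3}}
cost, plan = assign_roles(managers, roles, base_cost)
```

The cultural-distance weight plays the role of the "accord between organizational and national cultures" mentioned above; a real formulation would be a proper linear program.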
Procedia PDF Downloads 106
156 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The complexity of Arabic grammar poses challenges for learners, particularly in relation to its arrangement, classification, abundance, and bifurcation. This challenge results from the contextual factors that gave rise to the grammatical rules in question, as well as from the pedagogical approach employed at the time, which was tailored to the needs of learners during that particular historical period; consequently, modern-day students encounter the same obstacle. This requires a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequently encountered, obscure, or disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competencies for achieving proficiency in Arabic, it is imperative to ascertain the conventions that language learners require in alignment with explicitly delineated benchmarks such as the CEFR criteria. The aim of this study is to reduce the quantity of grammatical rules that are typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to enhance learners' motivation to continue their Arabic language acquisition and to approach the proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar. The proliferation and complexity of rules evident in Arabic language textbooks designed for non-native speakers are noteworthy.
The inadequate organisation and delivery of the material create the impression that the grammar is being imparted to a student with the intention of memorising "Alfiyyat-Ibn-Malik". Consequently, the sequence of grammar instruction has been altered: rules originally intended for later instruction are presented first, and those intended for earlier instruction are presented subsequently. Students often focus on learning grammatical rules that are not necessarily required while neglecting the rules that are commonly used in everyday speech and writing. Non-Arab students are taught chapters of Arabic grammar that are infrequently utilised in Arabic literature and may be a topic of debate among grammarians. These findings are derived from the statistical analysis and investigations conducted by the researcher, which will be disclosed in due course of the research. To instruct non-Arabic speakers in grammatical rules, it is imperative to discern the most prevalent grammatical frameworks in grammar manuals and linguistic literature (the study sample). The present proposal suggests allocating grammatical structures across linguistic levels, taking into account the guidelines of the CEFR, as well as the grammatical structures that non-Arabic-speaking learners need in order to produce modern, cohesive, and comprehensible language.
Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis
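The frequency-based ranking described above can be sketched in a few lines. This is a generic illustration, not the researcher's statistical method: the rule tags are invented, and the even split across CEFR levels is a placeholder for whatever allocation criteria the study adopts:

```python
from collections import Counter

def rank_rules_by_frequency(tagged_corpus):
    """Count how often each grammatical rule is attested in a tagged corpus
    and return the rules sorted from most to least frequent."""
    counts = Counter(tag for sentence in tagged_corpus for tag in sentence)
    return counts.most_common()

def allocate_to_levels(ranked, levels=("A1", "A2", "B1", "B2", "C1", "C2")):
    """Toy allocation: split the frequency-ranked rule list evenly across
    CEFR levels, so the commonest rules are taught first (A1)."""
    per_level = -(-len(ranked) // len(levels))  # ceiling division
    return {lvl: [rule for rule, _ in ranked[i * per_level:(i + 1) * per_level]]
            for i, lvl in enumerate(levels)}

# Hypothetical rule tags attested in a small tagged sample:
corpus = [["idafa", "nominal_sentence"], ["idafa", "verbal_sentence"],
          ["idafa", "nominal_sentence", "hal"]]
ranked = rank_rules_by_frequency(corpus)
```

Rules that never reach a frequency threshold would simply fall outside the taught inventory, which is the reduction the study aims for.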
Procedia PDF Downloads 91
155 Rationally Designed Dual PARP-HDAC Inhibitor Elicits Striking Anti-leukemic Effects
Authors: Amandeep Thakur, Yi-Hsuan Chu, Chun-Hsu Pan, Kunal Nepali
Abstract:
The transfer of ADP-ribose residues from nicotinamide adenine dinucleotide (NAD) onto target substrates (PARylation) is catalyzed by poly(ADP-ribose) polymerases (PARPs). Amongst the PARP family members, the DNA damage response in cancer is mainly regulated by PARP1 and PARP2. The blockade of DNA repair by PARP inhibitors leads to the progression of DNA single-strand breaks (induced by some triggering factor) to double-strand breaks. Notably, PARP inhibitors are remarkably effective in cancers with defective homologous recombination repair (HRR). In particular, cancer cells with BRCA mutations are responsive to therapy with PARP inhibitors. This requirement confers a narrow activity spectrum on PARP inhibitors, which hinders their clinical applicability. Thus, there is a pressing need to expand the application horizons of PARP inhibitors beyond BRCA mutations. Literature precedents reveal that HDAC inhibition induces BRCAness in cancer cells and can broaden the therapeutic scope of PARP inhibitors. Driven by such disclosures, our research group designed dual inhibitors targeting both PARP and HDAC enzymes to extend the efficacy of PARP inhibitors beyond BRCA-mutated cancers to cancers with induced BRCAness. The design strategy involved installing veliparib, an investigational PARP inhibitor, as the surface recognition part in the HDAC inhibitor pharmacophore model. The chemical architecture of veliparib was deemed an appropriate starting point for the generation of dual inhibitors by virtue of its size and structural flexibility. A validatory docking study was conducted at the outset to predict the binding mode of the designed dual modulatory chemical architectures. Subsequently, the designed compounds were synthesized via a multistep synthetic route and evaluated for antitumor efficacy.
Notably, one compound manifested impressive anti-leukemic effects (HL-60 cell line) mediated via dual inhibition of PARP and class I HDACs. Western blot analysis revealed that the compound could downregulate the expression levels of PARP1 and PARP2 and of the HDAC isoforms HDAC1, 2, and 3. The dual PARP-HDAC inhibitor also upregulated the protein expression of acetyl histone H3, confirming its inhibitory potential against class I HDACs. In addition, the dual modulator could arrest the cell cycle at the G0/G1 phase and induce autophagy. Further, a polymer-based nanoformulation of the dual inhibitor was furnished to afford targeted delivery at the cancer site. Transmission electron microscopy (TEM) results indicate that the nanoparticles were monodispersed and spherical, and the polymeric nanoformulation exhibited an appropriate particle size. The polymeric nanoformulation also manifested pH-sensitive behavior that led to selective antitumor effects towards the HL-60 cell line. In light of the promising anti-leukemic profile of the identified dual PARP-HDAC inhibitor, in-vivo studies (pharmacokinetics and pharmacodynamics) are currently being conducted. Notably, the optimistic findings of this study have spurred our research group to initiate several medicinal chemistry campaigns to create bifunctional small molecule inhibitors addressing PARP as the primary target.
Keywords: PARP inhibitors, HDAC inhibitors, BRCA mutations, leukemia
Procedia PDF Downloads 23
154 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated, for example, with nucleot(s)ide analogs (NAs), which inhibit HBV replication. However, NAs have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long and are not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA-typing) rather than only one. However, the values of these variables are collected independently and are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling human immune systems.
The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA-typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This technique enables us to harmonize and standardize heterogeneous datasets within the defined model of the data integration system, which will be evaluated as a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which comprises a multidisciplinary team of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
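A minimal illustration of the KG idea described above: facts stored as (subject, predicate, object) triples with wildcard pattern matching, akin to a basic SPARQL triple pattern. The patient facts and identifiers are invented for demonstration; the real framework builds on biomedical ontologies:

```python
def make_kg():
    """A minimal in-memory knowledge graph: a set of
    (subject, predicate, object) factual statements."""
    return set()

def add_fact(kg, s, p, o):
    kg.add((s, p, o))

def query(kg, s=None, p=None, o=None):
    """Triple-pattern matching with None as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in kg
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Hypothetical harmonised facts about one patient record:
kg = make_kg()
add_fact(kg, "patient:42", "hasDiagnosis", "chronic_HBV")
add_fact(kg, "patient:42", "hasMarker", "HBsAg_loss")
add_fact(kg, "chronic_HBV", "treatedWith", "nucleoside_analog")
```

Because every source (clinical tables, flow cytometry annotations, HLA-typing) is mapped to the same triple shape, patients identified differently in different analyses can be linked through shared identifiers.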
Procedia PDF Downloads 108
153 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers
Authors: M. Sarraf, J. E. Moros, M. C. Sánchez
Abstract:
Oleogels are self-standing systems able to trap edible liquid oil in a three-dimensional network, helping to reduce fat content through the crystallization of oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a relatively new native polysaccharide used in the food industry; owing to its high molecular weight, it exhibits high viscosity and pseudoplastic behavior. Proteins can also stabilize oil in water due to the presence of amino and carboxyl moieties that confer surface activity. Whey proteins are widely used in the food industry because they are cheap, readily available ingredients with nutritional and functional characteristics such as emulsifying, gelling, thickening, and water-binding capacities. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Edible oleogels used for oil structuring help in the targeted delivery of a component trapped in the structural network. Therefore, the development of efficient oleogels is essential in the food industry. A complete understanding of the key factors, such as the oil-phase ratio, processing conditions, and biopolymer concentrations, that affect the formation and stability of the emulsion provides crucial information for producing a suitable oleogel.
In this research, the effects of oil concentration and of the pressure used to manufacture the emulsion prior to obtaining the oleogel have been evaluated through the analysis of droplet size and of the rheological properties of the obtained emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and higher uniformity in the size distribution curve. In relation to the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present storage modulus values higher than loss modulus values and show an important plateau zone, typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the pressure used in the homogenizer is an important factor for obtaining emulsions with adequate droplet size and, subsequently, the oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network that absorbs the oil and forms the oleogel.
Keywords: basil seed gum, particle size, viscoelastic properties, whey protein
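As an illustrative aside (not the authors' rheometry software), the "predominantly elastic" criterion and the droplet-size uniformity can be expressed in a few lines; the moduli and percentile diameters below are hypothetical:

```python
def loss_tangent(g_storage, g_loss):
    """tan(delta) = G'' / G'; values below 1 mean the sample responds
    predominantly elastically (gel-like), as in the plateau zone."""
    return g_loss / g_storage

def classify(g_storage, g_loss):
    return "gel-like" if loss_tangent(g_storage, g_loss) < 1 else "sol-like"

def span(d10, d50, d90):
    """Width of a droplet-size distribution, (d90 - d10) / d50;
    a smaller span means a more uniform emulsion."""
    return (d90 - d10) / d50

# Hypothetical plateau-zone moduli (Pa) and percentile diameters (um):
state = classify(1200.0, 300.0)
uniformity = span(0.2, 0.5, 1.1)
```

A decreasing span with increasing homogenization pressure would quantify the "higher uniformity in the size distribution curve" described above.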
Procedia PDF Downloads 66
152 Effect of Methoxy and Polyene Additional Functionalized Group on the Photocatalytic Properties of Polyene-Diphenylaniline Organic Chromophores for Solar Energy Applications
Authors: Ife Elegbeleye, Nnditshedzeni Eric, Regina Maphanga, Femi Elegbeleye, Femi Agunbiade
Abstract:
The global potential of other renewable energy sources such as wind, hydroelectric, biomass, and geothermal is estimated to be approximately 13%, with hydroelectricity constituting the larger percentage. Sunlight provides by far the largest of all carbon-neutral energy sources. More energy from sunlight strikes the Earth in one hour (4.3 × 10^20 J) than all the energy consumed on the planet in a year (4.1 × 10^20 J); hence, solar energy remains the most abundant clean, renewable energy resource for mankind. Photovoltaic (PV) devices such as silicon solar cells and dye-sensitized solar cells are utilized for harnessing solar energy. Polyene-diphenylaniline organic molecules are an important set of molecules that have stirred much research interest as photosensitizers in TiO₂ semiconductor-based dye-sensitized solar cells (DSSCs). The advantages of organic dye molecules over metal-based complexes are a higher extinction coefficient, moderate cost, good environmental compatibility, and favourable electrochemical properties. The polyene-diphenylaniline organic dyes, with the basic donor-π-acceptor configuration, are affordable, easy to synthesize, and possess chemical structures that can easily be modified to optimize their photocatalytic and spectral properties. The enormous interest in polyene-diphenylaniline dyes as photosensitizers is due to their fascinating spectral properties, which include absorption from the visible to the near-infrared. In this work, a density functional theory approach via the GPAW software, Avogadro, and ASE was employed to study the effect of the methoxy functionalized group on the spectral properties of polyene-diphenylaniline dyes and their photon-absorbing characteristics in the visible to near-infrared region of the solar spectrum. Our results show that the two phenyl-based complexes D5 and D7 exhibit maximum absorption peaks at 750 nm and 850 nm, while D9 and D11, with the methoxy group, show maximum absorption peaks at 800 nm and 900 nm, respectively.
The highest absorption wavelengths are notable for D9 and D11, which contain additional polyene and methoxy groups. Also, the D9 and D11 chromophores with the methoxy group show lower energy gaps of 0.98 and 0.85, respectively, than the corresponding D5 and D7 complexes, with energy gaps of 1.32 and 1.08. The analysis of their electron injection kinetics ΔG_inject into the band gap of TiO₂ shows that D9 and D11, with the methoxy group, have ΔG_inject values of -2.070 and -2.030, compared with -2.820 and -2.130, respectively, for the corresponding polyene-diphenylaniline complexes without the additional polyene group. Our findings suggest that the addition of a functionalized group as an extension of the organic complexes results in higher light harvesting efficiencies and a bathochromic shift of the absorption spectra to longer wavelengths, which suggests higher current densities and open circuit voltages in DSSCs. The study suggests that the photocatalytic properties of organic chromophores/complexes with a donor-π-acceptor configuration can be enhanced by the addition of functionalized groups.
Keywords: renewable energy resource, solar energy, dye sensitized solar cells, polyene-diphenylaniline organic chromophores
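The ΔG_inject values above are reported without their derivation. A common textbook estimate, sketched here under assumed values rather than as the authors' computation, takes ΔG_inject = E_ox(dye*) − E_CB(TiO₂), where the excited-state oxidation potential is the ground-state value minus the 0-0 excitation energy. Both the oxidation potential and the TiO₂ conduction-band level below are hypothetical inputs:

```python
def excitation_energy_eV(wavelength_nm):
    """Approximate 0-0 excitation energy from the absorption
    wavelength: E = hc / lambda ~ 1240 / lambda(nm) eV."""
    return 1240.0 / wavelength_nm

def delta_g_inject(e_ox_dye, wavelength_nm, e_cb_tio2=-4.0):
    """Free energy of electron injection (eV):
    dG = E_ox(dye*) - E_CB(TiO2), with E_ox(dye*) = E_ox(dye) - E_0-0.
    A negative value indicates thermodynamically favourable injection.
    e_cb_tio2 = -4.0 eV (vacuum scale) is an assumed literature value."""
    e_ox_excited = e_ox_dye - excitation_energy_eV(wavelength_nm)
    return e_ox_excited - e_cb_tio2

# Hypothetical ground-state oxidation potential (eV, vacuum scale)
# and absorption maximum for a D9-like chromophore:
dg = delta_g_inject(-5.2, 800)
```

A more negative ΔG_inject means a larger thermodynamic driving force for injecting the photoexcited electron into the TiO₂ conduction band.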
Procedia PDF Downloads 111
151 Farm-Women in Technology Transfer to Foster the Capacity Building of Agriculture: A Forecast from a Draught-Prone Rural Setting in India
Authors: Pradipta Chandra, Titas Bhattacharjee, Bhaskar Bhowmick
Abstract:
The foundation of the Indian economy is primarily agriculture, yet agriculture is the most neglected sector in the rural setting. Significantly, household women take part in agriculture with high levels of involvement. However, because of their lower education, women have limited access to financial decisions, land ownership, and technology, even though they play a vital role at the individual family level. There are limited studies on institution-wise training barriers with a focus on gender disparity. The main purpose of this paper is to identify the factors constituting institution-wise training (non-formal education) barriers in technology transfer, with a focus on the participation of rural women in agriculture. For this study, primary and secondary data were collected following qualitative and quantitative approaches. Qualitative data were collected through several field visits in the areas adjacent to Seva-Bharati, Seva Bharati Krishi Vigyan Kendra, using semi-structured questionnaires. Next, detailed field surveys were conducted with close-ended questionnaires scored on a seven-point Likert scale. The sample size was 162. During data collection the focus was on including women, although some bias from respondents and interviewers may exist due to differences in observation, views, etc. The heterogeneity of the sample is not very high, although female participation is more than fifty percent. Data were analyzed using the Exploratory Factor Analysis (EFA) technique, which yielded three significant factors of training barriers in technology adoption by farmers: (a) Failure of technology transfer training (TTT) comprehension: the technology takers, i.e., farmers, cannot understand the technology because of either a language barrier or the way the demonstration is exhibited by the experts/trainers. (b) Failure of TTT customization: the training is not tailored to the individual farmer, gender, crop, or season.
(c) Failure of TTT generalization: the absence of common training methods across individual trainers for specific crops is prominent at the community level. The central finding is that the technology transfer training method cannot fulfill the needs of farmers in an economically challenged area. The impact of such a study is very high for the dry lateritic and resource-scarce area of Jangalmahal in the Paschim Medinipur district, West Bengal, and for areas with a similar socio-economy. At the policy level, this research may help in framing digital agriculture: implementing appropriate information technology for the farming community, enabling effective and timely government investment with proper selection of beneficiaries, and forming farmers' clubs/farm science clubs. The most important research implication of this study lies in its contribution to the knowledge diffusion mechanism of the agricultural sector in India. Farmers may overcome the barriers and achieve higher productivity through the adoption of modern farm practices. Corporations may take an interest in the agro-sector through investment under corporate social responsibility (CSR). The research will help in framing public and industry policy and land use patterns. Consequently, a huge mass of rural farm-women will be empowered, and the farming community will benefit.
Keywords: dry lateritic zone, institutional barriers, technology transfer in India, farm-women participation
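On the survey-analysis side, the internal consistency of the Likert items loading on one extracted factor is often checked with Cronbach's alpha before interpreting an EFA solution. This is a hedged illustration, not the authors' computation; the 7-point responses below are invented:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for the internal consistency of items loading
    on one factor; item_scores[i][j] = respondent i's score on item j."""
    n_items = len(item_scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in item_scores]) for j in range(n_items)]
    total_var = var([sum(row) for row in item_scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 7-point Likert responses for three items of one factor:
scores = [[6, 5, 6], [2, 3, 2], [5, 5, 4], [3, 2, 3], [7, 6, 6]]
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable consistency, supporting the interpretation of the factor as a single barrier construct.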
Procedia PDF Downloads 373
150 Identification of a Panel of Epigenetic Biomarkers for Early Detection of Hepatocellular Carcinoma in Blood of Individuals with Liver Cirrhosis
Authors: Katarzyna Lubecka, Kirsty Flower, Megan Beetch, Lucinda Kurzava, Hannah Buvala, Samer Gawrieh, Suthat Liangpunsakul, Tracy Gonzalez, George McCabe, Naga Chalasani, James M. Flanagan, Barbara Stefanska
Abstract:
Hepatocellular carcinoma (HCC), the most prevalent type of primary liver cancer, is the second leading cause of cancer death worldwide. The late onset of clinical symptoms in HCC results in late diagnosis and poor disease outcome. Approximately 85% of individuals with HCC have underlying liver cirrhosis. However, not all cirrhotic patients develop cancer. Reliable early detection biomarkers that can distinguish cirrhotic patients who will develop cancer from those who will not are urgently needed and could increase the cure rate from 5% to 80%. We used the Illumina-450K microarray to test whether blood DNA, an easily accessible source of DNA, bears site-specific changes in DNA methylation in response to HCC before diagnosis with conventional tools (pre-diagnostic). The top 11 differentially methylated sites were selected for validation by pyrosequencing. The diagnostic potential of the 11 pyrosequenced probes was tested in blood samples from a prospective cohort of cirrhotic patients. We identified 971 differentially methylated CpG sites in pre-diagnostic HCC cases as compared with healthy controls (P < 0.05, paired Wilcoxon test, ICC ≥ 0.5). Nearly 76% of the differentially methylated CpG sites showed lower levels of methylation in cases vs. controls (P = 2.973E-11, Wilcoxon test). Classification of the CpG sites according to their location relative to CpG islands and transcription start sites revealed that the hypomethylated loci are located, at higher frequency than hypermethylated sites, in regulatory regions important for gene transcription, such as CpG island shores, promoters, and 5’UTRs. Among the 735 CpG sites hypomethylated in cases vs. controls, 482 sites were assigned to gene coding regions, whereas the 236 hypermethylated sites corresponded to 160 genes.
Bioinformatics analysis using the GO, KEGG, and DAVID knowledgebases indicates that the differentially methylated CpG sites are located in genes associated with functions essential for gene transcription, cell adhesion, cell migration, and the regulation of signal transduction pathways. Taking into account the magnitude of the difference, statistical significance, location, and consistency across the majority of matched case-control pairs, we selected 11 CpG loci corresponding to 10 genes for further validation by pyrosequencing. We established that methylation of CpG sites within 5 of those 10 genes distinguishes cirrhotic patients who subsequently developed HCC from those who stayed cancer free (cirrhotic controls), demonstrating potential as biomarkers of early detection in populations at risk. The best predictive value was detected for CpGs located within BARD1 (AUC = 0.70, asymptotic significance < 0.01). Using an additive logistic regression model, we further showed that 9 CpG loci within those 5 genes, which were covered by the pyrosequenced probes, constitute a panel with high diagnostic accuracy (AUC = 0.887; 95% CI: 0.80-0.98). The panel was able to distinguish pre-diagnostic cases from cirrhotic controls free of cancer with 88% sensitivity at 70% specificity. Using blood as a minimally invasive material and pyrosequencing as a straightforward quantitative method, the established biomarker panel has high potential to be developed into a routine clinical test after validation in larger cohorts. This study was supported by the Showalter Trust, the American Cancer Society (IRG#14-190-56), and the Purdue Center for Cancer Research (P30 CA023168), granted to BS.
Keywords: biomarker, DNA methylation, early detection, hepatocellular carcinoma
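The AUC values reported above can be computed without plotting an ROC curve, via the rank (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with invented methylation scores (not the study's data):

```python
def auc(case_scores, control_scores):
    """Area under the ROC curve as the Mann-Whitney statistic
    U / (n_cases * n_controls); ties count as 0.5."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical methylation scores at one candidate CpG for
# pre-diagnostic cases vs. cancer-free cirrhotic controls:
a = auc([0.80, 0.65, 0.70, 0.55], [0.50, 0.60, 0.40, 0.66])
```

For a multi-locus panel, the same function would be applied to the fitted probabilities of the additive logistic regression model rather than to a single CpG's methylation level.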
Procedia PDF Downloads 304
149 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming; therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in Machine Learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved downloading 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, providing a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal set for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction.
The ML models compared include logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences point to unique opportunities for more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
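The k-mer and one-hot descriptors mentioned above can be sketched in a few lines. This is a generic illustration, not the authors' exact feature pipeline; the gene fragment is invented:

```python
from collections import Counter
from itertools import product

def one_hot(seq, alphabet="ACGT"):
    """Flat one-hot encoding of a nucleotide sequence
    (one 0/1 slot per alphabet letter per position)."""
    return [1 if base == a else 0 for base in seq for a in alphabet]

def kmer_counts(seq, k=3, alphabet="ACGT"):
    """Fixed-length k-mer count vector over the full alphabet, so
    sequences of different lengths map to comparable feature vectors."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts["".join(km)] for km in product(alphabet, repeat=k)]

# Hypothetical gene fragment to be labelled via VFDB/CARD membership:
x = one_hot("ACGT")
v = kmer_counts("ACGTACG", k=3)
```

The k-mer vector (length 4^k) is length-independent and suits classifiers such as logistic regression or SVMs, whereas positional one-hot encodings are more natural inputs for neural networks.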
Procedia PDF Downloads 30
148 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer
Authors: Sebastian Hiergeist
Abstract:
It is a common view that Unmanned Aerial Vehicles (UAV) tend to migrate into the civil airspace. This trend challenges UAV manufacturers in many ways, as numerous new requirements and functional aspects arise. On the higher application levels, these might be collision detection and avoidance and similar features, whereas all these functions only act as input for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring a continuous safe flight and landing. As these systems are flight critical, they have to be built redundantly to be able to provide fail-operational behavior. Recent architectural approaches for FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may not be available anymore, as they are more and more replaced by more sophisticated System on Chip (SoC) devices. As the avionic industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from foreign markets is almost inevitable. Products stemming from the industrial market, developed according to IEC 61508, or automotive SoCs, developed according to ISO 26262, can be seen as candidates, as they have been developed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, e.g., Ethernet, SPI, or FlexRay, that might come into account for the implementation of a redundancy network. In this context, possible network architectures shall be investigated which could be established by using the interfaces stated above.
Of importance here is the avoidance of any single point of failure, as well as a proper segregation into distinct fault containment regions. The performed analysis is supported by the use of guidelines on the reliability of data networks published by the aviation authorities (FAA and EASA). The main focus clearly lies on the reachable level of safety, but other aspects like performance and determinism also play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, the risk of design errors, which might lead to common mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity will also be considered to limit the effect of design errors. To achieve this, the work is limited to interfaces broadly available in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline on building up a redundancy network between SoCs, solely using on-board interfaces. Therefore, the author will provide a detailed usability analysis of interfaces provided by recent SoC solutions, suggestions on possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, e.g., safety or performance.
Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)
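The fail-operational principle behind such redundancy networks can be illustrated with a 2-out-of-3 majority voter: as long as any two of three redundant lanes agree, a single faulty lane is masked and the system stays operational. The sketch below is a conceptual toy (arbitrary tolerance and values), not a flight-qualified design or part of the cited work.

```python
def vote_2oo3(lane_values, tolerance=0.01):
    """Majority-vote three redundant lane outputs.

    Returns the median if at least two lanes agree within tolerance,
    otherwise raises, signalling loss of the fail-operational margin.
    """
    a, b, c = lane_values
    agreements = [abs(a - b) <= tolerance,
                  abs(b - c) <= tolerance,
                  abs(a - c) <= tolerance]
    if not any(agreements):
        raise RuntimeError("no two lanes agree: voter failure")
    # The median masks a single faulty lane.
    return sorted(lane_values)[1]


# One lane faulty: the median masks the fault, the system stays operational.
print(vote_2oo3([1.000, 1.002, 7.5]))  # -> 1.002
```

In a real FCC, this voting would run over the SoC interfaces discussed above (e.g., Ethernet or SPI links between lanes), with the tolerance derived from sensor and timing budgets.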
Procedia PDF Downloads 218
147 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin
Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven
Abstract:
The tremendous achievements in the development of society, concretized in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other issues, the decrease of fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, conditioned by the usage of genetically modified fast-growing trees or wastes as primary sources. In this context, the abundance and complex structure of lignin may offer various possibilities of exploitation. However, its transformation into fuels or chemicals supposes a complex chemistry involving the cleavage of C-O and C-C bonds and the altering of functional groups. Chemistry has offered various solutions in this sense; however, despite the intense work, there are still many drawbacks limiting industrial application. Thus, the proposed technologies considered mainly homogeneous catalysts, meaning expensive noble-metal-based systems that are hard to recover at the end of the reaction. Also, the reactions were carried out in organic solvents that are not acceptable today from the environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to generate a proper (acidic) catalytic function. On this composite, cobalt nanoparticles were deposited in order to catalyze the C-C bond splitting.
With this purpose, we developed a protocol to prepare multifunctional and magnetically separable nano-composite Co@Nb2O5@Fe3O4 catalysts. We also established an analytic protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and the solid phase. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity to the desired fragments. The optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a conversion of lignin of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of the elements included in the composition was determined by inductively coupled plasma optical emission spectrometry (ICP-OES).
Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts
Procedia PDF Downloads 328
146 Development of Mesoporous Gel Based Nonwoven Structure for Thermal Barrier Application
Authors: R. P. Naik, A. K. Rakshit
Abstract:
In recent years, with the rapid development of science and technology, people have increasing requirements for clothing with new functions, which creates opportunities for the further development and incorporation of new technologies along with novel materials. In this context, textiles act as media of fast heat absorption or fast heat radiation as far as the comfort of textile articles is concerned. The microstructure and texture of textiles play a vital role in determining the heat-moisture comfort level of the human body, because clothing serves as a barrier to the outside environment and a transporter of heat and moisture from the body to the surrounding environment to keep a thermal balance between body heat produced and body heat lost. The main bottlenecks preventing textile materials from succeeding as thermal insulation materials can be enumerated as follows. Firstly, a high loft or bulkiness of the material is needed to provide a predetermined amount of insulation by ensuring sufficient trapping of air. Secondly, the insulation is degraded by forced convection; such convective heat loss cannot be prevented by the textile material. Thirdly, the textile alone cannot reach a thermal conductivity lower than that of air, 0.025 W/m·K. Perhaps nano-fibers can do so, but mass production and cost-effectiveness remain a problem. Finally, such high-loft materials for thermal insulation become heavier and hard to manage, especially when they must be carried on the body. The proposed work aims at developing lightweight, effective thermal insulation textiles in combination with nanoporous silica gel, which provides the fundamental basis for the optimization of material properties to achieve good performance of the clothing system. A flexible nonwoven silica-gel composite fabric with an intact monolith was successfully developed by reinforcing SiO₂ gel in a thermally bonded nonwoven fabric via sol-gel processing.
The ambient pressure drying method was chosen for the silica-gel preparation for cost-effective manufacturing. The formed structure of the nonwoven/SiO₂-gel composites was analyzed, and the transfer properties were measured. The effects of structure and fibre on the thermal properties of the SiO₂-gel composites were evaluated. Samples were then tested against untreated samples of the same GSM in order to study the effect of the SiO₂-gel application on various properties of the nonwoven fabric. The nonwoven fabric composites reinforced with aerogel, which showed an intact monolith structure, were also analyzed for their surface structure, functional groups present, and microscopic images. The developed product reveals a significant reduction in pore size and air permeability compared to the conventional nonwoven fabric. The composite made from polyester fibre with lower GSM shows the lowest thermal conductivity. The results obtained were statistically analyzed using the STATISTICA-6 software for their level of significance. Univariate tests of significance for various parameters were performed, giving the P value for analyzing the significance level; along with that, the regression summary for the dependent variable was also studied to obtain the correlation coefficient.
Keywords: silica-gel, heat insulation, nonwoven fabric, thermal barrier clothing
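The statistical workflow described above (univariate significance tests yielding P values, plus regression giving a correlation coefficient) can be sketched with standard tools; the conductivity readings and GSM values below are illustrative placeholders, not the study's data, and SciPy stands in for the STATISTICA-6 software used by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical thermal-conductivity readings (W/m.K) for untreated vs
# SiO2-gel-treated nonwoven samples; values are illustrative only.
untreated = np.array([0.042, 0.044, 0.041, 0.043, 0.045])
treated = np.array([0.031, 0.030, 0.033, 0.029, 0.032])

# Univariate test of significance: a two-sample t-test gives the P value.
t_stat, p_value = stats.ttest_ind(untreated, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Regression of conductivity on fabric GSM to obtain the
# correlation coefficient for the dependent variable.
gsm = np.array([100, 150, 200, 250, 300])
conductivity = np.array([0.029, 0.031, 0.033, 0.036, 0.038])
res = stats.linregress(gsm, conductivity)
print(f"slope = {res.slope:.6f}, r = {res.rvalue:.3f}")
```

A P value below the chosen threshold (commonly 0.05) would indicate that the gel treatment changes conductivity significantly, and the r value quantifies how strongly conductivity tracks GSM.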
Procedia PDF Downloads 111
145 Single Cell and Spatial Transcriptomics: A Beginner's Viewpoint from the Conceptual Pipeline
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNAseq), unveil the nature of gene expression in the cell. The obtained gene expression profile provides clues to cellular traits and their dynamics, which can be studied in relation to function and responses. RNAseq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single cell and spatial transcriptomics both present varying avenues for exposing the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, auto-immune diseases, and hematopoietic diseases, among others, from investigated biological tissue samples. Single cell transcriptomics permits a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which enables high-throughput gene expression studies. However, this technique generates gene expression data for many cells while lacking the cells' positional coordinates within the tissue. As science develops, the complementary use of pre-established tissue reference maps, built with molecular and bioinformatics techniques, has innovatively sprung forth and is now used to resolve this setback, producing both levels of data in one shot of scRNAseq analysis. This is an emerging conceptual approach in methodology for integrative and progressively dependable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and expose the nature of cell-to-cell interactions.
These are also vital genomic signatures and characterizations for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. Spatial transcriptomics, on the other hand, is tissue-level based and is utilized to study biological specimens with heterogeneous features. It exposes the gross identity of the investigated mammalian tissues, which can then be used to study cell differentiation, track cell-line trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two approaches to RNA transcriptomics in varying quantities of cell lines, with avenues for appropriate resolution, both have made the study of gene expression from mRNA molecules interesting, progressive, and developmental, and are helping to tackle health challenges head-on.
Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression
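A first step shared by the scRNAseq analyses mentioned above is making expression profiles comparable across cells before any clustering or mapping onto a spatial reference. The sketch below shows standard library-size normalization with a log transform on a toy cell-by-gene count matrix; the matrix is a made-up placeholder, and real pipelines (e.g., dedicated single-cell toolkits) add many further steps.

```python
import numpy as np

# Toy cell-by-gene count matrix (4 cells x 5 genes); real scRNAseq data
# would have thousands of cells and genes.
counts = np.array([
    [10, 0, 3, 7, 0],
    [20, 1, 6, 13, 0],
    [0, 5, 0, 1, 14],
    [1, 9, 0, 2, 28],
], dtype=float)

# Library-size normalization: scale each cell to the same total count,
# then log-transform to stabilize variance, a standard preprocessing
# step before clustering cells or aligning them to a tissue reference map.
totals = counts.sum(axis=1, keepdims=True)
target = totals.mean()
normalized = np.log1p(counts / totals * target)

# Each cell now has a comparable scale; similar cells have similar profiles.
print(np.round(normalized, 2))
```

After this step, every cell contributes the same total signal, so differences between rows reflect expression composition rather than sequencing depth.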
Procedia PDF Downloads 122
144 Li-Ion Batteries vs. Synthetic Natural Gas: A Life Cycle Analysis Study on Sustainable Mobility
Authors: Guido Lorenzi, Massimo Santarelli, Carlos Augusto Santos Silva
Abstract:
The growth of non-dispatchable renewable energy sources in the European electricity generation mix is promoting the search for technically feasible and cost-effective solutions to make use of the excess energy produced when demand is low. The increasing intermittent renewable capacity is becoming a challenge to face, especially in Europe, where some countries produced more than 20% of their total electricity from wind and solar in 2015, with Denmark at around 40%. However, other consumption sectors (mainly transportation) still rely considerably on fossil fuels, with a slow transition to other forms of energy. Among the opportunities for different mobility concepts, electric vehicles (EV) and biofuel-powered vehicles (BPV) are the options that currently appear most promising. EVs mainly target light-duty users because of their zero (full electric) or reduced (hybrid) local emissions, while BPVs encourage the use of alternative resources with the same technologies (thermal engines) used so far. The batteries applied in EVs are based on lithium ions because of their overall good performance in energy density, safety, cost, and temperature behavior. Biofuels, instead, can be various, and the major difference is their physical state (liquid or gaseous). In this study, gaseous biofuels are considered, more specifically Synthetic Natural Gas (SNG) produced through a Power-to-Gas process consisting of an electrochemical upgrade (with Solid Oxide Electrolyzers) of biogas with CO2 recycling. The latter process combines a first stage of electrolysis, where syngas is produced, with a second stage of methanation, in which the product gas is turned into methane and then made available for consumption. A techno-economic comparison between the two alternatives is possible, but it does not capture all the different aspects involved in the two routes for the promotion of more sustainable mobility.
For this reason, a more comprehensive methodology, i.e., Life Cycle Assessment, is adopted to describe the environmental implications of using excess electricity (directly or indirectly) for new vehicle fleets. The functional unit of the study is 1 km, and the two options are compared in terms of overall CO2 emissions, considering both Cradle-to-Gate and Cradle-to-Grave boundaries. Showing how the production and disposal of materials affect the environmental performance of the analyzed routes is useful to broaden the perspective on the impacts that different technologies produce, in addition to what is emitted during the operational life. In particular, this applies to batteries, for which the decommissioning phase has a larger impact on the environmental balance compared to electrolyzers. The energy density of Li-ion batteries is lower than that of SNG by more than one order of magnitude, implying that, for the same amount of energy used, more material resources are needed to obtain the same effect. The comparison is performed in an energy system that simulates the Western European one, in order to assess which of the two solutions is more suitable to lead the de-fossilization of the transport sector with the least resource depletion and the mildest consequences for the ecosystem.
Keywords: electrical energy storage, electric vehicles, power-to-gas, life cycle assessment
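The functional-unit accounting behind such a comparison can be sketched in a few lines: Cradle-to-Grave emissions per km are the embodied (production plus disposal) emissions spread over the vehicle lifetime, plus the use-phase emissions per km. All numbers below are hypothetical placeholders for illustration only; they are not results or inputs of the study.

```python
def emissions_per_km(production_kg_co2, lifetime_km, use_kg_co2_per_km,
                     disposal_kg_co2):
    """Cradle-to-Grave CO2 emissions per functional unit (1 km)."""
    # Embodied emissions are amortized over the vehicle's lifetime mileage.
    embodied = (production_kg_co2 + disposal_kg_co2) / lifetime_km
    return embodied + use_kg_co2_per_km


# Purely illustrative factors (NOT from the study): the EV carries larger
# embodied emissions (battery production/decommissioning), the SNG vehicle
# larger use-phase emissions.
ev = emissions_per_km(production_kg_co2=8000, lifetime_km=200_000,
                      use_kg_co2_per_km=0.05, disposal_kg_co2=1500)
sng = emissions_per_km(production_kg_co2=6000, lifetime_km=200_000,
                       use_kg_co2_per_km=0.09, disposal_kg_co2=500)
print(f"EV:  {ev:.4f} kg CO2/km")
print(f"SNG: {sng:.4f} kg CO2/km")
```

The structure makes the abstract's point visible: shifting the boundary from Cradle-to-Gate to Cradle-to-Grave moves the comparison by exactly the amortized disposal term.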
Procedia PDF Downloads 178
143 The Procedural Sedation Checklist Manifesto, Emergency Department, Jersey General Hospital
Authors: Jerome Dalphinis, Vishal Patel
Abstract:
The Bailiwick of Jersey is an island British Crown dependency situated off the coast of France. Jersey General Hospital's emergency department sees approximately 40,000 patients a year. It is outside the NHS, with secondary care being free at the point of care. Sedation is a continuum which extends from a normal conscious level to being fully unresponsive. Procedural sedation produces a minimally depressed level of consciousness in which the patient retains the ability to maintain an airway and responds appropriately to physical stimulation. Its goals are to improve patient comfort and tolerance of the procedure and to alleviate associated anxiety. Indications can be stratified by acuity: emergency (cardioversion for life-threatening dysrhythmia) and urgent (joint reduction). In the emergency department, this is most often achieved using a combination of opioids and benzodiazepines. Some departments also use ketamine to produce dissociative sedation, a cataleptic state of profound analgesia and amnesia. The response to pharmacological agents is highly individual, and the drugs used occasionally have unpredictable pharmacokinetics and pharmacodynamics, which can result in progression between levels of sedation irrespective of the intention. Therefore, practitioners must be able to 'rescue' patients from deeper sedation. These practitioners need to be senior clinicians with advanced airway skills (AAS) training. If incorrectly undertaken, procedural sedation can lead to adverse effects such as dangerous hypoxia and unintended loss of consciousness; studies by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) have reported avoidable deaths. The Royal College of Emergency Medicine, UK (RCEM) released an updated 'Safe Sedation of Adults in the Emergency Department' guidance in 2017, detailing a series of standards for staff competencies and the environment and equipment required for each target sedation depth.
The emergency department in Jersey undertook an audit in 2018 to assess its current practice. It showed gaps in clinical competency and the need for uniform care and improved documentation. This spurred the development of a checklist incorporating the above RCEM standards, including contraindications for procedural sedation and difficult airway assessment. The checklist was approved following discussion with the relevant heads of departments and the patient safety directorates. Following this, a second audit was carried out in 2019 with 17 completed checklists (11 relocations of joints, 6 cardioversions). Data were obtained from the controlled resuscitation drugs book, which contains the documented use of ketamine, alfentanil, and fentanyl. TrakCare, the patient electronic record system, was then referenced to obtain further information. The results showed dramatic improvement compared to 2018 and were subdivided into six categories: pre-procedure assessment recording of significant medical history and ASA grade (2-fold increase), informed consent (100% documentation), pre-oxygenation (88%), staff (90% were AAS practitioners), monitoring during the procedure (92% use of non-invasive blood pressure, pulse oximetry, capnography, and cardiac rhythm monitoring), and discharge instructions including the documented return of normal vitals and consciousness (82%). This procedural sedation checklist is a safe intervention that identifies pertinent information about the patient and provides a standardised checklist for the delivery of a gold standard of care.
Keywords: advanced airway skills, checklist, procedural sedation, resuscitation
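The audit scoring described above (per-category compliance percentages over completed checklists) can be sketched as a small data structure; the category names mirror the six audit categories, but the records below are hypothetical and the code is only an illustration of how such an audit could be tallied, not the department's actual tooling.

```python
# The six audit categories reported in the 2019 re-audit.
CATEGORIES = [
    "pre_procedure_assessment", "informed_consent", "pre_oxygenation",
    "aas_practitioner", "monitoring", "discharge_instructions",
]


def compliance_rates(checklists):
    """Percentage of completed checklists satisfying each category."""
    n = len(checklists)
    return {c: 100 * sum(ch.get(c, False) for ch in checklists) / n
            for c in CATEGORIES}


# Hypothetical audit: 15 fully compliant checklists and 2 missing
# documented pre-oxygenation (roughly matching an 88% rate over 17 cases).
audit = [dict.fromkeys(CATEGORIES, True) for _ in range(15)]
audit += [dict(dict.fromkeys(CATEGORIES, True), pre_oxygenation=False)
          for _ in range(2)]
rates = compliance_rates(audit)
print(round(rates["pre_oxygenation"], 1))
```

Tallying compliance this way makes year-on-year comparisons (2018 vs. 2019) a direct dictionary diff per category.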
Procedia PDF Downloads 117
142 Stability of Porous SiC Based Materials under Relevant Conditions of Radiation and Temperature
Authors: Marta Malo, Carlota Soto, Carmen García-Rosales, Teresa Hernández
Abstract:
SiC-based composites are candidates for possible use as structural and functional materials in future fusion reactors, with the main role intended for the blanket modules. In the blanket, the neutrons produced in the fusion reaction slow down, and their energy is transformed into heat in order to finally generate electrical power. In the blanket design named Dual Coolant Lead Lithium (DCLL), a PbLi alloy for power conversion and tritium breeding circulates inside hollow channels called Flow Channel Inserts (FCIs). These FCIs must protect the steel structures against the highly corrosive PbLi liquid and the high temperatures, but also provide electrical insulation in order to minimize magnetohydrodynamic interactions of the flowing liquid metal with the high magnetic field present in a magnetically confined fusion environment. Due to its nominally high temperature and radiation stability as well as its corrosion resistance, SiC is the main choice for the flow channel inserts. Its significantly lower manufacturing cost presents porous SiC (a dense coating is required in order to assure protection against corrosion and act as a tritium barrier) as a firm alternative to SiC/SiC composites for this purpose. This application requires the materials to be exposed to high radiation levels and extreme temperatures, conditions for which previous studies have shown noticeable changes in both the microstructure and the electrical properties of different types of silicon carbide. Both the initial properties and the radiation/temperature-induced damage strongly depend on the crystal structure, polytype, and impurities/additives, which are determined by the fabrication process, so the development of a suitable material requires full control of these variables.
For this work, several SiC samples with different percentages of porosity and sintering additives were manufactured by the so-called sacrificial template method at the Ceit-IK4 Technology Center (San Sebastián, Spain) and characterized at Ciemat (Madrid, Spain). Electrical conductivity was measured as a function of temperature before and after irradiation with 1.8 MeV electrons in the Ciemat HVEC Van de Graaff accelerator up to 140 MGy (~2·10⁻⁵ dpa). Radiation-induced conductivity (RIC) was also examined during irradiation at 550 ºC for different dose rates (from 0.5 to 5 kGy/s). Although no significant RIC was found in general for any of the samples, an increase of electrical conductivity with irradiation dose, with a linear tendency, was observed for some compositions. However, first results indicate enhanced radiation resistance for coated samples. Preliminary thermogravimetric tests of selected samples, together with subsequent XRD analysis, allowed the radiation-induced modification of the electrical conductivity to be interpreted in terms of changes in the SiC crystalline structure. Further analysis is needed in order to confirm this.
Keywords: DCLL blanket, electrical conductivity, flow channel insert, porous SiC, radiation damage, thermal stability
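The linear conductivity-versus-dose tendency mentioned above is typically quantified by a straight-line fit. The sketch below illustrates this with placeholder values spanning the 0-140 MGy range of the study; the conductivities are invented for illustration and are not measured data.

```python
import numpy as np

# Hypothetical conductivity-vs-dose points illustrating a linear tendency
# (placeholders, NOT measurements from the irradiation campaign).
dose_mgy = np.array([0.0, 35.0, 70.0, 105.0, 140.0])          # absorbed dose (MGy)
sigma = np.array([1.0e-9, 1.6e-9, 2.1e-9, 2.7e-9, 3.2e-9])    # conductivity (S/m)

# Least-squares straight-line fit: the slope quantifies the
# radiation-induced increase of conductivity per unit dose.
slope, intercept = np.polyfit(dose_mgy, sigma, 1)
print(f"d(sigma)/d(dose) = {slope:.2e} S/m per MGy")
```

Comparing such slopes across compositions (and between coated and uncoated samples) is one way to rank their radiation resistance.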
Procedia PDF Downloads 200
141 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories
Authors: Oibar Martinez, Clara Oliver
Abstract:
The Cherenkov Telescope Array Project (CTA) aims to build two observatories of Cherenkov Telescopes, located in Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard Directives on Electromagnetic Compatibility to astronomical observatories. Cherenkov Telescopes are able to provide valuable information from both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than light does in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface. This surface focuses the radiation on a camera composed of an array of high-speed photosensors which are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations.
In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in both optics and electromagnetism were needed to make these kinds of decisions and to adapt tests, which were designed for equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.
Keywords: CE marking, electromagnetic compatibility, European directive, scientific installations
Procedia PDF Downloads 110
140 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv
Authors: Md. Samsuddin Ansari, Ashish Arora
Abstract:
Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen is due to its ability to persist in host tissues. Cases of multidrug-resistant (MDR) mycobacteria increase every day; this resistance is associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of the following four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in the regulation of the phenolic acid stress response in various microorganisms (including Mycobacterium tuberculosis H37Rv). Recent research has shown that the PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein, and we are exploring its biophysical and structural characterization. The rv0047c gene was amplified by PCR using primers containing EcoRI and HindIII restriction enzyme sites, cloned into the pET-NH6 vector, and overexpressed in E. coli DH5α and BL21(λDE3) cells, followed by purification with a Ni²⁺-NTA column and size exclusion chromatography. Differential scanning calorimetry (DSC) was performed to determine the thermal stability: the protein has a Tm (transition temperature) of 55.29 ºC and a ΔH (enthalpy change) of 6.92 kcal/mol. Circular dichroism was used to study the secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques such as circular dichroism, fluorescence, and absorbance measurements over a wide pH range (from pH 2.0 to pH 12.0).
At low and high pH, drastic changes in the secondary and tertiary structure of the protein were observed. EMSA studies showed the specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined the CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by over-expressing the Rv0047c gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using the Swiss Model and Modeller, with validity checked by the Ramachandran plot. We performed molecular docking of Rv0047c with DnaA through PatchDock, followed by refinement through FireDock. Through this, it is possible to easily identify the binding hot-spots of the receptor molecule with those of the ligand, the nature of the interface itself, and the conformational change undergone by the protein. We are using X-ray crystallography to unravel the structure of Rv0047c. Overall, the studies show that Rv0047c may act in transcription regulation, provide an insight into the activity of Rv0047c across the pH range of the subcellular environment, and help to understand its protein-protein interactions, a novel target to kill dormant bacteria and a potential strategy for tuberculosis control.
Keywords: Mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction
Procedia PDF Downloads 121
139 Nursing Experience in Caring for a Patient with Terminal Gastric Cancer and Abdominal Aortic Aneurysm
Authors: Pei-Shan Liang
Abstract:
Objective: This article explores the nursing experience of caring for a patient with terminal gastric cancer complicated by an abdominal aortic aneurysm. The patient experienced physical discomfort due to the disease, was initially unable to accept the situation, leading to anxiety, and eventually accepted the need for surgery. Methods: The nursing period was from June 6 to June 10, 2024. Through observation, direct care, conversations, and physical assessments, and using Gordon's eleven functional health patterns for a one-on-one holistic assessment, interdisciplinary team meetings were held with the critical care team and family. Three nursing health issues were identified: pain related to the disease and invasive procedures, anxiety related to uncertainty about disease recovery, and decreased cardiac tissue perfusion related to hemodynamic instability. Results: Open communication techniques and empathetic care were employed to establish a trusting nurse-patient relationship, and patient-centered nursing interventions were developed. Pain was assessed using a 10-point pain scale, and pain medications were adjusted by a pharmacist. Initially, Fentanyl 500 mcg was administered via pump at 1 ml/hr, later changed to Ultracet 37.5 mg/325 mg, 1 tablet every 6 hours orally, reducing the pain score to 3. Lavender aromatherapy and listening to crystal music were used as distractions to alleviate pain, allowing the patient to sleep uninterrupted for at least 7 hours. The patient was encouraged to express feelings and fears through LINE messages or drawings, and a psychologist was invited to provide support. Family members were present at least twice a day for over an hour each time, reducing psychological distress and uncertainty about the prognosis. According to the Beck Anxiety Inventory, the anxiety score dropped from 17 (moderate anxiety) to 6 (no anxiety).
Focused nursing care was implemented with close monitoring of vital signs, maintaining systolic blood pressure between 112-118 mmHg to ensure adequate myocardial perfusion. The patient was encouraged to get out of bed for postoperative rehabilitation and to strengthen cardiopulmonary function. A chest X-ray showed no abnormalities, and breathing was smooth with Triflow use, raising 2 balls for at least 5 seconds four times a day, with SpO2 >96%. Conclusion: The care process highlighted the importance of addressing psychological care, in addition to maintaining life, when the patient’s condition changes. The presence of family often provided the greatest source of comfort for the patient, helping to reduce anxiety and pain. Nurses must play multiple roles, including advocate, coordinator, educator, and consultant, using various communication techniques and fostering hope by listening to and accepting the patient’s emotional responses. It is hoped that this report will provide a reference for clinical nursing staff and contribute to improving the quality of care.
Keywords: intensive care, gastric cancer, aortic aneurysm, quality of care
Procedia PDF Downloads 23
138 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, this can yield a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn its high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
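The Δ-ML strategy described in this abstract (high fidelity = low fidelity + learned correction, trained on few high-fidelity points) can be illustrated with a minimal sketch. All data below are synthetic stand-ins, and a simple least-squares model stands in for the paper's GCN; it is not the authors' implementation.

```python
import numpy as np

# Hypothetical synthetic data: a cheap low-fidelity calculation gives e_low
# for every candidate; only a small subset has expensive high-fidelity labels.
rng = np.random.default_rng(0)
n, d = 500, 8
X = rng.normal(size=(n, d))          # molecular descriptors (stand-in)
e_low = X @ rng.normal(size=d)       # stand-in for, e.g., a DFT-level output
w_corr = 0.05 * rng.normal(size=d)
e_high = e_low + X @ w_corr          # high fidelity = low fidelity + correction

# Delta-ML: learn only the correction (e_high - e_low), using far fewer
# high-fidelity training points than a direct model of e_high would need.
idx = rng.choice(n, size=40, replace=False)
coef, *_ = np.linalg.lstsq(X[idx], e_high[idx] - e_low[idx], rcond=None)

# Prediction for all candidates: cheap low-fidelity result + learned correction.
e_pred = e_low + X @ coef
```

Because only the (typically smoother, smaller-magnitude) difference is modelled, the labelled high-fidelity set can be small; here 40 labelled points suffice to correct all 500 candidates.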
Procedia PDF Downloads 39
137 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens
Authors: Miranda E. Karban
Abstract:
A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. 
A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was already present in the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun
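The two-block partial least squares (2B-PLS) analysis this abstract uses to test covariation between shape blocks can be sketched as a singular value decomposition of the cross-covariance matrix between the two blocks. The data below are synthetic stand-ins, not the study's cephalogram landmarks.

```python
import numpy as np

# Two blocks of shape variables for the same subjects (hypothetical data):
# block X stands in for occipital squama shape, block Y for vault shape.
rng = np.random.default_rng(1)
n = 26                                    # subjects, as in the study's subsample
X = rng.normal(size=(n, 10))
Y = 0.5 * X[:, :1] + 0.1 * rng.normal(size=(n, 6))  # built to covary with X

# 2B-PLS: centre each block, then SVD the cross-covariance matrix.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
R = Xc.T @ Yc / (n - 1)                   # cross-covariance between blocks
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# The first pair of singular axes gives the directions of maximal covariation;
# correlating the subjects' scores on them quantifies block association.
scores_x = Xc @ U[:, 0]
scores_y = Yc @ Vt[0]
rv = np.corrcoef(scores_x, scores_y)[0, 1]
```

In practice the score correlation (or the singular values themselves) would be tested against a permutation distribution to assess significance, which is how non-significant covariation patterns, such as those the study reports for basicranial angle and facial prognathism, are distinguished.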
Procedia PDF Downloads 201